Conference Proceedings

CCS ’19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security

SESSION: Session 1A: Attack I

1 Trillion Dollar Refund: How To Spoof PDF Signatures

  • Vladislav Mladenov
  • Christian Mainka
  • Karsten Meyer zu Selhausen
  • Martin Grothe
  • Jörg Schwenk

The Portable Document Format (PDF) is the de-facto standard for document exchange worldwide. To guarantee the authenticity and integrity of documents, digital signatures are used. Several public and private services, ranging from governments and public enterprises to banks and payment services, rely on the security of PDF signatures.

In this paper, we present the first comprehensive security evaluation on digital signatures in PDFs. We introduce three novel attack classes which bypass the cryptographic protection of digitally signed PDF files allowing an attacker to spoof the content of a signed PDF. We analyzed 22 different PDF viewers and found 21 of them to be vulnerable, including prominent and widely used applications such as Adobe Reader DC and Foxit. We additionally evaluated eight online validation services and found six to be vulnerable. A possible explanation for these results could be the absence of a standard algorithm to verify PDF signatures — each client verifies signatures differently, and attacks can be tailored to these differences. We, therefore, propose the standardization of a secure verification algorithm, which we describe in this paper.

All findings have been responsibly disclosed, and the affected vendors were supported in fixing the issues. As a result, three generic CVEs, one for each attack class, were issued [50-52]. Our research on PDF signatures, along with further information, is available online at https://www.pdf-insecurity.org/.
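
To make one class of checks concrete, the following sketch is a minimal heuristic (not the standardized verification algorithm the paper proposes) for flagging incremental-saving style manipulation: the signature's /ByteRange should cover the whole file except the signature placeholder, and nothing should be appended after the signed bytes. The helper name and regexes are illustrative.

```python
import re

def check_byterange(pdf_bytes: bytes):
    """Heuristic only: flag signed PDFs whose /ByteRange leaves unsigned gaps."""
    issues = []
    m = re.search(rb"/ByteRange\s*\[\s*(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s*\]", pdf_bytes)
    if not m:
        return ["no /ByteRange found"]
    a, b, c, d = map(int, m.groups())
    if a != 0:
        issues.append("signed range does not start at byte 0")
    if c + d != len(pdf_bytes):
        issues.append("unsigned bytes appended after the signed range (incremental update?)")
    hole = pdf_bytes[a + b:c]
    if not re.fullmatch(rb"\s*(/Contents\s*)?<[0-9A-Fa-f]*>\s*", hole):
        issues.append("gap between signed ranges is not just the signature placeholder")
    return issues

# An empty result is necessary (but not sufficient) for a well-formed signed PDF.
```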

Practical Decryption exFiltration: Breaking PDF Encryption

  • Jens Müller
  • Fabian Ising
  • Vladislav Mladenov
  • Christian Mainka
  • Sebastian Schinzel
  • Jörg Schwenk

The Portable Document Format, better known as PDF, is one of the most widely used document formats worldwide, and in order to ensure information confidentiality, this file format supports document encryption. In this paper, we analyze PDF encryption and show two novel techniques for breaking the confidentiality of encrypted documents. First, we abuse the PDF feature of partially encrypted documents to wrap the encrypted part of the document within attacker-controlled content and thereby exfiltrate the plaintext once the document is opened by a legitimate user. Second, we abuse a flaw in the PDF encryption specification to arbitrarily manipulate encrypted content. The only requirement is a single block of known plaintext, which we show is fulfilled by design. Our attacks allow the recovery of the entire plaintext of encrypted documents by using exfiltration channels which are based on standard-compliant PDF properties. We evaluated our attacks on 27 widely used PDF viewers and found all of them to be vulnerable. We responsibly disclosed the vulnerabilities and supported the vendors in fixing the issues.
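
The second technique rests on classic CBC malleability. Below is a minimal sketch of that generic property (using pycryptodome, with made-up plaintext and target bytes rather than the paper's concrete gadget construction): one known 16-byte plaintext block and its ciphertext block can be paired with a crafted IV so that the pair decrypts to attacker-chosen bytes, without knowledge of the key.

```python
import os
from Crypto.Cipher import AES  # pip install pycryptodome

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)               # the victim's encryption key (unknown to the attacker)
iv = os.urandom(16)
known_plain = b"/Type /Catalog  "  # a 16-byte block whose plaintext is known by design
ct_block = AES.new(key, AES.MODE_CBC, iv).encrypt(known_plain)

# CBC gadget: craft an IV so the original ciphertext block decrypts to chosen bytes.
chosen = b"(http://evil/?x="       # 16 attacker-chosen bytes
crafted_iv = xor(xor(iv, known_plain), chosen)

recovered = AES.new(key, AES.MODE_CBC, crafted_iv).decrypt(ct_block)
assert recovered == chosen         # decrypts to attacker-controlled content
```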

SESSION: Session 1B: Cryptographic Primitives

Omniring: Scaling Private Payments Without Trusted Setup

  • Russell W. F. Lai
  • Viktoria Ronge
  • Tim Ruffing
  • Dominique Schröder
  • Sri Aravinda Krishnan Thyagarajan
  • Jiafan Wang

Monero is the largest cryptocurrency with built-in cryptographic privacy features. The transactions are authenticated using zero-knowledge spend proofs, which provide a certain level of anonymity by hiding the source accounts from which the funds are sent among a set of other accounts. Due to its similarities to ring signatures, this core cryptographic component is called Ring Confidential Transactions (RingCT). Because of its practical relevance, several works attempt to analyze the security of RingCT. Since RingCT is rather complex, most of them are either informal, miss fundamental functionalities, or introduce undesirable trusted setup assumptions. Regarding efficiency, Monero currently deploys a scheme in which the size of the spend proof is linear in the ring size. This limits the ring size to only a few accounts, which in turn limits the acquired anonymity significantly and facilitates de-anonymization attacks. As a solution to these problems, we present the first rigorous formalization of RingCT as a cryptographic primitive. We then propose a generic construction of RingCT and prove it secure in our formal security model. By instantiating our generic construction with new efficient zero-knowledge proofs, we obtain Omniring, a fully-fledged RingCT scheme in the discrete logarithm setting that provides the highest concrete and asymptotic efficiency as of today. Omniring is the first RingCT scheme which 1) does not require a trusted setup or pairing-friendly elliptic curves, 2) has a proof size logarithmic in the size of the ring, and 3) allows all source accounts in a transaction to share the same ring, thereby enabling a significantly improved level of privacy without sacrificing performance. Our zero-knowledge proofs rely on novel enhancements to the Bulletproofs framework (S&P 2018), which we believe are of independent interest.

WI Is Not Enough: Zero-Knowledge Contingent (Service) Payments Revisited

  • Georg Fuchsbauer

While fair exchange of goods is known to be impossible without assuming a trusted party, smart contracts in cryptocurrencies forgo such parties by assuming trust in the currency system. They allow a seller to sell a digital good, which the buyer will obtain if and only if she pays. Zero-knowledge contingent payments (zkCP) show that, despite the limited expressiveness of its scripting language, this is even possible in Bitcoin by using zero-knowledge proofs. At CCS’17, Campanelli, Gennaro, Goldfeder and Nizzardo showed that the zkCP protocol was flawed, in that the buyer could obtain information about the good without paying. They proposed countermeasures to repair zkCP and moreover observed that zkCP cannot be used when a service is sold. They introduce the notion of ZK contingent payments for services and give an instantiation based on a witness-indistinguishable (WI) proof system. We show that some of their proposed countermeasures are not sufficient by presenting an attack against their fixed zkCP scheme. We also show that their realization of zkCP for services is insecure, as the buyer could learn the desired information (i.e., whether the service was provided) without paying; in particular, we show that WI of the used proof system is not enough.

SESSION: Session 1C: Cloud Security I

A Machine-Checked Proof of Security for AWS Key Management Service

  • José Bacelar Almeida
  • Manuel Barbosa
  • Gilles Barthe
  • Matthew Campagna
  • Ernie Cohen
  • Benjamin Gregoire
  • Vitor Pereira
  • Bernardo Portela
  • Pierre-Yves Strub
  • Serdar Tasiran

We present a machine-checked proof of security for the domain management protocol of Amazon Web Services’ KMS (Key Management Service), a critical security service used throughout AWS and by AWS customers. Domain management is at the core of AWS KMS; it governs the top-level keys that anchor the security of encryption services at AWS. We show that the protocol securely implements an ideal distributed encryption mechanism under standard cryptographic assumptions. The proof is machine-checked in the EasyCrypt proof assistant and is the largest EasyCrypt development to date.

Mitigating Leakage in Secure Cloud-Hosted Data Structures: Volume-Hiding for Multi-Maps via Hashing

  • Sarvar Patel
  • Giuseppe Persiano
  • Kevin Yeo
  • Moti Yung

Volume leakage has recently been identified as a major threat to the security of cryptographic cloud-based data structures by Kellaris et al. [CCS’16] (see also the attacks in Grubbs et al. [CCS’18] and Lacharité et al. [S&P’18]). In this work, we focus on volume-hiding implementations of encrypted multi-maps as first considered by Kamara and Moataz [Eurocrypt’19]. Encrypted multi-maps consist of outsourcing the storage of a multi-map to an untrusted server, such as a cloud storage system, while maintaining the ability to perform private queries. Volume-hiding encrypted multi-maps ensure that the number of responses (volume) for any query remains hidden from the adversarial server. As a result, volume-hiding schemes can prevent leakage attacks that leverage the adversary’s knowledge of the number of query responses to compromise privacy. We present both conceptual and algorithmic contributions towards volume-hiding encrypted multi-maps. We introduce the first formal definition of volume-hiding leakage functions. In terms of design, we present the first volume-hiding encrypted multi-map dprfMM whose storage and query complexity are both asymptotically optimal. Furthermore, we experimentally show that our construction is practically efficient. Our server storage is smaller than the best previous construction while we improve query complexity by a factor of 10-16x. In addition, we introduce the notion of differentially private volume-hiding leakage functions which strikes a better, tunable balance between privacy and efficiency. To accompany our new notion, we present a differentially private volume-hiding encrypted multi-map dpMM whose query complexity is the volume of the queried key plus an additional logarithmic factor. This is a significant improvement compared to all previous volume-hiding schemes whose query overhead was the maximum volume of any key. In natural settings, our construction improves the average query overhead by a factor of 150-240x over the previous best volume-hiding construction even when considering a small privacy budget of ε=0.2.
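
For intuition about what volume hiding costs naively, the toy scheme below pads every key's value list to the maximum volume so that the server performs the same number of lookups for every query. The PRF labels, padding marker, and unencrypted values are illustrative simplifications; the paper's dprfMM and dpMM constructions exist precisely to avoid this worst-case query overhead.

```python
import hmac, hashlib, os

def prf(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def setup(multimap):
    """Build the server store: every key is padded to the maximum volume."""
    k = os.urandom(32)
    max_vol = max(len(v) for v in multimap.values())
    store = {}
    for key, values in multimap.items():
        padded = values + [b"\x00pad"] * (max_vol - len(values))
        for i, value in enumerate(padded):
            store[prf(k, key + i.to_bytes(4, "big"))] = value  # a real scheme would encrypt value
    return k, max_vol, store

def query(k, max_vol, store, key):
    # The server always resolves exactly max_vol labels, so the true volume stays hidden.
    results = [store[prf(k, key + i.to_bytes(4, "big"))] for i in range(max_vol)]
    return [v for v in results if v != b"\x00pad"]

k, m, server = setup({b"alice": [b"r1", b"r2", b"r3"], b"bob": [b"r4"]})
print(query(k, m, server, b"bob"))  # [b'r4'], yet the server handled 3 lookups
```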

SESSION: Session 1D: Forensics

The Next 700 Policy Miners: A Universal Method for Building Policy Miners

  • Carlos Cotrini
  • Luca Corinzia
  • Thilo Weghorn
  • David Basin

A myriad of access control policy languages have been and continue to be proposed. The design of policy miners for each such language is a challenging task that has required specialized machine learning and combinatorial algorithms. We present an alternative method, universal access control policy mining (Unicorn). We show how this method streamlines the design of policy miners for a wide variety of policy languages including ABAC, RBAC, RBAC with user-attribute constraints, RBAC with spatio-temporal constraints, and an expressive fragment of XACML. For the latter two, there were no known policy miners until now. To design a policy miner using Unicorn, one needs a policy language and a metric quantifying how well a policy fits an assignment of permissions to users. From these, one builds the policy miner as a search algorithm that computes a policy that best fits the given permission assignment. We experimentally evaluate the policy miners built with Unicorn on logs from Amazon and access control matrices from other companies. Despite the genericity of our method, our policy miners are competitive with and sometimes even better than specialized state-of-the-art policy miners. The true positive rates of the policies we mined differ by only 5% from those of the policies mined by the state of the art, and the false positive rates are always below 5%. In the case of ABAC, our miner even outperforms the state of the art.
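
As a toy illustration of that recipe (policy language, fit metric, search), the sketch below mines a single-permission policy over made-up user attributes by exhaustively scoring small candidate conjunctions. The data, the conjunctive language, and the brute-force search are illustrative stand-ins, not Unicorn's actual probabilistic inference.

```python
import itertools

# Hypothetical users, their attributes, and who was observed to hold one permission.
users = {
    "u1": {"dept": "eng", "level": "senior"},
    "u2": {"dept": "eng", "level": "junior"},
    "u3": {"dept": "hr", "level": "senior"},
}
granted = {"u1": True, "u2": True, "u3": False}

def satisfies(policy, attrs):
    # Policy language: a conjunction of attribute=value constraints.
    return all(attrs.get(a) == v for a, v in policy)

def fit(policy):
    # Fit metric: number of users the candidate policy classifies correctly.
    return sum(satisfies(policy, users[u]) == granted[u] for u in users)

atoms = sorted({(a, v) for attrs in users.values() for a, v in attrs.items()})
candidates = [()] + [(x,) for x in atoms] + list(itertools.combinations(atoms, 2))
best = max(candidates, key=fit)
print(best, fit(best))  # (('dept', 'eng'),) 3 -- a perfect fit on this toy data
```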

Towards Continuous Access Control Validation and Forensics

  • Chengcheng Xiang
  • Yudong Wu
  • Bingyu Shen
  • Mingyao Shen
  • Haochen Huang
  • Tianyin Xu
  • Yuanyuan Zhou
  • Cindy Moore
  • Xinxin Jin
  • Tianwei Sheng

Access control is often reported to be “profoundly broken” in real-world practices due to prevalent policy misconfigurations introduced by system administrators (sysadmins). Given the dynamics of resource and data sharing, access control policies need to be continuously updated. Unfortunately, to err is human: sysadmins often make mistakes such as over-granting privileges when changing access control policies. With today’s limited tooling support for continuous validation, such mistakes can stay unnoticed for a long time until eventually being exploited by attackers, causing catastrophic security incidents. We present P-DIFF, a practical tool for monitoring access control behavior to help sysadmins detect unintended access control policy changes early and perform postmortem forensic analysis upon security attacks. P-DIFF continuously monitors access logs and infers access control policies from them. To handle the challenge of policy evolution, we devise a novel time-changing decision tree to effectively represent access control policy changes, coupled with a new learning algorithm to infer the tree from access logs. P-DIFF provides sysadmins with the inferred policies and detected changes to assist with the following two tasks: (1) validating whether the access control changes are intended or not; (2) pinpointing the historical changes responsible for a given security attack. We evaluate P-DIFF with a variety of datasets collected from five real-world systems, including two from industrial companies. P-DIFF can detect 86%-100% of access control policy changes with an average precision of 89%. For forensic analysis, P-DIFF can pinpoint the root-cause change that permits the target access in 85%-98% of the evaluated cases.
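
As a toy version of the change detection P-DIFF performs, the sketch below scans an access log and reports every timestamped flip in the decision observed for the same request signature. The log format and exact-signature matching are made up for illustration; P-DIFF instead learns a time-changing decision tree over request attributes so it can generalize across requests.

```python
from collections import defaultdict

def detect_changes(log):
    """Report (timestamp, old_decision, new_decision) flips per request signature."""
    last, changes = {}, defaultdict(list)
    for ts, signature, decision in sorted(log):
        if signature in last and last[signature] != decision:
            changes[signature].append((ts, last[signature], decision))
        last[signature] = decision
    return dict(changes)

access_log = [
    (1, ("alice", "/payroll"), "deny"),
    (5, ("alice", "/payroll"), "deny"),
    (9, ("alice", "/payroll"), "allow"),  # an over-granted privilege shows up as a flip
]
print(detect_changes(access_log))
# {('alice', '/payroll'): [(9, 'deny', 'allow')]}
```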

SESSION: Session 1E: Privacy I

Watching You Watch: The Tracking Ecosystem of Over-the-Top TV Streaming Devices

  • Hooman Mohajeri Moghaddam
  • Gunes Acar
  • Ben Burgess
  • Arunesh Mathur
  • Danny Yuxing Huang
  • Nick Feamster
  • Edward W. Felten
  • Prateek Mittal
  • Arvind Narayanan

The number of Internet-connected TV devices has grown significantly in recent years, especially Over-the-Top (“OTT”) streaming devices, such as Roku TV and Amazon Fire TV. OTT devices offer an alternative to multi-channel television subscription services, and are often monetized through behavioral advertising. To shed light on the privacy practices of such platforms, we developed a system that can automatically download OTT apps (also known as channels), and interact with them while intercepting the network traffic and performing best-effort TLS interception. We used this smart crawler to visit more than 2,000 channels on two popular OTT platforms, namely Roku and Amazon Fire TV. Our results show that tracking is pervasive on both OTT platforms, with traffic to known trackers present on 69% of Roku channels and 89% of Amazon Fire TV channels. We also discover the widespread practice of collecting and transmitting unique identifiers, such as device IDs, serial numbers, WiFi MAC addresses and SSIDs, at times over unencrypted connections. Finally, we show that the countermeasures available on these devices, such as limiting ad tracking options and adblocking, are practically ineffective. Based on our findings, we make recommendations for researchers, regulators, policy makers, and platform/app developers.

Oh, the Places You’ve Been! User Reactions to Longitudinal Transparency About Third-Party Web Tracking and Inferencing

  • Ben Weinshel
  • Miranda Wei
  • Mainack Mondal
  • Euirim Choi
  • Shawn Shan
  • Claire Dolin
  • Michelle L. Mazurek
  • Blase Ur

Internet companies track users’ online activity to make inferences about their interests, which are then used to target ads and personalize their web experience. Prior work has shown that existing privacy-protective tools give users only a limited understanding and incomplete picture of online tracking. We present Tracking Transparency, a privacy-preserving browser extension that visualizes examples of long-term, longitudinal information that third-party trackers could have inferred from users’ browsing. The extension uses a client-side topic modeling algorithm to categorize pages that users visit and combines this with data about the web trackers encountered over time to create these visualizations. We conduct a longitudinal field study in which 425 participants use one of six variants of our extension for a week. We find that, after using the extension, participants have more accurate perceptions of the extent of tracking and also intend to take privacy-protecting actions.

SESSION: Session 2A: Side Channels I

Page Cache Attacks

  • Daniel Gruss
  • Erik Kraft
  • Trishita Tiwari
  • Michael Schwarz
  • Ari Trachtenberg
  • Jason Hennessey
  • Alex Ionescu
  • Anders Fogh

We present a new side-channel attack that targets one of the most fundamental software caches in modern computer systems: the operating system page cache. The page cache is a pure software cache that contains all disk-backed pages, including program binaries, shared libraries, and other files. On Windows, dynamic pages are also part of this cache and can be attacked as well, e.g., data, heap, and stacks. Our side channel permits unprivileged monitoring of accesses to these pages of other processes, with a spatial resolution of 4kB and a temporal resolution of 2µs on Linux (≤6.7 measurements per second), and 466ns on Windows 10 (≤223 measurements per second). We systematically analyze the side channel by demonstrating different hardware-agnostic local attacks, including a sandbox-bypassing high-speed covert channel, an ASLR break on Windows 10, and various information leakages that can be used for targeted extortion, spam campaigns, and more directly for UI redressing attacks. We also show that, as with hardware cache attacks, we can attack the generation of temporary passwords on vulnerable cryptographic implementations. Our hardware-agnostic attacks can be mitigated with our proposed security patches, but the basic side channel remains exploitable via timing measurements. We demonstrate this with a remote covert channel exfiltrating information from a colluding process through innocuous server requests.
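
A minimal Linux sketch of the probing primitive this class of attack builds on: map a shared file and ask the kernel, via mincore, which of its pages are currently in the page cache. The example library path is just an illustration, and the real attack tooling, the eviction strategy, and the Windows variant differ.

```python
import ctypes, ctypes.util, mmap, os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)  # Linux only
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
PAGE = os.sysconf("SC_PAGE_SIZE")

def cached_pages(path):
    """Return indices of the file's pages currently resident in the OS page cache."""
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    addr = libc.mmap(None, size, mmap.PROT_READ, mmap.MAP_PRIVATE, fd, 0)
    vec = (ctypes.c_ubyte * ((size + PAGE - 1) // PAGE))()
    if libc.mincore(ctypes.c_void_p(addr), ctypes.c_size_t(size), vec) != 0:
        raise OSError(ctypes.get_errno(), "mincore failed")
    libc.munmap(ctypes.c_void_p(addr), ctypes.c_size_t(size))
    os.close(fd)
    return [i for i, byte in enumerate(vec) if byte & 1]

# Repeatedly probing a shared library (example path) reveals when other processes touch it.
print(cached_pages("/usr/lib/x86_64-linux-gnu/libcrypto.so.3")[:10])
```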

Hardware-Backed Heist: Extracting ECDSA Keys from Qualcomm’s TrustZone

  • Keegan Ryan

Trusted Execution Environments (TEEs) such as ARM TrustZone are in widespread use in both mobile and embedded devices, and they are used to protect sensitive secrets while often sharing the same computational hardware as untrusted code. Although there has been limited research in the area, the threat of microarchitectural attacks against ARM TrustZone has not been thoroughly studied. This is not the case for other TEEs, such as Intel SGX, where the security promises of the TEE have been violated numerous times by the academic community, showing that it is possible to use side-channel attacks to gain detailed insight into the microarchitectural behavior of trusted code. In this work, we show that TrustZone is susceptible to similar attacks, and we demonstrate the ability to achieve cache attacks with high temporal precision, high spatial precision, and low noise. These tools make it easy to monitor the data flow and code flow of TrustZone code with great resolution, and we apply our techniques to investigate the security of a real-world application. We examine ECDSA signing in Qualcomm’s implementation of Android’s hardware-backed keystore and identify a series of vulnerabilities that leak sensitive cryptographic information through shared microarchitectural structures. By using the powerful attacks developed in this paper, we are able to successfully extract this sensitive information and fully recover a 256-bit private key from Qualcomm’s version of the hardware-backed keystore.

VoltJockey: Breaching TrustZone by Software-Controlled Voltage Manipulation over Multi-core Frequencies

  • Pengfei Qiu
  • Dongsheng Wang
  • Yongqiang Lyu
  • Gang Qu

ARM TrustZone builds a trusted execution environment based on the concept of hardware separation. It has been quite successful in defending against various software attacks and forcing attackers to explore vulnerabilities in interface designs and side channels. The recently reported CLKscrew attack breaks TrustZone through software by overclocking CPU to generate hardware faults. However, overclocking makes the processor run at a very high frequency, which is relatively easy to detect and prevent, for example by hardware frequency locking. In this paper, we propose an innovative software-controlled hardware fault-based attack, VoltJockey, on multi-core processors that adopt dynamic voltage and frequency scaling (DVFS) techniques for energy efficiency. Unlike CLKscrew, we manipulate the voltages rather than the frequencies via DVFS unit to generate hardware faults on the victim cores, which makes VoltJockey stealthier and harder to prevent than CLKscrew. We deliberately control the fault generation to facilitate differential fault analysis to break TrustZone. The entire attack process is based on software without any involvement of hardware. We implement VoltJockey on an ARM-based Krait processor from a commodity Android phone and demonstrate how to reveal the AES key from TrustZone and how to breach the RSA-based TrustZone authentication. These results suggest that VoltJockey has a comparable efficiency to side channels in obtaining TrustZone-guarded credentials, as well as the potential of bypassing the RSA-based verification to load untrusted applications into TrustZone. We also discuss both hardware-based and software-based countermeasures and their limitations.

Principled Unearthing of TCP Side Channel Vulnerabilities

  • Yue Cao
  • Zhongjie Wang
  • Zhiyun Qian
  • Chengyu Song
  • Srikanth V. Krishnamurthy
  • Paul Yu

Recent work has showcased the presence of subtle TCP side channels in modern operating systems that can be exploited by off-path adversaries to launch pernicious attacks such as hijacking a connection. Unfortunately, most work to date focuses on manually discovering such side channels and subsequently patching them. In this work we ask “Can we develop a principled approach that can lead to the automated discovery of such hard-to-find TCP side-channels?” We identify that the crux of why such side-channels exist is the violation of the non-interference property between simultaneous TCP connections, i.e., there exist cases wherein a change in state of one connection implicitly leaks some information to a different connection (controlled possibly by an attacker). To find such non-interference property violations, we argue that model-checking is a natural fit. However, because of limitations with regard to its scalability, there exist many challenges in using model checking. Specifically, these challenges relate to (a) making the TCP code base self-contained and amenable to model checking and (b) limiting the search space of model checking and yet achieving reasonable levels of code coverage. We develop a tool that we call SCENT (for Side Channel Excavation Tool) that addresses these challenges in a mostly automated way. At the heart of SCENT is an automated downscaling component that transforms the TCP code base in a consistent way to achieve a reduction in both the state space complexity encountered by the model checker and the number and types of inputs needed for verification. Our extensive evaluations show that SCENT leads to the discovery of 12 new side channel vulnerabilities in the Linux and FreeBSD kernels. In particular, a real-world validation with one class of vulnerabilities shows that an off-path attacker is able to infer whether two arbitrary hosts are communicating with each other, within slightly more than 1 minute, on average.

SESSION: Session 2B: ML Security I

Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment

  • Ziqi Yang
  • Jiyi Zhang
  • Ee-Chien Chang
  • Zhenkai Liang

The wide application of deep learning techniques has raised new security concerns about the training data and test data. In this work, we investigate the model inversion problem under adversarial settings, where the adversary aims at inferring information about the target model’s training data and test data from the model’s prediction values. We develop a solution to train a second neural network that acts as the inverse of the target model to perform the inversion. The inversion model can be trained with black-box accesses to the target model. We propose two main techniques towards training the inversion model in the adversarial settings. First, we leverage the adversary’s background knowledge to compose an auxiliary set to train the inversion model, which does not require access to the original training data. Second, we design a truncation-based technique to align the inversion model to enable effective inversion of the target model from partial predictions that the adversary obtains on a victim user’s data. We systematically evaluate our approach in various machine learning tasks and model architectures on multiple image datasets. We also confirm our results on Amazon Rekognition, a commercial prediction API that offers “machine learning as a service”. We show that even with partial knowledge about the black-box model’s training data, and with only partial prediction values, our inversion approach is still able to perform accurate inversion of the target model and to outperform previous approaches.
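
A compact sketch of this pipeline follows, with a made-up architecture, class count, image size, and target model (the paper's networks and training details are more elaborate): truncate the black-box confidence vector to its top-k entries and renormalize, then train a second network to reconstruct the input from that truncated vector.

```python
import torch
import torch.nn as nn

def truncate(conf, k=3):
    # Keep only the top-k confidences and renormalize, mirroring the alignment
    # step for partial predictions.
    vals, idx = conf.topk(k, dim=1)
    out = torch.zeros_like(conf).scatter_(1, idx, vals)
    return out / out.sum(dim=1, keepdim=True)

# Hypothetical inversion model: 10-class confidence vector -> 28x28 image.
inversion = nn.Sequential(nn.Linear(10, 256), nn.ReLU(),
                          nn.Linear(256, 28 * 28), nn.Sigmoid())
opt = torch.optim.Adam(inversion.parameters(), lr=1e-3)

def train_step(aux_images, target_model):
    # target_model is queried as a black box on auxiliary data the adversary owns.
    conf = truncate(target_model(aux_images.flatten(1)).softmax(dim=1)).detach()
    recon = inversion(conf).view_as(aux_images)
    loss = ((recon - aux_images) ** 2).mean()  # reconstruct the auxiliary input
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```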

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

  • Liwei Song
  • Reza Shokri
  • Prateek Mittal

The arms race between attacks and defenses for machine learning models has come to the forefront in recent years, in both the security community and the privacy community. However, one big limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards resolving this limitation by combining the two domains. In particular, we measure the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples (i.e., evasion attacks). Membership inference attacks determine whether or not an individual data record has been part of a model’s training set. The accuracy of such attacks reflects the information leakage of training algorithms about individual members of the training set. Defense methods against adversarial examples influence the model’s decision boundaries such that model predictions remain unchanged for a small area around each input. However, this objective is optimized on training data. Thus, individual data records in the training set have a significant influence on robust models. This makes the models more vulnerable to inference attacks. To perform the membership inference attacks, we leverage the existing inference methods that exploit model predictions. We also propose two new inference methods that exploit structural properties of robust models on adversarially perturbed data. Our experimental evaluation demonstrates that compared with the natural training (undefended) approach, adversarial defense methods can indeed increase the target model’s risk against membership inference attacks. When using adversarial defenses to train the robust models, the membership inference advantage increases by up to 4.5 times compared to the naturally undefended models. Beyond revealing the privacy risks of adversarial defenses, we further investigate the factors, such as model capacity, that influence the membership information leakage.

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

  • Jinyuan Jia
  • Ahmed Salem
  • Michael Backes
  • Yang Zhang
  • Neil Zhenqiang Gong

In a membership inference attack, an attacker aims to infer whether a data sample is in a target classifier’s training dataset or not. Specifically, given black-box access to the target classifier, the attacker trains a binary classifier, which takes a data sample’s confidence score vector predicted by the target classifier as an input and predicts the data sample to be a member or non-member of the target classifier’s training dataset. Membership inference attacks pose severe privacy and security threats to the training dataset. Most existing defenses leverage differential privacy when training the target classifier or regularize the training process of the target classifier. These defenses suffer from two key limitations: 1) they do not have formal utility-loss guarantees of the confidence score vectors, and 2) they achieve suboptimal privacy-utility tradeoffs. In this work, we propose MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks. Instead of tampering with the training process of the target classifier, MemGuard adds noise to each confidence score vector predicted by the target classifier. Our key observation is that the attacker uses a classifier to predict member or non-member, and this classifier is vulnerable to adversarial examples. Based on this observation, we propose to add a carefully crafted noise vector to a confidence score vector to turn it into an adversarial example that misleads the attacker’s classifier. Specifically, MemGuard works in two phases. In Phase I, MemGuard finds a carefully crafted noise vector that can turn a confidence score vector into an adversarial example, which is likely to mislead the attacker’s classifier into making a random guess at member or non-member. We find such a carefully crafted noise vector via a new method that we design to incorporate the unique utility-loss constraints on the noise vector. In Phase II, MemGuard adds the noise vector to the confidence score vector with a certain probability, which is selected to satisfy a given utility-loss budget on the confidence score vector. Our experimental results on three datasets show that MemGuard can effectively defend against membership inference attacks and achieve better privacy-utility tradeoffs than existing defenses. Our work is the first one to show that adversarial examples can be used as defensive mechanisms to defend against membership inference attacks.
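
An illustrative-only take on the Phase I idea follows: search for a noise vector that keeps the predicted label, stays on the probability simplex, and respects a utility-loss budget while pushing a toy membership classifier toward a 50/50 guess. MemGuard itself crafts the noise with a purpose-built optimization and a real attack classifier, not random search; the attacker model and numbers here are made up.

```python
import numpy as np

def craft_noise(conf, attack_score, trials=5000, budget=0.3, seed=0):
    """Pick the candidate that pushes the attack score closest to a 50/50 guess."""
    rng = np.random.default_rng(seed)
    best, best_gap = conf, abs(attack_score(conf) - 0.5)
    for _ in range(trials):
        noisy = np.clip(conf + rng.normal(0, 0.1, size=conf.shape), 1e-6, None)
        noisy /= noisy.sum()                     # stay on the probability simplex
        if noisy.argmax() != conf.argmax():      # utility: predicted label unchanged
            continue
        if np.abs(noisy - conf).sum() > budget:  # utility-loss budget
            continue
        gap = abs(attack_score(noisy) - 0.5)
        if gap < best_gap:
            best, best_gap = noisy, gap
    return best

conf = np.array([0.85, 0.10, 0.05])
toy_attack = lambda v: float(v.max())  # toy attacker: high max-confidence suggests "member"
print(craft_noise(conf, toy_attack))   # label preserved, attack score pushed toward 0.5
```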

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

  • Kenneth T. Co
  • Luis Muñoz-González
  • Sixte de Maupeou
  • Emil C. Lupu

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement over existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight into the nature of some universal adversarial perturbations and how they could be generated in other applications.
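
As a rough sketch of the recipe (a handful of noise parameters yield an input-agnostic perturbation applied under an L-infinity budget), the code below uses simple bilinear value noise as a stand-in for the Perlin and Gabor noise functions used in the paper; the grid size, seed, epsilon, and HxWx3 image assumption are all illustrative.

```python
import numpy as np

def value_noise(h, w, grid=8, seed=0):
    """Smooth procedural noise in [-1, 1] by bilinear interpolation of a random lattice."""
    rng = np.random.default_rng(seed)
    lattice = rng.uniform(-1, 1, (grid + 1, grid + 1))
    ys = np.linspace(0, grid, h, endpoint=False)
    xs = np.linspace(0, grid, w, endpoint=False)
    yi, xi = np.floor(ys).astype(int), np.floor(xs).astype(int)
    ty, tx = (ys - yi)[:, None], (xs - xi)[None, :]
    top = lattice[yi][:, xi] * (1 - tx) + lattice[yi][:, xi + 1] * tx
    bottom = lattice[yi + 1][:, xi] * (1 - tx) + lattice[yi + 1][:, xi + 1] * tx
    return top * (1 - ty) + bottom * ty

def perturb(image, eps=8 / 255, grid=8, seed=0):
    """Add the same input-agnostic, L_inf-bounded pattern to any HxWx3 float image in [0, 1]."""
    uap = value_noise(image.shape[0], image.shape[1], grid, seed) * eps
    return np.clip(image + uap[..., None], 0.0, 1.0)

# The (grid, seed, eps) triple plays the role of the few procedural-noise parameters
# that would be tuned (e.g., by Bayesian optimization) against a target model.
```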

SESSION: Session 2C: Secure Computing I

Efficient Two-Round OT Extension and Silent Non-Interactive Secure Computation

  • Elette Boyle
  • Geoffroy Couteau
  • Niv Gilboa
  • Yuval Ishai
  • Lisa Kohl
  • Peter Rindal
  • Peter Scholl

We consider the problem of securely generating useful instances of two-party correlations, such as many independent copies of a random oblivious transfer (OT) correlation, using a small amount of communication. This problem is motivated by the goal of secure computation with silent preprocessing, where a low-communication input-independent setup, followed by local (“silent”) computation, enables a lightweight “non-cryptographic” online phase once the inputs are known. Recent works of Boyle et al. (CCS 2018, Crypto 2019) achieve this goal with good concrete efficiency for useful kinds of two-party correlations, including OT correlations, under different variants of the Learning Parity with Noise (LPN) assumption, and using a small number of “base” oblivious transfers. The protocols of Boyle et al. have several limitations. First, they require a large number of communication rounds. Second, they are only secure against semi-honest parties. Finally, their concrete efficiency estimates are not backed by an actual implementation. In this work we address these limitations, making three main contributions: Eliminating interaction. Under the same assumption, we obtain the first concretely efficient 2-round protocols for generating useful correlations, including OT correlations, in the semi-honest security model. This implies the first efficient 2-round OT extension protocol of any kind and, more generally, protocols for non-interactive secure computation (NISC) that are concretely efficient and have the silent preprocessing feature. Malicious security. We provide security against malicious parties without additional interaction and with only a modest overhead; prior to our work, no similar protocols were known with any number of rounds. Implementation. Finally, we implemented, optimized, and benchmarked our 2-round OT extension protocol, demonstrating that it offers a more attractive alternative to the OT extension protocol of Ishai et al. (Crypto 2003) in many realistic settings.

Endemic Oblivious Transfer

  • Daniel Mansy
  • Peter Rindal

Oblivious Transfer has played a crucial role in the design of secure multi-party computation. Nevertheless, there are not many practical solutions that achieve simulation-based security and are at the same time instantiable from different assumptions. In this work, we consider a simulation-based security notion that we call endemic security. We show how to construct highly efficient oblivious transfer in the random oracle model that achieves endemic security under a wide range of assumptions, among them DDH, CDH, LWE and coding-based assumptions. We construct a secure oblivious transfer based on DDH that takes only a single communication round, which allows significant performance gains. We also instantiate our oblivious transfer with the CRYSTALS-Kyber key agreement. Our implementation shows that both instantiations can be computed in under one millisecond. Further, we revisit, correct and improve existing oblivious transfer extension techniques. We provide an implementation of an oblivious transfer extension protocol in the ideal cipher model that is actively secure, processing up to 23 million OTs per second and up to 10 times faster than previous secure implementations. We also show that our framework can compute endemically secure OT extension and the base OTs in just two rounds.

LevioSA: Lightweight Secure Arithmetic Computation

  • Carmit Hazay
  • Yuval Ishai
  • Antonio Marcedone
  • Muthuramakrishnan Venkitasubramaniam

We study the problem of secure two-party computation of arithmetic circuits in the presence of active (“malicious”) parties. This problem is motivated by privacy-preserving numerical computations, such as ones arising in the context of machine learning training and classification, as well as in threshold cryptographic schemes. In this work, we design, optimize, and implement an actively secure protocol for secure two-party arithmetic computation. A distinctive feature of our protocol is that it can make a fully modular black-box use of any passively secure implementation of oblivious linear function evaluation (OLE). OLE is a commonly used primitive for secure arithmetic computation, analogously to the role of oblivious transfer in secure computation for Boolean circuits. For typical (large but not-too-narrow) circuits, our protocol requires roughly 4 invocations of passively secure OLE per multiplication gate. This significantly improves over the recent TinyOLE protocol (Döttling et al., ACM CCS 2017), which requires 22 invocations of actively secure OLE in general, or 44 invocations of a specific code-based passively secure OLE. Our protocol follows the high-level approach of the IPS compiler (Ishai et al., CRYPTO 2008, TCC 2009), optimizing it in several ways. In particular, we adapt optimization ideas that were used in the context of the practical zero-knowledge argument system Ligero (Ames et al., ACM CCS 2017) to the more general setting of secure computation, and explore the possibility of boosting efficiency by employing a “leaky” passively secure OLE protocol. The latter is motivated by recent (passively secure) lattice-based OLE implementations in which allowing such leakage enables better efficiency. We showcase the performance of our protocol by applying its implementation to several useful instances of secure arithmetic computation. On “wide” circuits, such as ones computing a fixed function on many different inputs, our protocol is 5x faster and transmits 4x less data than the state-of-the-art Overdrive (Keller et al., Eurocrypt 2018). Our benchmarks include a general passive-to-active OLE compiler, authenticated generation of “Beaver triples”, and a system for securely outsourcing neural network classification. The latter is the first actively secure implementation of its kind, strengthening the passive security provided by recent related works (Mohassel and Zhang, IEEE S&P 2017; Juvekar et al., USENIX 2018).

Onion Ring ORAM: Efficient Constant Bandwidth Oblivious RAM from (Leveled) TFHE

  • Hao Chen
  • Ilaria Chillotti
  • Ling Ren

Oblivious RAM (ORAM) is a cryptographic primitive that allows a client to hide its access pattern to data encrypted and stored at a remote server. Traditionally, ORAM algorithms assume the server acts purely as a storage device. Under this assumption, ORAM has at least log(N) bandwidth blowup for N data entries. After three decades of improvements, ORAM algorithms have reached the optimal logarithmic bandwidth blowup. Nonetheless, in many practical use-cases a constant bandwidth overhead is desirable. To this purpose, Devadas et al. (TCC 2016) formalized the server computation model for ORAM and proposed Onion ORAM which relies on homomorphic computation to achieve constant worst-case bandwidth blowup. This line of work is generally believed to be purely theoretical, due to the large overheads of homomorphic computation. In this paper, we present Onion Ring ORAM, the first efficient constant bandwidth ORAM scheme in the single server model, based on the Onion ORAM construction and the leveled version of the TFHE scheme by Chillotti et al. We propose a series of improvements, most notably including a more efficient homomorphic permutation protocol. We implement Onion Ring ORAM and show that it can outperform state-of-the-art logarithmic-bandwidth ORAM schemes like Path ORAM and Ring ORAM when the network throughput is limited. Under one setting, our construction reduces monetary cost per access by 40% and end-to-end latency by 35% over Ring ORAM.

SESSION: Session 2D: Encryption (Searchable, Updatable, Homomorphic, etc.)

Encrypted Databases: New Volume Attacks against Range Queries

  • Zichen Gui
  • Oliver Johnson
  • Bogdan Warinschi

We present a range of novel attacks which exploit information about the volume of answers to range queries in encrypted databases. Our attacks rely on a strategy which is simple yet robust and effective. We illustrate the robustness of our strategy in a number of ways. We show how to i) adapt the attack for several variations of a basic usage scenario, ii) defeat countermeasures intended to thwart the premise of our basic attack, and iii) perform partial reconstruction of secret data when unique reconstruction is information-theoretically impossible. Furthermore, over the state of the art, our attacks require one order of magnitude fewer queries. We show how to improve the attacks even further, under the assumption that some partial information is known to the adversary. We validate all of our attacks experimentally through extensive experiments on real-world medical data and justify theoretically the effectiveness of our strategy for the basic attack scenario. Our new attacks further underscore the difficulty of striking an appropriate functionality-security trade-off for encrypted databases.

Updatable Oblivious Key Management for Storage Systems

  • Stanislaw Jarecki
  • Hugo Krawczyk
  • Jason Resch

We introduce Oblivious Key Management Systems (KMS) as a much more secure alternative to traditional wrapping-based KMS that form the backbone of key management in large-scale data storage deployments. The new system, that builds on Oblivious Pseudorandom Functions (OPRF), hides keys and object identifiers from the KMS, offers unconditional security for key transport, provides key verifiability, reduces storage, and more. Further, we show how to provide all these features in a distributed threshold implementation that enhances protection against server compromise.

We extend this system with updatable encryption capability that supports key updates (known as key rotation) so that upon the periodic change of OPRF keys by the KMS server, a very efficient update procedure allows a client of the KMS service to non-interactively update all its encrypted data to be decryptable only by the new key. This enhances security with forward and post-compromise security, namely, security against future and past compromises, respectively, of the client’s OPRF keys held by the KMS. Additionally, and in contrast to traditional KMS, our solution supports public key encryption and dispenses with any interaction with the KMS for data encryption (only decryption by the client requires such communication).

Our solutions build on recent work on updatable encryption but with significant enhancements applicable to the remote KMS setting. In addition to the critical security improvements, our designs are highly efficient and ready for use in practice. We report on experimental implementation and performance.
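
To make the two core mechanics concrete, here is a toy discrete-log sketch of (i) blinded OPRF evaluation, so the KMS sees neither object identifiers nor the derived wrapping keys, and (ii) non-interactive key rotation via a ratio token. The modulus, hash-to-group encoding, and exponent handling are deliberately simplified stand-ins (no prime-order subgroup or constant-time care), not the paper's instantiation.

```python
import hashlib, math, secrets

P = 2 ** 255 - 19  # toy prime modulus (not a proper prime-order group setup)
Q = P - 1          # toy exponent modulus

def rand_exp():
    while True:
        x = secrets.randbelow(Q - 1) + 1
        if math.gcd(x, Q) == 1:  # keep exponents invertible for blinding/rotation
            return x

def h2g(obj_id: bytes) -> int:
    # Hash an object identifier to a "group element" (simplified encoding).
    return int.from_bytes(hashlib.sha256(obj_id).digest(), "big") % P

# KMS holds the OPRF key; the client never reveals the object id or learns the key.
k_old = rand_exp()

r = rand_exp()
blinded = pow(h2g(b"object-42"), r, P)           # client -> KMS
evaluated = pow(blinded, k_old, P)               # KMS -> client
wrap_key_old = pow(evaluated, pow(r, -1, Q), P)  # client unblinds: H(id)^k

# Key rotation: the KMS hands out delta = k_new / k_old, and the client updates
# its wrapping key without another round trip to the KMS.
k_new = rand_exp()
delta = (k_new * pow(k_old, -1, Q)) % Q
wrap_key_new = pow(wrap_key_old, delta, P)
assert wrap_key_new == pow(h2g(b"object-42"), k_new, P)
```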

Efficient Multi-Key Homomorphic Encryption with Packed Ciphertexts with Application to Oblivious Neural Network Inference

  • Hao Chen
  • Wei Dai
  • Miran Kim
  • Yongsoo Song

Homomorphic Encryption (HE) is a cryptosystem which supports computation on encrypted data. López-Alt et al. (STOC 2012) proposed a generalized notion of HE, called Multi-Key Homomorphic Encryption (MKHE), which is capable of performing arithmetic operations on ciphertexts encrypted under different keys. In this paper, we present multi-key variants of two HE schemes with packed ciphertexts. We present new relinearization algorithms which are simpler and faster than the previous method by Chen et al. (TCC 2017). We then generalize the bootstrapping techniques for HE to obtain multi-key fully homomorphic encryption schemes. We provide a proof-of-concept implementation of both MKHE schemes using Microsoft SEAL. For example, when the dimension of the base ring is 8192, homomorphic multiplication between multi-key BFV (resp. CKKS) ciphertexts associated with four parties followed by a relinearization takes about 116 (resp. 67) milliseconds. Our MKHE schemes have a wide range of applications in secure computation between multiple data providers. As a benchmark, we homomorphically classify an image using a pre-trained neural network model, where input data and model are encrypted under different keys. Our implementation takes about 1.8 seconds to evaluate one convolutional layer followed by two fully connected layers on an encrypted image from the MNIST dataset.

Traceback for End-to-End Encrypted Messaging

  • Nirvan Tyagi
  • Ian Miers
  • Thomas Ristenpart

Messaging systems are used to spread misinformation and other malicious content, often with dire consequences. End-to-end encryption improves privacy but hinders content-based moderation and, in particular, obfuscates the original source of malicious content. We introduce the idea of message traceback, a new cryptographic approach that enables platforms to simultaneously provide end-to-end encryption while also being able to track down the source of malicious content reported by users. We formalize functionality and security goals for message traceback, and detail two constructions that allow revealing a chain of forwarded messages (path traceback) or the entire forwarding tree (tree traceback). We implement and evaluate prototypes of our traceback schemes to highlight their practicality, and provide a discussion of deployment considerations.

SESSION: Session 2E: Internet Security

SICO: Surgical Interception Attacks by Manipulating BGP Communities

  • Henry Birge-Lee
  • Liang Wang
  • Jennifer Rexford
  • Prateek Mittal

The Border Gateway Protocol (BGP) is the primary routing protocol for the Internet backbone, yet it lacks adequate security mechanisms. While simple BGP hijack attacks only involve an adversary hijacking Internet traffic destined to a victim, more complex and challenging interception attacks require that the adversary intercept a victim’s traffic and forward it on to the victim. If an interception attack is launched incorrectly, the adversary’s attack will disrupt its route to the victim, making it impossible to forward packets. To overcome these challenges, we introduce SICO attacks (Surgical Interception using COmmunities): a novel method of launching interception attacks that leverages BGP communities to scope an adversary’s attack and ensure a route to the victim. We then show how SICO attacks can be targeted to specific source IP addresses for reducing attack costs. Furthermore, we ethically perform SICO attacks on the real Internet backbone to evaluate their feasibility and effectiveness. Results suggest that SICO attacks can achieve interception even when previously proposed attacks would not be feasible and outperform them by attracting traffic from an additional 16% of Internet hosts (worst case) and 58% of Internet hosts (best case). Finally, we analyze the Internet topology to find that at least 83% of multi-homed ASes are capable of launching these attacks.

Just the Tip of the Iceberg: Internet-Scale Exploitation of Routers for Cryptojacking

  • Hugo L. J. Bijmans
  • Tim M. Booij
  • Christian Doerr

Efficient browser-based cryptominers, as introduced by Coinhive in 2017, have quickly spread throughout the web, either as a new source of revenue for websites or exploited within the context of hacks and malicious advertisements. Several studies have analyzed the Alexa Top 1M and found 380 – 3,200 websites (0.038% – 0.32%) to be actively mining, with an estimated $41,000 per month revenue for the top 10 perpetrators. While placing a cryptominer on a popular website supplies considerable returns from its visitors’ web browsers, it only generates revenue while a client is visiting the page. Even though large popular websites attract millions of visitors, the relatively low number of exploiting websites limits the total revenue that can be made. In this paper, we report on a new attack vector that drastically overshadows all existing cryptojacking activity discovered to date. Through a firmware vulnerability in MikroTik routers, cyber criminals are able to rewrite outgoing user traffic and embed cryptomining code in every outgoing web connection. Thus, every web page visited by any user behind an infected router would mine for the criminals’ profit. Based on NetFlows recorded in a Tier 1 network, semiweekly crawls and telescope traffic, we followed their activities over a period of 10 months, and report on the modus operandi and coordinating infrastructure of the perpetrators, who during this period controlled up to 1.4M routers, approximately 70% of all MikroTik devices deployed worldwide. We observed different levels of sophistication among adversaries, ranging from individual installations to campaigns involving large numbers of routers. Our results show that cryptojacking through MITM attacks is highly lucrative, by a factor of 30 over previous attack vectors.

Network Hygiene, Incentives, and Regulation: Deployment of Source Address Validation in the Internet

  • Matthew Luckie
  • Robert Beverly
  • Ryan Koga
  • Ken Keys
  • Joshua A. Kroll
  • k claffy

The Spoofer project has collected data on the deployment and characteristics of IP source address validation on the Internet since 2005. Data from the project comes from participants who install an active probing client that runs in the background. The client automatically runs tests both periodically and when it detects a new network attachment point. We analyze the rich dataset of Spoofer tests in multiple dimensions: across time, networks, autonomous systems, countries, and by Internet protocol version. In our data for the year ending August 2019, at least a quarter of tested ASes did not filter packets with spoofed source addresses leaving their networks. We show that routers performing Network Address Translation do not always filter spoofed packets, as 6.4% of IPv4/24 prefixes tested in the year ending August 2019 did not filter. Worse, at least two thirds of tested ASes did not filter packets entering their networks with source addresses claiming to be from within their network that arrived from outside their network. We explore several approaches to encouraging remediation and the challenges of evaluating their impact. While we have been able to remediate 352 IPv4/24 prefixes, we have found an order of magnitude more IPv4/24 prefixes that remain unremediated, despite myriad remediation strategies, with 21% unremediated for more than six months. Our analysis provides the most complete and confident picture to date of the Internet’s susceptibility to this long-standing vulnerability. Although there is no simple solution to address the remaining long tail of unremediated networks, we conclude with a discussion of possible non-technical interventions, and demonstrate how the platform can support evaluation of the impact of such interventions over time.

Security Certification in Payment Card Industry: Testbeds, Measurements, and Recommendations

  • Sazzadur Rahaman
  • Gang Wang
  • Danfeng (Daphne) Yao

The massive payment card industry (PCI) involves various entities such as merchants, issuer banks, acquirer banks, and card brands. Ensuring security for all entities that process payment card information is a challenging task. The PCI Security Standards Council requires all entities to be compliant with the PCI Data Security Standard (DSS), which specifies a series of security requirements. However, little is known regarding how well PCI DSS is enforced in practice. In this paper, we take a measurement approach to systematically evaluate the PCI DSS certification process for e-commerce websites. We develop an e-commerce web application testbed, BuggyCart, which can flexibly add or remove 35 PCI DSS related vulnerabilities. Then we use the testbed to examine the capability and limitations of PCI scanners and the rigor of the certification process. We find that there is an alarming gap between the security standard and its real-world enforcement. None of the 6 PCI scanners we tested are fully compliant with the PCI scanning guidelines, issuing certificates to merchants that still have major vulnerabilities. To further examine the compliance status of real-world e-commerce websites, we build a new lightweight scanning tool named PciCheckerLite and scan 1,203 e-commerce websites across various business sectors. The results confirm that 86% of the websites have at least one PCI DSS violation that should have disqualified them as non-compliant. Our in-depth accuracy analysis also shows that PciCheckerLite’s output is more precise than w3af. We reached out to the PCI Security Council to share our research results to improve the enforcement in practice.

SESSION: Session 3A: Fuzzing: Methods and Applications

Matryoshka: Fuzzing Deeply Nested Branches

  • Peng Chen
  • Jianzhong Liu
  • Hao Chen

Greybox fuzzing has made impressive progress in recent years, evolving from heuristics-based random mutation to approaches for solving individual branch constraints. However, these approaches have difficulty solving path constraints that involve deeply nested conditional statements, which are common in image and video decoders, network packet analyzers, and checksum tools. We propose an approach for addressing this problem. First, we identify all the control flow-dependent conditional statements of the target conditional statement. Next, we select the taint flow-dependent conditional statements. Finally, we use three strategies to find an input that satisfies all conditional statements simultaneously. We implemented this approach in a tool called Matryoshka and compared its effectiveness on 13 open source programs against other state-of-the-art fuzzers. Matryoshka has significantly higher cumulative line and branch coverage than AFL, QSYM, and Angora. We manually classified the crashes found by Matryoshka into 41 unique new bugs and obtained 12 CVEs. Our evaluation also uncovered the key technique contributing to Matryoshka’s impressive performance: it collects only the nesting constraints that may make the target conditional statement unreachable, which greatly simplifies the path constraints that it has to solve.
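
The made-up parser below illustrates the shape of the problem (it is not taken from the paper's benchmarks): the bug hides behind nested, input-dependent checks, so a mutational fuzzer must satisfy the outer magic-bytes and range constraints before the inner checksum constraint even becomes reachable, and it is exactly this nesting structure that Matryoshka tracks and solves jointly.

```python
def parse(buf: bytes):
    if buf[:4] == b"IMG1":                                   # magic bytes
        width = int.from_bytes(buf[4:6], "big")
        if 0 < width <= 1024:                                # only reachable after the magic check
            checksum = sum(buf[8:]) & 0xFFFF
            if checksum == int.from_bytes(buf[6:8], "big"):  # deeply nested, taint-dependent
                raise RuntimeError("bug reached")
```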

Intriguer: Field-Level Constraint Solving for Hybrid Fuzzing

  • Mingi Cho
  • Seoyoung Kim
  • Taekyoung Kwon

Hybrid fuzzing, which combines fuzzing and concolic execution, is promising in light of the recent performance improvements in concolic engines. We have observed that there is room for further improvement: symbolic emulation is still slow, unnecessary constraints dominate solving time, resources are overly allocated, and hard-to-trigger bugs are missed. To address these problems, we present a new hybrid fuzzer named Intriguer. The key idea of Intriguer is field-level constraint solving, which optimizes symbolic execution with field-level knowledge. Intriguer performs instruction-level taint analysis and records execution traces without data transfer instructions like mov. Intriguer then reduces the execution traces for tainted instructions that accessed a wide range of input bytes, and infers input fields to build field transition trees. With these optimizations, Intriguer can efficiently perform symbolic emulation for more relevant instructions and invoke a solver for complicated constraints only. Our evaluation results indicate that Intriguer outperforms the state-of-the-art fuzzers: Intriguer found all the bugs in the LAVA-M(5h) benchmark dataset for ground truth performance, and also discovered 43 new security bugs in seven real-world programs. We reported the bugs and received 23 new CVEs.

Learning to Fuzz from Symbolic Execution with Application to Smart Contracts

  • Jingxuan He
  • Mislav Balunović
  • Nodar Ambroladze
  • Petar Tsankov
  • Martin Vechev

Fuzzing and symbolic execution are two complementary techniques for discovering software vulnerabilities. Fuzzing is fast and scalable, but can be ineffective when it fails to randomly select the right inputs. Symbolic execution is thorough but slow and often does not scale to deep program paths with complex path conditions. In this work, we propose to learn an effective and fast fuzzer from symbolic execution, by phrasing the learning task in the framework of imitation learning. During learning, a symbolic execution expert generates a large number of quality inputs improving coverage on thousands of programs. Then, a fuzzing policy, represented with a suitable architecture of neural networks, is trained on the generated dataset. The learned policy can then be used to fuzz new programs. We instantiate our approach to the problem of fuzzing smart contracts, a domain where contracts often implement similar functionality (facilitating learning) and security is of utmost importance. We present an end-to-end system, ILF (for Imitation Learning based Fuzzer), and an extensive evaluation over >18K contracts. Our results show that ILF is effective: (i) it is fast, generating 148 transactions per second, (ii) it outperforms existing fuzzers (e.g., achieving 33% more coverage), and (iii) it detects more vulnerabilities than existing fuzzing and symbolic execution tools for Ethereum.

SESSION: Session 3B: Blockchain I

HyperService: Interoperability and Programmability Across Heterogeneous Blockchains

  • Zhuotao Liu
  • Yangxi Xiang
  • Jian Shi
  • Peng Gao
  • Haoyu Wang
  • Xusheng Xiao
  • Bihan Wen
  • Yih-Chun Hu

Blockchain interoperability, which allows state transitions across different blockchain networks, is critical functionality to facilitate major blockchain adoption. Existing interoperability protocols mostly focus on atomic token exchanges between blockchains. However, as blockchains have been upgraded from passive distributed ledgers into programmable state machines (thanks to smart contracts), the scope of blockchain interoperability goes beyond just token exchanges. In this paper, we present HyperService, the first platform that delivers interoperability and programmability across heterogeneous blockchains. HyperService is powered by two innovative designs: (i) a developer-facing programming framework that allows developers to build cross-chain applications in a unified programming model; and (ii) a secure blockchain-facing cryptography protocol that provably realizes those applications on blockchains. We implement a prototype of HyperService in approximately 35,000 lines of code to demonstrate its practicality. Our experiments show that (i) HyperService imposes reasonable latency, on the order of seconds, on the end-to-end execution of cross-chain applications; and (ii) the HyperService platform is scalable enough to continuously incorporate new large-scale production blockchains.

MatRiCT: Efficient, Scalable and Post-Quantum Blockchain Confidential Transactions Protocol

  • Muhammed F. Esgin
  • Raymond K. Zhao
  • Ron Steinfeld
  • Joseph K. Liu
  • Dongxi Liu

We introduce MatRiCT, an efficient RingCT protocol for blockchain confidential transactions, whose security is based on “post-quantum” (module) lattice assumptions. The proof length of the protocol is around two orders of magnitude shorter than that of the existing post-quantum proposal, and it scales efficiently to large anonymity sets, unlike the existing proposal. Further, we provide the first full implementation of a post-quantum RingCT, demonstrating the practicality of our scheme. In particular, a typical transaction can be generated in a fraction of a second and verified in about 23 ms on a standard PC. Moreover, we show how our scheme can be extended to provide auditability, where a user can select a particular authority from a set of authorities to reveal her identity. The user also has the ability to select no auditing, and all of these auditing options may co-exist in the same environment. The key ingredients of MatRiCT, introduced in this work, are 1) the shortest to date scalable ring signature from standard lattice assumptions with no Gaussian sampling required, 2) a novel balance zero-knowledge proof, and 3) a novel extractable commitment scheme from (module) lattices. We believe these ingredients to be of independent interest for other privacy-preserving applications such as secure e-voting. Despite allowing 64-bit precision for transaction amounts, our new balance proof, and thus our protocol, does not require a range proof over a wide range (such as 32- or 64-bit ranges), which has been a major obstacle against efficient lattice-based solutions. Further, we provide new formal definitions for RingCT-like protocols, where the real-world blockchain setting is captured more closely. The definitions are applicable in a generic setting, and thus are believed to contribute to the development of future confidential transaction protocols in general (not only in the lattice setting).

Prism: Deconstructing the Blockchain to Approach Physical Limits

  • Vivek Bagaria
  • Sreeram Kannan
  • David Tse
  • Giulia Fanti
  • Pramod Viswanath

The concept of a blockchain was invented by Satoshi Nakamoto to maintain a distributed ledger. In addition to its security, important performance measures of a blockchain protocol are its transaction throughput and confirmation latency. In a decentralized setting, these measures are limited by two underlying physical network attributes: communication capacity and speed-of-light propagation delay. In this work we introduce Prism, a new proof-of-work blockchain protocol, which can achieve 1) security against up to 50% adversarial hashing power; 2) optimal throughput up to the capacity C of the network; 3) confirmation latency for honest transactions proportional to the propagation delay D, with confirmation error probability exponentially small in the bandwidth-delay product CD; 4) eventual total ordering of all transactions. Our approach to the design of this protocol is based on deconstructing Nakamoto’s blockchain into its basic functionalities and systematically scaling up these functionalities to approach their physical limits.

SESSION: Session 3C: Secure Computing II

Securely Sampling Biased Coins with Applications to Differential Privacy

  • Jeffrey Champion
  • abhi shelat
  • Jonathan Ullman

We design an efficient method for sampling a large batch of d independent coins with a given bias p ∈ [0,1]. The folklore secure computation method for doing so requires O(λ + log d) communication and computation per coin to achieve total statistical difference 2^-λ. We present an exponential improvement over the folklore method that uses just O(log(λ + log d)) gates per coin when sampling d coins with total statistical difference 2^-λ. We present a variant of our work that also concretely beats the folklore method for λ ≥ 60, which are parameters often used in practice. Our new technique relies on specially designed oblivious data structures to obtain biased coin samples that take an expected 2 random bits each. Using our new sampling technique, we present an implementation of the differentially private report-noisy-max mechanism (a more practical implementation of the celebrated exponential mechanism) as a secure multi-party computation. Our benchmarks show that one can run this mechanism on a domain of size d = 2^12 in 6 seconds and up to d = 2^19 in 14 minutes. As far as we know, this is the first complete distributed implementation of either of these mechanisms.
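
Outside of MPC, the expected-two-bit sampler the abstract alludes to can be seen in the clear as lazily comparing uniform random bits against the binary expansion of p; each compared bit ends the loop with probability 1/2, so two random bits suffice in expectation. The sketch below is only this cleartext analogue, not the paper's oblivious-data-structure construction.

    import random

    def biased_coin(p_bits):
        """Return 1 with probability p = sum(p_bits[i] * 2**-(i + 1)).

        Draws a uniform U in [0,1) bit by bit and reports whether U < p.
        The loop stops at the first disagreeing bit, so on average only
        two random bits are consumed per sample.
        """
        for pb in p_bits:
            ub = random.getrandbits(1)
            if ub != pb:
                return int(ub < pb)  # U's bit is 0 while p's bit is 1, hence U < p
        return 0                     # U's prefix equals p, hence U >= p

    # Example: p = 0.8125 = 0.1101 in binary
    samples = [biased_coin([1, 1, 0, 1]) for _ in range(100_000)]
    print(sum(samples) / len(samples))   # approximately 0.8125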

Stormy: Statistics in Tor by Measuring Securely

  • Ryan Wails
  • Aaron Johnson
  • Daniel Starin
  • Arkady Yerukhimovich
  • S. Dov Gordon

Tor is a tool for Internet privacy with millions of daily users. The Tor system benefits in many ways from information gathered about the operation of its network. Measurements guide operators in diagnosing problems, direct the efforts of developers, educate users about the level of privacy they obtain, and inform policymakers about Tor’s impact. However, data collection and reporting can degrade user privacy, contradicting Tor’s goals. Existing approaches to measuring Tor have limited capabilities and security weaknesses. We present Stormy, a general-purpose, privacy-preserving measurement system that overcomes these limitations. Stormy uses secure multiparty computation (MPC) to compute any function of the observations made by Tor relays, while keeping those observations secret. Stormy makes use of existing efficient MPC protocols that are secure in the malicious model, and in addition it includes a novel input-sharing protocol that is secure, efficient, and fault tolerant. The protocol is non-interactive, which is consistent with how relays currently submit measurements, and it allows the relays to go offline after input submission, even while ensuring that an honest relay will not have its input excluded or modified. The input-sharing protocol is compatible with MPC protocols computing on authenticated values and may be of independent interest. We show how Stormy can be deployed in two realistic models: (1) run primarily by a small set of dedicated authorities, or (2) run decentralized across the relays in the Tor network. Stormy scales efficiently to Tor’s thousands of relays, tolerates network churn, and provides security depending only on either Tor’s existing trust assumption that at least one authority is honest (in the first model) or the existing assumption that a large fraction of relay bandwidth is honest (in the second model). We demonstrate how to use the system to compute two broadly-applicable statistics: the median of relay inputs and the cardinality of set-union across relays. We implement Stormy and experimentally evaluate system performance. When Stormy is run among authorities we can perform 151 median computations or 533 set-union cardinalities over 7,000 relay inputs in a single day. When run among the relays themselves, Stormy can perform 36 median computations or 134 set union cardinalities per day. Thus, both deployments enable non-trivial analytics to be securely computed in the Tor network.

Efficient Publicly Verifiable 2PC over a Blockchain with Applications to Financially-Secure Computations

  • Ruiyu Zhu
  • Changchang Ding
  • Yan Huang

We present a new efficient two-party secure computation protocol which allows the honest party to catch dishonest behavior (if any) with a publicly-verifiable, non-repudiable proof without sacrificing the honest party’s secret. Compared to the best existing protocol of its kind, ours requires a substantially simpler judge algorithm and is able to process the circuit evaluator’s input wires two orders of magnitude faster. Further, we propose an automated, decentralized judge implemented as a blockchain smart-contract. As a killer application of combining our two-party PVC protocol with our decentralized judge, we propose the concept of financially-secure computation, which can be useful in many practical scenarios where it suffices to consider rational adversaries. We experimentally evaluated our prototype implementation, demonstrating that the 2PC protocol is highly efficient and that the judge is affordable enough to protect users against rational attackers.

SESSION: Session 3D: Formal Analysis I

A Formal Treatment of Deterministic Wallets

  • Poulami Das
  • Sebastian Faust
  • Julian Loss

In cryptocurrencies such as Bitcoin or Ethereum, users control funds via secret keys. To transfer funds from one user to another, the owner of the money signs a new transaction that transfers the funds to the new recipient. This makes secret keys a highly attractive target for attacks, and has led to prominent examples where millions of dollars worth of cryptocurrency have been stolen. To protect against these attacks, a widely used approach is the so-called hot/cold wallet. In a hot/cold wallet system, the hot wallet is permanently connected to the network, while the cold wallet stores the secret key and is kept without network connection. In this work, we propose the first comprehensive security model for hot/cold wallets and develop wallet schemes that are provably secure within these models. At the technical level, our main contribution is a new provably secure ECDSA-based hot/cold wallet scheme that can be integrated into legacy cryptocurrencies such as Bitcoin. Our construction and security analysis use a modular approach, where we show how to generically build secure hot/cold wallets from signature schemes that exhibit a rerandomization property of the keys.
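
The key-rerandomization property that such a modular construction builds on can be sketched abstractly for discrete-log key pairs (sk, pk = g^sk) in a group of prime order q. This is a generic illustration under those assumptions, not the paper's ECDSA scheme, which has to deal with additional subtleties:

    \mathsf{sk}' = \mathsf{sk} + \rho \bmod q, \qquad \mathsf{pk}' = \mathsf{pk} \cdot g^{\rho}

Since (sk', pk') is again a valid key pair for the same signature scheme, the hot wallet can derive fresh public keys pk' from pk and a shared rerandomizer ρ, while only the offline cold wallet can compute the matching secret keys sk'.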

5GReasoner: A Property-Directed Security and Privacy Analysis Framework for 5G Cellular Network Protocol

  • Syed Rafiul Hussain
  • Mitziu Echeverria
  • Imtiaz Karim
  • Omar Chowdhury
  • Elisa Bertino

The paper proposes 5GReasoner, a framework for property-guided formal verification of control-plane protocols spanning multiple layers of the 5G protocol stack. The underlying analysis carried out by 5GReasoner can be viewed as an instance of the model checking problem with respect to an adversarial environment. Due to an effective use of behavior-specific abstraction in our manually extracted 5G protocol model, 5GReasoner’s analysis generalizes prior analyses of cellular protocols by reasoning about properties concerning not only packet payloads but also multi-layer protocol interactions. We instantiated 5GReasoner with two model checkers and a cryptographic protocol verifier, lazily combining them through the abstraction-refinement principle. Our analysis with 5GReasoner of the extracted 5G protocol model, covering 6 key control-layer protocols spanning two layers of the 5G protocol stack, identified 11 design weaknesses resulting in attacks with both security and privacy implications. Our analysis also discovered 5 prior design weaknesses that 5G inherits from 4G and that can be exploited to violate its security and privacy guarantees.

Verified Verifiers for Verifying Elections

  • Thomas Haines
  • Rajeev Goré
  • Mukesh Tiwari

The security and trustworthiness of elections is critical to democracy; alas, securing elections is notoriously hard. Powerful cryptographic techniques for verifying the integrity of electronic voting have been developed and are in increasingly common use. The claimed security guarantees of most of these techniques have been formally proved. However, implementing the cryptographic verifiers which utilize these techniques is a technical and error-prone process, and often leads to critical errors appearing in the gap between the implementation and the formally verified design. We significantly reduce the gap between theory and practice by using machine-checked proofs coupled with code extraction to produce cryptographic verifiers that are themselves formally verified. We demonstrate the feasibility of our technique by producing a formally verified verifier, which we use to check the 2018 International Association for Cryptologic Research (IACR) directors election.

SESSION: Session 3E: Privacy II

Analyzing Subgraph Statistics from Extended Local Views with Decentralized Differential Privacy

  • Haipei Sun
  • Xiaokui Xiao
  • Issa Khalil
  • Yin Yang
  • Zhan Qin
  • Hui (Wendy) Wang
  • Ting Yu

Many real-world social networks are decentralized in nature, and the only way to analyze such a network is to collect local views of the social graph from individual participants. Since local views may contain sensitive information, it is often desirable to apply differential privacy in the data collection process, which provides strong and rigorous privacy guarantees. In many practical situations, the local view of a participant contains not only her own connections, but also those of her neighbors, which are private and sensitive for the neighbors, but not directly so for the participant herself. We call such information beyond direct connections an extended local view (ELV), and study two fundamental problems related to ELVs: first, how do we correctly enforce differential privacy for all participants in the presence of ELVs? Second, how can the data collector utilize ELVs to obtain accurate estimates of global graph properties?

This paper points out that when collecting ELVs, it is insufficient to apply a straightforward adaptation of local differential privacy (LDP), a commonly used scheme in practice, to protect the privacy of all network participants. The main problem is that an adversarial data collector can accumulate private information on a specific victim from multiple neighbors of the victim; even though the data collected from each neighbor is perturbed under LDP, their aggregate can still violate the victim’s privacy. To prevent this attack, we formulate a novel decentralized differential privacy (DDP) scheme, which requires that each participant consider not only her own privacy, but also that of her neighbors involved in her ELV.

The stringent privacy requirement of DDP, however, makes it challenging to design an effective mechanism for data collection. Towards this goal, we design a novel multi-phase framework under DDP that enables an analyst to accurately estimate subgraph counts, an important property of social graphs. The main idea is that instead of collecting subgraph counts directly, which would require excessive noise, the analyst first asks individuals about their respective minimum noise scale, which is itself private information since it depends on the local graph structure and, thus, must also be collected under DDP. For some types of subgraphs, this process is applied recursively, i.e., the analyst asks about the noise that must be injected into the private information on the minimum local noise scale required to protect subgraph counts under DDP. As case studies, we instantiate the proposed framework for three common subgraph patterns: triangles, three-hop paths, and k-cliques. Extensive experiments using real data demonstrate that the proposed scheme leads to accurate estimates of global subgraph counts, whereas baseline solutions fail to obtain meaningful utility.
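
To see why the “minimum noise scale” is itself sensitive, note that the Laplace noise a participant would need for her local subgraph count depends on her local graph structure (for triangle counts, roughly on her degree). The cleartext sketch below only illustrates Laplace perturbation with such a locally determined scale; it is not the paper's multi-phase DDP protocol, and the sensitivity bound is an assumption made for illustration.

    import random

    def laplace(scale: float) -> float:
        # The difference of two exponentials with mean `scale` is Laplace(0, scale).
        return random.expovariate(1 / scale) - random.expovariate(1 / scale)

    def noisy_local_triangle_count(triangle_count: int, degree: int, epsilon: float) -> float:
        # Illustrative local sensitivity bound: toggling one edge incident to this
        # participant changes her triangle count by at most (degree - 1).
        sensitivity = max(degree - 1, 1)
        return triangle_count + laplace(sensitivity / epsilon)

    print(noisy_local_triangle_count(triangle_count=12, degree=7, epsilon=1.0))

The noise scale (degree - 1)/ε itself reveals the participant's degree, which is exactly why the framework must collect it under DDP rather than in the clear.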

How to Accurately and Privately Identify Anomalies

  • Hafiz Asif
  • Periklis A. Papakonstantinou
  • Jaideep Vaidya

Identifying anomalies in data is central to the advancement of science, national security, and finance. However, privacy concerns restrict our ability to analyze data. Can we lift these restrictions and accurately identify anomalies without hurting the privacy of those who contribute their data? We address this question for the most practically relevant case, where a record is considered anomalous relative to other records. We make four contributions. First, we introduce the notion of sensitive privacy, which conceptualizes what it means to privately identify anomalies. Sensitive privacy generalizes the important concept of differential privacy and is amenable to analysis. Importantly, sensitive privacy admits algorithmic constructions that provide strong and practically meaningful privacy and utility guarantees. Second, we show that differential privacy is inherently incapable of accurately and privately identifying anomalies; in this sense, our generalization is necessary. Third, we provide a general compiler that takes as input a differentially private mechanism (which has bad utility for anomaly identification) and transforms it into a sensitively private one. This compiler, which is mostly of theoretical importance, is shown to output a mechanism whose utility greatly improves over the utility of the input mechanism. As our fourth contribution we propose mechanisms for a popular definition of anomaly ((β,r)-anomaly) that (i) are guaranteed to be sensitively private, (ii) come with provable utility guarantees, and (iii) are empirically shown to have an overwhelmingly accurate performance over a range of datasets and evaluation criteria.

Differentially Private Nonparametric Hypothesis Testing

  • Simon Couch
  • Zeki Kazan
  • Kaiyan Shi
  • Andrew Bray
  • Adam Groce

Hypothesis tests are a crucial statistical tool for data mining and are the workhorse of scientific research in many fields. Here we study differentially private tests of independence between a categorical and a continuous variable. We take as our starting point traditional nonparametric tests, which require no distributional assumptions (e.g., normality) about the data. We present private analogues of the Kruskal-Wallis, Mann-Whitney, and Wilcoxon signed-rank tests, as well as of the parametric one-sample t-test. These tests use novel test statistics developed specifically for the private setting. We compare our tests to prior work on both parametric and nonparametric tests. We find that in all cases our new nonparametric tests achieve large improvements in statistical power, even when the assumptions of parametric tests are met.

SESSION: Session 4A: Side Channels II

ZombieLoad: Cross-Privilege-Boundary Data Sampling

  • Michael Schwarz
  • Moritz Lipp
  • Daniel Moghimi
  • Jo Van Bulck
  • Julian Stecklina
  • Thomas Prescher
  • Daniel Gruss

In early 2018, Meltdown first showed how to read arbitrary kernel memory from user space by exploiting side effects of transient instructions. While this attack has been mitigated through stronger isolation boundaries between user and kernel space, Meltdown inspired an entirely new class of fault-driven transient-execution attacks. In particular, over the past year, Meltdown-type attacks have been extended to leak data not only from the L1 cache but also from various other microarchitectural structures, including the FPU register file and the store buffer.

In this paper, we present the ZombieLoad attack, which uncovers a novel Meltdown-type effect in the processor’s fill-buffer logic. Our analysis shows that faulting load instructions (i.e., loads that have to be re-issued) may transiently dereference unauthorized destinations previously brought into the fill buffer by the current or a sibling logical CPU. In contrast to concurrent attacks on the fill buffer, we are the first to report data leakage of recently loaded and stored stale values across logical cores even on Meltdown- and MDS-resistant processors. Hence, despite Intel’s claims, we show that the hardware fixes in new CPUs are not sufficient. We demonstrate ZombieLoad’s effectiveness in a multitude of practical attack scenarios across CPU privilege rings, OS processes, virtual machines, and SGX enclaves. We discuss both short- and long-term mitigation approaches and conclude that disabling hyperthreading is the only possible workaround to prevent at least the most powerful cross-hyperthread attack scenarios on current processors, as Intel’s software fixes are incomplete.

Fallout: Leaking Data on Meltdown-resistant CPUs

  • Claudio Canella
  • Daniel Genkin
  • Lukas Giner
  • Daniel Gruss
  • Moritz Lipp
  • Marina Minkin
  • Daniel Moghimi
  • Frank Piessens
  • Michael Schwarz
  • Berk Sunar
  • Jo Van Bulck
  • Yuval Yarom

Meltdown and Spectre enable arbitrary data leakage from memory via various side channels. Short-term software mitigations for Meltdown are only a temporary solution with a significant performance overhead. Due to hardware fixes, these mitigations are disabled on recent processors. In this paper, we show that Meltdown-like attacks are still possible on recent CPUs which are not vulnerable to Meltdown. We identify two behaviors of the store buffer, a microarchitectural resource that reduces the latency of data stores, that enable powerful attacks. The first behavior, Write Transient Forwarding, forwards data from stores to subsequent loads even when the load address differs from that of the store. The second, Store-to-Leak, exploits the interaction between the TLB and the store buffer to leak metadata on store addresses. Based on these, we develop multiple attacks and demonstrate data leakage, control flow recovery, and attacks on ASLR. Our paper shows that Meltdown-like attacks are still possible, and software fixes with potentially significant performance overheads are still necessary to ensure proper isolation between the kernel and user space.

SMoTherSpectre: Exploiting Speculative Execution through Port Contention

  • Atri Bhattacharyya
  • Alexandra Sandulescu
  • Matthias Neugschwandtner
  • Alessandro Sorniotti
  • Babak Falsafi
  • Mathias Payer
  • Anil Kurmus

Spectre, Meltdown, and related attacks have demonstrated that kernels, hypervisors, trusted execution environments, and browsers are prone to information disclosure through micro-architectural weaknesses. However, it remains unclear as to what extent other applications, in particular those that do not load attacker-provided code, may be impacted. It also remains unclear as to what extent these attacks are reliant on cache-based side channels. We introduce SMoTherSpectre, a speculative code-reuse attack that leverages port-contention in simultaneously multi-threaded processors (SMoTher) as a side channel to leak information from a victim process. SMoTher is a fine-grained side channel that detects contention based on a single victim instruction. To discover real-world gadgets, we describe a methodology and build a tool that locates SMoTher-gadgets in popular libraries. In an evaluation on glibc, we found hundreds of gadgets that can be used to leak information. Finally, we demonstrate proof-of-concept attacks against the OpenSSH server, creating oracles for determining four host key bits, and against an application performing encryption using the OpenSSL library, creating an oracle which can differentiate a bit of the plaintext through gadgets in libcrypto and glibc.

SESSION: Session 4B: Blockchain II

Atomic Multi-Channel Updates with Constant Collateral in Bitcoin-Compatible Payment-Channel Networks

  • Christoph Egger
  • Pedro Moreno-Sanchez
  • Matteo Maffei

Current cryptocurrencies provide a heavily limited transaction throughput that is clearly insufficient to cater to their growing adoption. Payment-channel networks (PCNs) have emerged as an interesting solution to the scalability issue and are currently deployed by popular cryptocurrencies such as Bitcoin and Ethereum. While PCNs do increase the transaction throughput by processing payments off-chain and using the blockchain only as a dispute arbitrator, they unfortunately require high collateral (i.e., they lock coins for a non-constant time along the payment path) and are restricted to payments along a path from sender to receiver. These issues have severe consequences in practice. The high collateral enables denial-of-service attacks that hamper the throughput and utility of the PCN. Moreover, the limited functionality hinders the applicability of current PCNs in many important application scenarios. Unfortunately, current proposals do not solve either of these issues, or they require Turing-complete language support, which severely limits their applicability.

In this work, we present AMCU, the first protocol for atomic multi-channel updates and reduced collateral that is compatible with Bitcoin (and other cryptocurrencies with reduced scripting capabilities). We provide a formal model in the Universal Composability framework and show that AMCU realizes it, thus demonstrating that AMCU achieves atomicity and value privacy. The reduced collateral mitigates the consequences of griefing attacks in PCNs, while the (multi-payment) atomicity achieved by AMCU opens the door to new applications such as credit rebalancing and crowdfunding that are not possible otherwise. Finally, our evaluation results demonstrate that AMCU’s performance is in line with that of the Lightning Network (the most widely deployed PCN) and thus it is ready to be deployed in practice.

Erlay: Efficient Transaction Relay for Bitcoin

  • Gleb Naumenko
  • Gregory Maxwell
  • Pieter Wuille
  • Alexandra Fedorova
  • Ivan Beschastnikh

Bitcoin is a top-ranked cryptocurrency that has experienced huge growth and survived numerous attacks. The protocols making up Bitcoin must therefore accommodate the growth of the network and ensure security.

Security of the Bitcoin network depends on connectivity between the nodes. Higher connectivity yields better security. In this paper we make two observations: (1) current connectivity in the Bitcoin network is too low for optimal security; (2) at the same time, increasing connectivity will substantially increase the bandwidth used by the transaction dissemination protocol, making it prohibitively expensive to operate a Bitcoin node. Half of the total bandwidth needed to operate a Bitcoin node is currently used to just announce transactions. Unlike block relay, transaction dissemination has received little attention in prior work.

We propose a new transaction dissemination protocol, Erlay, that not only reduces the bandwidth consumption by 40% assuming current connectivity, but also keeps the bandwidth use almost constant as the connectivity increases. In contrast, the existing protocol increases the bandwidth consumption linearly with the number of connections. By allowing more connections at a small cost, Erlay improves the security of the Bitcoin network. And, as we demonstrate, Erlay also hardens the network against attacks that attempt to learn the origin node of a transaction. Erlay is currently being investigated by the Bitcoin community for future use with the Bitcoin protocol.

Power Adjusting and Bribery Racing: Novel Mining Attacks in the Bitcoin System

  • Shang Gao
  • Zecheng Li
  • Zhe Peng
  • Bin Xiao

Mining attacks allow attackers to gain an unfair share of the mining reward by deviating from the honest mining strategy in the Bitcoin system. Among the most well-known are block withholding (BWH), fork after withholding (FAW), and selfish mining. In this paper, we propose two new strategies, power adjusting and bribery racing, and introduce two novel mining attacks that adopt them: Power Adjusting Withholding (PAW) and Bribery Selfish Mining (BSM). Both attacks can increase the reward of attackers. Furthermore, we show PAW can avoid the “miner’s dilemma” in BWH attacks. BSM introduces a new “venal miner’s dilemma”, in which all targets (the bribed miners) are willing to help the attacker but ultimately receive less reward. Quantitative analyses and simulations are conducted to verify the effectiveness of our attacks. We propose some countermeasures to mitigate the new attacks, but a practical and efficient solution remains an open problem.

SESSION: Session 4C: Secure Computing III

A High-Assurance Evaluator for Machine-Checked Secure Multiparty Computation

  • Karim Eldefrawy
  • Vitor Pereira

Secure Multiparty Computation (MPC) enables a group of n distrusting parties to jointly compute a function using private inputs. MPC guarantees correctness of computation and confidentiality of inputs if no more than a threshold t of the parties are corrupted. Proactive MPC (PMPC) addresses the stronger threat model of a mobile adversary that controls a changing set of parties (but only up to t at any instant), and may eventually corrupt all n parties over a long time. This paper takes a first stab at developing high-assurance implementations of (P)MPC. We formalize in EasyCrypt, a tool-assisted framework for building high-confidence cryptographic proofs, several abstract and reusable variations of secret sharing and of (P)MPC protocols building on them. Using those, we prove a series of abstract theorems for the proactive setting. We implement and perform computer-checked security proofs of concrete instantiations of the required (abstract) protocols in EasyCrypt. We also develop a new tool-chain to extract high-assurance executable implementations of protocols formalized and verified in EasyCrypt. Our tool-chain uses Why3 as an intermediate tool, and enables us to extract executable code from our (P)MPC formalizations. We evaluate the extracted executables by comparing their performance to that of manually implemented versions using the Python-based Charm framework for prototyping cryptographic schemes. We argue that the small overhead of our high-assurance executables is a reasonable price to pay for the increased confidence in their correctness and security.

Practical Fully Secure Three-Party Computation via Sublinear Distributed Zero-Knowledge Proofs

  • Elette Boyle
  • Niv Gilboa
  • Yuval Ishai
  • Ariel Nof

Secure multiparty computation enables a set of parties to securely carry out a joint computation on their private inputs without revealing anything but the output. A particularly motivated setting is that of three parties with a single corruption (hereafter denoted 3PC). This 3PC setting is particularly appealing for two main reasons: (1) it admits more efficient MPC protocols than other standard settings; (2) it allows, in principle, achieving full security (and fairness). Highly efficient protocols exist within this setting with security against a semi-honest adversary; however, a significant gap remains between these and protocols with stronger security against a malicious adversary. In this paper, we narrow this gap within concretely efficient protocols. More explicitly, we have the following contributions: Concretely Efficient Malicious 3PC. We present an optimized 3PC protocol for arithmetic circuits over rings with (amortized) communication of 1 ring element per multiplication gate per party, matching the best semi-honest protocols. The protocol applies also to Boolean circuits, significantly improving over previous protocols even for small circuits. Our protocol builds on recent techniques of Boneh et al. (Crypto 2019) for sublinear zero-knowledge proofs on distributed data, together with an efficient semi-honest protocol based on replicated secret sharing (Araki et al., CCS 2016). We present a concrete analysis of communication and computation costs, including several optimizations. For example, for 40-bit statistical security and a Boolean circuit with a million (nonlinear) gates, the overhead on top of the semi-honest protocol can involve less than 0.5 KB of communication for the entire circuit, while the computational overhead is dominated by roughly 30 multiplications per gate in the field F_{2^47}. In addition, we implemented and benchmarked the protocol for varied circuit sizes. Full Security. We augment the 3PC protocol to further provide full security (with guaranteed output delivery) while maintaining amortized communication of 1 ring element per party per multiplication gate, and with hardly any impact on concrete efficiency. This is contrasted with the best previous 3PC protocols from the literature, which allow a corrupt party to mount a denial-of-service attack without being detected.
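
The semi-honest baseline referenced above (Araki et al., CCS 2016) rests on replicated 2-out-of-3 additive secret sharing over a ring. The following is a minimal sketch of that sharing pattern only, under the assumption of a 64-bit ring; it does not implement the malicious-secure protocol or its distributed zero-knowledge proofs.

    import random

    MOD = 2 ** 64   # the arithmetic ring Z_{2^64}

    def share(x: int):
        """Replicated 2-out-of-3 sharing: x = x1 + x2 + x3 (mod 2^64);
        party i holds the pair (x_i, x_{i+1})."""
        x1, x2 = random.randrange(MOD), random.randrange(MOD)
        x3 = (x - x1 - x2) % MOD
        return [(x1, x2), (x2, x3), (x3, x1)]

    def reconstruct(share_party1, share_party2):
        """Any two parties jointly hold all three additive shares."""
        x1, x2 = share_party1
        _, x3 = share_party2
        return (x1 + x2 + x3) % MOD

    a, b = share(10), share(32)
    # Addition is purely local: each party adds its two shares component-wise.
    c = [((a1 + b1) % MOD, (a2 + b2) % MOD) for (a1, a2), (b1, b2) in zip(a, b)]
    print(reconstruct(c[0], c[1]))   # 42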

HoneyBadgerMPC and AsynchroMix: Practical Asynchronous MPC and its Application to Anonymous Communication

  • Donghang Lu
  • Thomas Yurek
  • Samarth Kulshreshtha
  • Rahul Govind
  • Aniket Kate
  • Andrew Miller

Multiparty computation as a service (MPSaaS) is a promising approach for building privacy-preserving communication systems. However, in this paper, we argue that existing MPC implementations are inadequate for this application as they do not address fairness, let alone robustness. Even a single malicious server can cause the protocol to abort while seeing the output for itself, which in the context of an anonymous communication service would create a vulnerability to censorship and de-anonymization attacks. To remedy this we propose a new MPC implementation, HoneyBadgerMPC, that combines a robust online phase with an optimistic offline phase that is efficient enough to run continuously alongside the online phase. We use HoneyBadgerMPC to develop an application case study, called AsynchroMix, that provides an anonymous broadcast functionality. AsynchroMix features a novel MPC program that trades off between computation and communication, allowing for low-latency message mixing in varying settings. In a cloud-based distributed benchmark with 100 nodes, we demonstrate mixing a batch of 512 messages in around 20 seconds and up to 4096 messages in around two minutes.

SESSION: Session 4D: Formal Analysis II

Exploiting Symmetries When Proving Equivalence Properties for Security Protocols

  • Vincent Cheval
  • Steve Kremer
  • Itsaka Rakotonirina

Verification of privacy-type properties for cryptographic protocols in an active adversarial environment, modelled as a behavioural equivalence in concurrent-process calculi, exhibits a high computational complexity. While undecidable in general, for some classes of common cryptographic primitives the problem is coNEXP-complete when the number of honest participants is bounded.

In this paper we develop optimisation techniques for verifying equivalences, exploiting symmetries between the two processes under study. We demonstrate that they provide a significant (several orders of magnitude) speed-up in practice, thus increasing the size of the protocols that can be analysed fully automatically.

Are These Pairing Elements Correct?: Automated Verification and Applications

  • Susan Hohenberger
  • Satyanarayana Vusirikala

Using a set of pairing product equations (PPEs) to verify the correctness of an untrusted set of pairing elements with respect to another set of trusted elements has numerous cryptographic applications. These include the design of basic and structure-preserving signature schemes, building oblivious transfer schemes from “blind” IBE, finding new verifiable random functions and keeping the IBE/ABE authority “accountable” to the user.

A natural question to ask is: are all trusted-untrusted pairing element groups in the literature PPE testable? We provide original observations demonstrating that the answer is no, and moreover, it can be non-trivial to determine whether or not there exists a set of PPEs that can verify some pairing elements with respect to others. Many IBE schemes have PPE-testable private keys (with respect to the public parameters), while others, such as those based on dual-system encryption, provably do not.

To aid those wishing to use PPE-based element verification in their cryptosystems, we devised rules to systematically search for a set of PPEs that can verify untrusted elements with respect to a set of trusted elements. We prove the correctness of each rule and combine them into a main searching algorithm for which we also prove correctness. We implemented this algorithm in a new software tool, called AutoPPE. Tested on over two dozen case studies, AutoPPE found a set of PPEs (on schemes where they exist) usually in just a matter of seconds. This work represents an important step towards the larger goal of improving the speed and accuracy of pairing-based cryptographic design via computer automation.
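
As a standard, self-contained example of PPE-based verification (not necessarily one of the paper's case studies): a BLS-style signature σ on a message m is an untrusted group element that can be checked against the trusted elements g, pk = g^x, and H(m) with the single pairing product equation

    e(\sigma, g) = e(H(m), \mathsf{pk})

which holds exactly when σ = H(m)^x. AutoPPE automates the search for such sets of equations whenever they exist.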

Post-Collusion Security and Distance Bounding

  • Sjouke Mauw
  • Zach Smith
  • Jorge Toro-Pozo
  • Rolando Trujillo-Rasua

Verification of cryptographic protocols is traditionally built upon the assumption that participants have not revealed their long-term keys. However, in some cases, participants might collude to defeat some security goals, without revealing their long-term secrets.

We develop a model based on multiset rewriting to reason about collusion in security protocols. We introduce the notion of post-collusion security, which verifies security properties claimed in sessions initiated after the collusion occurred. We use post-collusion security to analyse terrorist fraud on protocols for securing physical proximity, known as distance-bounding protocols. In a terrorist fraud attack, agents collude to falsely prove proximity, whilst no further false proximity proof can be issued without further collusion.

Our definitions and the Tamarin prover are used to develop a modular framework for verification of distance-bounding protocols that accounts for all types of attack from the literature. We perform a survey of over 25 protocols, including industrial protocols such as Mastercard’s contactless payment PayPass and NXP’s MIFARE Plus with proximity check. For the industrial protocols we confirm attacks, propose fixes, and deliver computer-verifiable security proofs of the repaired versions.

SESSION: Session 4E: Privacy III

Five Years of the Right to be Forgotten

  • Theo Bertram
  • Elie Bursztein
  • Stephanie Caro
  • Hubert Chao
  • Rutledge Chin Feman
  • Peter Fleischer
  • Albin Gustafsson
  • Jess Hemerly
  • Chris Hibbert
  • Luca Invernizzi
  • Lanah Kammourieh Donnelly
  • Jason Ketover
  • Jay Laefer
  • Paul Nicholas
  • Yuan Niu
  • Harjinder Obhi
  • David Price
  • Andrew Strait
  • Kurt Thomas
  • Al Verney

The “Right to be Forgotten” is a privacy ruling that enables Europeans to delist certain URLs appearing in search results related to their name. In order to illuminate the effect this ruling has on information access, we conducted a retrospective measurement study of 3.2 million URLs that were requested for delisting from Google Search over five years. Our analysis reveals the countries and anonymized parties generating the largest volume of requests (just 1,000 requesters generated 16% of requests); the news, government, social media, and directory sites most frequently targeted for delisting (17% of removals relate to a requester’s legal history including crimes and wrongdoing); and the prevalence of extraterritorial requests. Our results dramatically increase transparency around the Right to be Forgotten and reveal the complexity of weighing personal privacy against public interest when resolving multi-party privacy conflicts that occur across the Internet. The results of our investigation have since been added to Google’s transparency report.

(Un)informed Consent: Studying GDPR Consent Notices in the Field

  • Christine Utz
  • Martin Degeling
  • Sascha Fahl
  • Florian Schaub
  • Thorsten Holz

Since the adoption of the General Data Protection Regulation (GDPR) in May 2018, more than 60% of popular websites in Europe display cookie consent notices to their visitors. This has quickly led to users becoming fatigued with privacy notifications and contributed to the rise of both browser extensions that block these banners and demands for a solution that bundles consent across multiple websites or in the browser. In this work, we identify common properties of the graphical user interfaces of consent notices and conduct three experiments with more than 80,000 unique users on a German website to investigate the influence of notice position, type of choice, and content framing on consent. We find that users are more likely to interact with a notice shown in the lower (left) part of the screen. Given a binary choice, more users are willing to accept tracking compared to mechanisms that require them to allow cookie use for each category or company individually. We also show that the widespread practice of nudging has a large effect on the choices users make. Our experiments show that seemingly small implementation decisions can substantially impact whether and how people interact with consent notices. Our findings demonstrate the importance for regulation to not just require consent, but also to provide clear requirements or guidance for how this consent has to be obtained, in order to ensure that users can make free and informed choices.

Moving Beyond Set-It-And-Forget-It Privacy Settings on Social Media

  • Mainack Mondal
  • Günce Su Yilmaz
  • Noah Hirsch
  • Mohammad Taha Khan
  • Michael Tang
  • Christopher Tran
  • Chris Kanich
  • Blase Ur
  • Elena Zheleva

When users post on social media, they protect their privacy by choosing an access control setting that is rarely revisited. Changes in users’ lives and relationships, as well as social media platforms themselves, can cause mismatches between a post’s active privacy setting and the desired setting. The importance of managing this setting combined with the high volume of potential friend-post pairs needing evaluation necessitate a semi-automated approach. We attack this problem through a combination of a user study and the development of automated inference of potentially mismatched privacy settings. A total of 78 Facebook users reevaluated the privacy settings for five of their Facebook posts, also indicating whether a selection of friends should be able to access each post. They also explained their decision. With this user data, we designed a classifier to identify posts with currently incorrect sharing settings. This classifier shows a 317% improvement over a baseline classifier based on friend interaction. We also find that many of the most useful features can be collected without user intervention, and we identify directions for improving the classifier’s accuracy.

SESSION: Session 5A: Software Security

Binary Control-Flow Trimming

  • Masoud Ghaffarinia
  • Kevin W. Hamlen

A new method of automatically reducing the attack surfaces of binary software is introduced, affording code consumers the power to remove features that are unwanted or unused in a particular deployment context. The approach targets stripped binary native code with no source-derived metadata or symbols, can remove semantic features irrespective of whether they were intended and/or known to code developers, and anticipates consumers who can demonstrate desired features (e.g., via unit testing), but who may not know the existence of specific unwanted features, and who lack any formal specifications of the code’s semantics.

Through a combination of runtime tracing, machine learning, in-lined reference monitoring, and contextual control-flow integrity enforcement, it is demonstrated that automated code feature removal is nevertheless feasible under these constraints, even for complex programs such as compilers and servers. The approach additionally accommodates consumers whose demonstration of desired features is incomplete; a tunable entropy-based metric detects coverage lapses and conservatively preserves unexercised but probably desired flows. A prototype implementation for Intel x86-64 exhibits low runtime overhead for trimmed binaries (about 1.87%), and case studies show that consumer-side control-flow trimming can successfully eliminate zero-day vulnerabilities.

Program-mandering: Quantitative Privilege Separation

  • Shen Liu
  • Dongrui Zeng
  • Yongzhe Huang
  • Frank Capobianco
  • Stephen McCamant
  • Trent Jaeger
  • Gang Tan

Privilege separation is an effective technique to improve software security. However, past partitioning systems do not allow programmers to make quantitative tradeoffs between security and performance. In this paper, we describe our toolchain called PM. It can automatically find the optimal boundary in program partitioning. This is achieved by solving an integer-programming model that optimizes for a user-chosen metric while satisfying the remaining security and performance constraints on other metrics. We choose security metrics to reason about how well computed partitions enforce information flow control to: (1) protect the program from low-integrity inputs or (2) prevent leakage of program secrets. As a result, functions in the sensitive module that fall on the optimal partition boundaries automatically identify where declassification is necessary. We used PM to experiment on a set of real-world programs to protect confidentiality and integrity; results show that, with moderate user guidance, PM can find partitions that have better balance between security and performance than partitions found by a previous tool that requires manual declassification.

SESSION: Session 5B: Protocols

Flexible Byzantine Fault Tolerance

  • Dahlia Malkhi
  • Kartik Nayak
  • Ling Ren

This paper introduces Flexible BFT, a new approach for BFT consensus solution design revolving around two pillars, stronger resilience and diversity. The first pillar, stronger resilience, involves a new fault model called alive-but-corrupt faults. Alive-but-corrupt replicas may arbitrarily deviate from the protocol in an attempt to break its safety. However, if they cannot break safety, they will not try to prevent its liveness. By incorporating alive-but-corrupt faults into the model, Flexible BFT is resilient to higher corruption levels than is possible in a pure Byzantine fault model. The second pillar, diversity, designs consensus solutions whose protocol transcript is used to draw different commit decisions under diverse beliefs. With this separation, the same Flexible BFT solution supports synchronous and asynchronous beliefs, as well as varying resilience threshold combinations of Byzantine and alive-but-corrupt faults. At a technical level, Flexible BFT achieves the above results using two new ideas. First, it introduces a synchronous BFT protocol in which only the commit step requires knowledge of the network delay bound, and thus replicas execute the protocol without any synchrony assumption. Second, it introduces a notion called Flexible Byzantine Quorums by dissecting the roles of different quorums in existing consensus protocols.

Distributed Vector-OLE: Improved Constructions and Implementation

  • Phillipp Schoppmann
  • Adrià Gascón
  • Leonie Reichert
  • Mariana Raykova

We investigate concretely efficient protocols for distributed oblivious linear evaluation over vectors (Vector-OLE). Boyle et al. (CCS 2018) proposed a protocol for secure distributed pseudorandom Vector-OLE generation using sublinear communication, but they did not provide an implementation. Their construction is based on a variant of the LPN assumption and assumes a distributed key generation protocol for single-point Function Secret Sharing (FSS), as well as an efficient batching scheme to obtain multi-point FSS. We show that this requirement can be relaxed, resulting in a weaker variant of FSS, for which we give an efficient protocol. This allows us to use efficient probabilistic batch codes that were also recently used for batched PIR by Angel et al. (S&P 2018). We construct a full Vector-OLE generator from our protocols, and compare it experimentally with alternative approaches. Our implementation parallelizes very well, and has low communication overhead in practice. For generating a VOLE of size 2^20, our implementation only takes 0.52 s on 32 cores.
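
For context, the object being generated is the standard Vector-OLE correlation over a field F: one party ends up holding vectors u and v, the other a scalar x together with the vector w satisfying

    \vec{w} = x \cdot \vec{u} + \vec{v} \in \mathbb{F}^{n}

with neither side learning more about the other's outputs than this relation implies. Pseudorandom Vector-OLE generation expands short correlated seeds into long vectors of this form, which is what makes sublinear communication possible.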

SESSION: Session 5C: Cloud Security II

Houdini’s Escape: Breaking the Resource Rein of Linux Control Groups

  • Xing Gao
  • Zhongshu Gu
  • Zhengfa Li
  • Hani Jamjoom
  • Cong Wang

Linux Control Groups, i.e., cgroups, are the key building blocks enabling operating-system-level containerization. The cgroups mechanism partitions processes into hierarchical groups and applies different controllers to manage system resources, including CPU, memory, block I/O, etc. Newly spawned child processes automatically copy cgroups attributes from their parents to enforce resource control. Unfortunately, inherited cgroups confinement via process creation does not always guarantee consistent and fair resource accounting. In this paper, we devise a set of exploiting strategies to generate out-of-band workloads by de-associating processes from their original process groups. The system resources consumed by such workloads will not be charged to the appropriate cgroups. To further demonstrate feasibility, we present five case studies within Docker containers showing how to break the resource rein of cgroups in realistic scenarios. Even worse, by exploiting those cgroups’ insufficiencies in a multi-tenant container environment, an adversarial container is able to greatly amplify the amount of consumed resources, significantly slow down other containers on the same host, and gain extra unfair advantages on system resources. We conduct extensive experiments on both a local testbed and an Amazon EC2 cloud dedicated server. The experimental results demonstrate that a container can consume system resources (e.g., CPU) at up to 200× its limit, and reduce both the computing and I/O performance of particular workloads in other co-resident containers by 95%.
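
The inheritance behavior the attacks start from can be observed directly on a Linux host: a forked child begins life in exactly the cgroups of its parent, so its resource usage is charged to the parent's groups unless it is later re-associated. The snippet below (Linux-only and purely illustrative; it performs none of the paper's de-association exploits) simply demonstrates that inheritance.

    import os

    def cgroup_membership() -> str:
        # /proc/self/cgroup lists the cgroup(s) the calling process belongs to.
        with open("/proc/self/cgroup") as f:
            return f.read()

    parent_view = cgroup_membership()
    pid = os.fork()
    if pid == 0:                     # child: starts in the parent's cgroups
        assert cgroup_membership() == parent_view
        os._exit(0)
    os.waitpid(pid, 0)
    print("child inherited the same cgroup membership as its parent")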

Insecure Until Proven Updated: Analyzing AMD SEV’s Remote Attestation

  • Robert Buhren
  • Christian Werling
  • Jean-Pierre Seifert

Cloud computing is one of the most prominent technologies for hosting Internet services, but it unfortunately leads to an increased risk of data theft. Customers of cloud services have to trust the cloud providers, as they control the building blocks that form the cloud. This includes the hypervisor enabling the sharing of a single hardware platform among multiple tenants. Executing in a higher-privileged CPU mode, the hypervisor has direct access to the memory of virtual machines. While data at rest can be protected using well-known disk encryption methods, data residing in main memory is still threatened by a potentially malicious cloud provider. AMD Secure Encrypted Virtualization (SEV) claims a new level of protection in such cloud scenarios. AMD SEV encrypts the main memory of virtual machines with VM-specific keys, thereby denying the higher-privileged hypervisor access to a guest’s memory. To enable the cloud customer to verify the correct deployment of his virtual machine, SEV additionally introduces a remote attestation protocol. This protocol is a crucial component of the SEV technology that can prove that SEV protection is in place and that the virtual machine was not subject to manipulation. This paper analyzes the firmware components that implement the SEV remote attestation protocol on the current AMD Epyc Naples CPU series. We demonstrate that it is possible to extract critical CPU-specific keys that are fundamental for the security of the remote attestation protocol. Building on the extracted keys, we propose attacks that allow a malicious cloud provider a complete circumvention of the SEV protection mechanisms. Although the underlying firmware issues were already fixed by AMD, we show that the current series of AMD Epyc CPUs, i.e., the Naples series, does not prevent the installation of previous firmware versions. We show that the severity of our proposed attacks is very high as no purely software-based mitigations are possible. This effectively renders the SEV technology on current AMD Epyc CPUs useless when confronted with an untrusted cloud provider. To overcome these issues, we also propose robust changes to the SEV design that allow future generations of the SEV technology to mitigate the proposed attacks.

SESSION: Session 5D: SDN Security

An In-depth Look Into SDN Topology Discovery Mechanisms: Novel Attacks and Practical Countermeasures

  • Eduard Marin
  • Nicola Bucciol
  • Mauro Conti

Software-Defined Networking (SDN) is a novel network approach that has revolutionised existent network architectures by decoupling the control plane from the data plane. Researchers have shown that SDN networks are highly vulnerable to security attacks. For instance, adversaries can tamper with the controller’s network topology view to hijack the hosts’ location or create fake inter-switch links. These attacks can be launched for various purposes, ranging from impersonating hosts to bypassing middleboxes or intercepting network traffic. Several countermeasures have been proposed to mitigate topology attacks but to date there has been no comprehensive analysis of the level of security they offer. A critical analysis is thus an important step towards better understanding the possible limitations of the existing solutions and building stronger defences against topology attacks.

In this paper, we evaluate the actual security of the existing mechanisms for network topology discovery in SDN. Our analysis reveals 6 vulnerabilities in the state-of-the-art countermeasures against topology attacks: TopoGuard, TopoGuard+, SPV, and SecureBinder. We show that these vulnerabilities can be exploited in practice to manipulate the network topology view at the controller. Furthermore, we present 2 novel topology attacks, called Topology Freezing and Reverse Loop, that exploit vulnerabilities in the widely used Floodlight controller. We responsibly disclosed these vulnerabilities to Floodlight. While we show that it is difficult to fully eradicate these attacks, we propose fixes to mitigate them. In response to our findings, we conclude the paper by detailing practical ways of further improving the existing countermeasures.

Proof-Carrying Network Code

  • Christian Skalka
  • John Ring
  • David Darias
  • Minseok Kwon
  • Sahil Gupta
  • Kyle Diller
  • Steffen Smolka
  • Nate Foster

Computer networks often serve as the first line of defense against malicious attacks. Although there are a growing number of tools for defining and enforcing security policies in software-defined networks (SDNs), most assume a single point of control and are unable to handle the challenges that arise in networks with multiple administrative domains. For example, consumers may want to allow their home IoT networks to be configured by device vendors, which raises security and privacy concerns. In this paper we propose a framework called Proof-Carrying Network Code (PCNC) for specifying and enforcing security in SDNs with interacting administrative domains. Like Proof-Carrying Authorization (PCA), PCNC provides methods for managing authorization domains, and like Proof-Carrying Code (PCC), PCNC provides methods for enforcing behavioral properties of network programs. We develop theoretical foundations for PCNC and evaluate it in simulated and real network settings, including a case study that considers security in IoT networks for home health monitoring.

SESSION: Session 5E: Fingerprinting

Triplet Fingerprinting: More Practical and Portable Website Fingerprinting with N-shot Learning

  • Payap Sirinam
  • Nate Mathews
  • Mohammad Saidur Rahman
  • Matthew Wright

Website Fingerprinting (WF) attacks pose a serious threat to users’ online privacy, including for users of the Tor anonymity system. By exploiting recent advances in deep learning, WF attacks like Deep Fingerprinting (DF) have reached up to 98% accuracy. The DF attack, however, requires large amounts of training data that need to be updated regularly, making it less practical for the weaker attacker model typically assumed in WF. Moreover, research on WF attacks has been criticized for not demonstrating attack effectiveness under more realistic and more challenging scenarios. Most research on WF attacks assumes that the testing and training data have similar distributions and are collected from the same type of network at about the same time. In this paper, we examine how an attacker could leverage N-shot learning, a machine learning technique requiring just a few training samples to identify a given class, to reduce the effort of gathering and training with a large WF dataset as well as to mitigate the adverse effects of dealing with different network conditions. In particular, we propose a new WF attack called Triplet Fingerprinting (TF) that uses triplet networks for N-shot learning. We evaluate this attack in challenging settings, such as where the training and testing data are collected multiple years apart on different networks, and we find that the TF attack remains effective in such settings with 85% accuracy or better. We further show that the TF attack is effective in the open world and outperforms traditional transfer learning. On top of that, the attack requires only five examples to recognize a website, making it dangerous in a wide variety of scenarios where gathering and training on a complete dataset would be impractical.
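
The heart of a triplet network is its loss: embeddings of two traces of the same website (anchor and positive) are pulled together while a trace of a different website (negative) is pushed at least a margin away. The sketch below shows this generic triplet loss only; the embedding dimension, margin, and toy vectors are placeholders rather than the paper's architecture.

    import numpy as np

    def triplet_loss(anchor, positive, negative, margin: float = 1.0) -> float:
        """Standard triplet loss: max(0, d(a, p)^2 - d(a, n)^2 + margin)."""
        d_ap = np.sum((anchor - positive) ** 2)   # squared distance anchor-positive
        d_an = np.sum((anchor - negative) ** 2)   # squared distance anchor-negative
        return float(max(0.0, d_ap - d_an + margin))

    a = np.array([0.1, 0.9])   # anchor embedding
    p = np.array([0.2, 0.8])   # same-website trace
    n = np.array([0.9, 0.1])   # different-website trace
    print(triplet_loss(a, p, n))   # 0.0 -- the margin is already satisfied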

DeMiCPU: Device Fingerprinting with Magnetic Signals Radiated by CPU

  • Yushi Cheng
  • Xiaoyu Ji
  • Juchuan Zhang
  • Wenyuan Xu
  • Yi-Chao Chen

With the widespread use of smart devices, device authentication has received much attention. One popular method for device authentication is to utilize internally-measured device fingerprints, such as device IDs and software- or hardware-based characteristics. In this paper, we propose DeMiCPU, a stimulation-response-based device fingerprinting technique that relies on externally-measured information, i.e., magnetic induction (MI) signals emitted from the CPU module, which consists of the CPU chip and its affiliated power supply circuits. The key insight of DeMiCPU is that hardware discrepancies inherently exist among CPU modules, and thus the corresponding MI signals make promising device fingerprints that are difficult to modify or mimic. We design a stimulation and a discrepancy extraction scheme and evaluate them with 90 mobile devices, including 70 laptops (30 of which have identical CPU models and operating systems) and 20 smartphones. The results show that DeMiCPU can achieve 99.1% precision and recall on average, and 98.6% precision and recall for the 30 identical devices, with a fingerprinting time of 0.6 s. In addition, the performance can be further improved to 99.9% with multi-round fingerprinting.

SESSION: Session 6A: Biometrics Security

Multisketches: Practical Secure Sketches Using Off-the-Shelf Biometric Matching Algorithms

  • Rahul Chatterjee
  • M. Sadegh Riazi
  • Tanmoy Chowdhury
  • Emanuela Marasco
  • Farinaz Koushanfar
  • Ari Juels

Biometric authentication is increasingly being used for large-scale human authentication and identification, creating the risk of leaking the biometric secrets of millions of users in the case of database compromise. Powerful “fuzzy” cryptographic techniques for biometric template protection, such as secure sketches, could help in principle, but go unused in practice. This is because they would require new biometric matching algorithms with potentially much diminished accuracy. We introduce a new primitive called a multisketch that generalizes secure sketches. Multisketches can work with existing biometric matching algorithms to generate strong cryptographic keys from biometric data reliably. A multisketch works on a biometric database containing multiple biometrics (e.g., multiple fingerprints) of a moderately large population of users (say, thousands). It conceals the correspondence between users and their biometric templates, preventing an attacker from learning the biometric data of a user in the event of a breach, but enabling derivation of user-specific secret keys upon successful user authentication. We design a multisketch over tenprints (fingerprints of ten fingers) called TenSketch. We report on a prototype implementation of TenSketch, showing its feasibility in practice. We explore several possible attacks against the TenSketch database and show, via simulations with real tenprint datasets, that an attacker must perform a large amount of computation to learn any meaningful information from a stolen TenSketch database.

28 Blinks Later: Tackling Practical Challenges of Eye Movement Biometrics

  • Simon Eberz
  • Giulio Lovisotto
  • Kasper B. Rasmussen
  • Vincent Lenders
  • Ivan Martinovic

In this work we address three overlooked practical challenges of continuous authentication systems based on eye movement biometrics: (i) changes in lighting conditions, (ii) task-dependent features, and (iii) the need for an accurate calibration phase. We collect eye movement data from 22 participants. To measure the effect of the three challenges, we collect data while varying the experimental conditions: users perform four different tasks, lighting conditions change over the course of the session, and we collect data related to both accurate (user-specific) and inaccurate (generic) calibrations. To address changing lighting conditions, we identify the two main sources of light, i.e., screen brightness and ambient light, and we propose a pupil diameter correction mechanism based on these. We find that such a mechanism can accurately adjust for the pupil shrinking or expanding in relation to the varying amount of light reaching the eye. To account for inaccurate calibrations, we augment the previously known feature set with new features based on binocular tracking, where the left and the right eye are tracked separately. We show that these features can be extremely distinctive even when using a generic calibration. We further apply a cross-task mapping function based on population data which systematically accounts for the dependency of features on tasks (e.g., reading a text and browsing a website lead to different eye movement dynamics). Using these enhancements, even while relaxing assumptions about the experimental conditions, we show that our system achieves significantly lower error rates compared to previous work. For intra-task authentication, without user-specific calibration and in variable screen brightness and ambient lighting, we achieve an equal error rate of 3.93% with only two minutes of training data. For the same setup but with constant screen brightness (e.g., as for a reading task) we can achieve equal error rates as low as 1.88%.
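
A minimal sketch of the kind of light-based pupil correction described above, assuming a simple linear model: fit the pupil diameter against screen brightness and ambient light, then keep only the residual as the light-independent signal. The synthetic data and the linear form are illustrative assumptions, not the paper's calibrated correction mechanism.

```python
import numpy as np

# Hypothetical recordings: pupil diameter (mm) plus screen brightness and
# ambient light readings (arbitrary units) over one session.
rng = np.random.default_rng(1)
screen = rng.uniform(0.0, 1.0, 500)
ambient = rng.uniform(0.0, 1.0, 500)
pupil = 4.0 - 1.2 * screen - 0.8 * ambient + rng.normal(0.0, 0.05, 500)

# Least-squares fit of a linear light model; the residual approximates the
# light-independent component of the pupil diameter.
X = np.column_stack([np.ones_like(screen), screen, ambient])
coef, *_ = np.linalg.lstsq(X, pupil, rcond=None)
corrected = pupil - X @ coef
print(coef.round(2), round(float(corrected.std()), 3))
```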

Velody: Nonlinear Vibration Challenge-Response for Resilient User Authentication

  • Jingjie Li
  • Kassem Fawaz
  • Younghyun Kim

Biometrics have been widely adopted for enhancing user authentication, benefiting usability by exploiting pervasive and collectible unique characteristics from physiological or behavioral traits of humans. However, successful attacks on “static” biometrics such as fingerprints have been reported, where an adversary stealthily acquires users’ biometrics and compromises these non-resilient traits.

To mitigate the vulnerabilities of static biometrics, we leverage the unique and nonlinear hand-surface vibration response and design a system called Velody to defend against various attacks, including replay and synthesis. The Velody system relies on two major properties of hand-surface vibration responses: uniqueness, contributed by the physiological characteristics of human hands, and nonlinearity, whose complexity prevents attackers from predicting the response to an unseen challenge. Velody employs a challenge-response protocol. By changing the vibration challenge, the system elicits input-dependent nonlinear “symptoms” and unique spectrotemporal features in the vibration response, stopping both replay and synthesis attacks. Also, a large number of disposable challenge-response pairs can be passively collected during enrollment for daily authentication sessions.

We build a prototype of Velody with an off-the-shelf vibration speaker and accelerometers to verify its usability and security through a comprehensive user experiment. Our results show that Velody demonstrates both strong security and long-term consistency, with a low equal error rate (EER) of 5.8% against impersonation attacks while correctly rejecting all other attacks, including replay and synthesis attacks, using a very short vibration challenge.

The Catcher in the Field: A Fieldprint based Spoofing Detection for Text-Independent Speaker Verification

  • Chen Yan
  • Yan Long
  • Xiaoyu Ji
  • Wenyuan Xu

Verifying the identity of voice inputs is important as voices are increasingly used for sensitive operations. Traditional methods focus on differentiating individuals via the spectrographic features of voices (e.g., voiceprint), yet cannot cope with spoofing attacks, whereby a malicious attacker synthesizes a voice with almost the same voiceprint as the victim’s or simply replays it. This paper proposes CaField, a text-independent speaker verification method to detect loudspeaker-based voice spoofing attacks with the goal of achieving two seemingly conflicting requirements: usability and security. The key insight of CaField is to construct a “fieldprint” from the acoustic biometrics embedded in sound fields, i.e., the physical field of acoustic energy created as the sound propagates over the air, analogous to a “voiceprint”. We find that fieldprints can be distinctive between speakers (either humans or loudspeakers), and thus we can distinguish the loudspeakers used for spoofing attacks from authentic users. Our evaluation on a dataset of 20 people and 8 loudspeakers shows that by relying on two on-board microphones to sample sound fields while users talk to their smartphones, CaField achieves a detection accuracy of 99.16% and an equal error rate (EER) of 0.85% across multiple sessions and various voice inputs. CaField supports audio sample rates as low as 8 kHz and is robust to various factors including phone displacement, user posture, and recording environment.

SESSION: Session 6B: ML Security II

QUOTIENT: Two-Party Secure Neural Network Training and Prediction

  • Nitin Agrawal
  • Ali Shahin Shamsabadi
  • Matt J. Kusner
  • Adrià Gascón

Recently, there has been a wealth of effort devoted to the design of secure protocols for machine learning tasks. Much of this is aimed at enabling secure prediction from highly accurate Deep Neural Networks (DNNs). However, as DNNs are trained on data, a key question is how such models can also be trained securely. The few prior works on secure DNN training have focused either on designing custom protocols for existing training algorithms, or on developing tailored training algorithms and then applying generic secure protocols. In this work, we investigate the advantages of designing training algorithms alongside a novel secure protocol, incorporating optimizations on both fronts. We present QUOTIENT, a new method for discretized training of DNNs, along with a customized secure two-party protocol for it. QUOTIENT incorporates key components of state-of-the-art DNN training such as layer normalization and adaptive gradient methods, and improves upon the state-of-the-art in DNN training in two-party computation. Compared to prior work, we obtain an improvement of 50X in WAN time and 6% in absolute accuracy.

Quantitative Verification of Neural Networks and Its Security Applications

  • Teodora Baluta
  • Shiqi Shen
  • Shweta Shinde
  • Kuldeep S. Meel
  • Prateek Saxena

Neural networks are increasingly employed in safety-critical domains. This has prompted interest in verifying or certifying logically encoded properties of neural networks. Prior work has largely focused on checking existential properties, wherein the goal is to check whether there exists any input that violates a given property of interest. However, neural network training is a stochastic process, and many questions arising in their analysis require probabilistic and quantitative reasoning, i.e., estimating how many inputs satisfy a given property. To this end, our paper proposes a novel and principled framework for the quantitative verification of logical properties specified over neural networks. Our framework is the first to provide PAC-style soundness guarantees, in that its quantitative estimates are within a controllable and bounded error from the true count. We instantiate our algorithmic framework by building a prototype tool called NPAQ that enables checking rich properties over binarized neural networks. We show how emerging security analyses can utilize our framework in 3 applications: quantifying robustness to adversarial inputs, the efficacy of trojan attacks, and the fairness/bias of given neural networks.
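
NPAQ itself reduces counting to approximate model counting over SAT encodings of binarized networks; the toy sketch below only conveys the flavor of a PAC-style (eps, delta) estimate obtained by sampling, where a Hoeffding bound fixes the number of samples. The example property and input sampler are made up and are not from the paper.

```python
import math
import random

def pac_estimate(property_holds, sample_input, eps=0.05, delta=0.01):
    # Hoeffding bound: n samples give an estimate within +/- eps of the true
    # satisfaction probability, except with probability at most delta.
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(property_holds(sample_input()) for _ in range(n))
    return hits / n

# Toy property over random 8-bit inputs: "the top bit is set" (true value 0.5).
estimate = pac_estimate(lambda x: x >= 128, lambda: random.randrange(256))
print(estimate)
```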

ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation

  • Yingqi Liu
  • Wen-Chuan Lee
  • Guanhong Tao
  • Shiqing Ma
  • Yousra Aafer
  • Xiangyu Zhang

This paper presents a technique to scan neural-network-based AI models to determine whether they are trojaned. Pre-trained AI models may contain back-doors that are injected through training or by transforming inner neuron weights. These trojaned models operate normally when regular inputs are provided, and misclassify to a specific output label when the input is stamped with a special pattern called a trojan trigger. We develop a novel technique that analyzes inner neuron behaviors by determining how output activations change when we introduce different levels of stimulation to a neuron. Neurons that substantially elevate the activation of a particular output label regardless of the provided input are considered potentially compromised. The trojan trigger is then reverse-engineered through an optimization procedure using the stimulation analysis results, to confirm that a neuron is truly compromised. We evaluate our system, ABS, on 177 models trojaned with various attack methods that target both the input space and the feature space, with various trojan trigger sizes and shapes, together with 144 benign models trained with different data and initial weight values. These models belong to 7 different model structures and 6 different datasets, including complex ones such as ImageNet, VGG-Face, and ResNet110. Our results show that ABS is highly effective, achieving a detection rate of over 90% in most cases (and 100% in many), when only one input sample is provided for each output label. It substantially outperforms the state-of-the-art technique Neural Cleanse, which requires many input samples and small trojan triggers to achieve good performance.
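
The sketch below captures only the core stimulation idea from the abstract: sweep one neuron's activation and check whether a single output label starts to win for (almost) every input. The forward_with_neuron hook, the stimulation levels, and the 90% dominance threshold are assumptions for illustration; ABS additionally reverse-engineers the trigger to confirm a candidate.

```python
import numpy as np

def suspicious_label(forward_with_neuron, inputs, levels=np.linspace(0.0, 10.0, 11)):
    # forward_with_neuron(x, v) is an assumed hook: run the model on input x with
    # the candidate neuron's activation overridden to v and return output logits.
    for v in levels:
        preds = [int(np.argmax(forward_with_neuron(x, v))) for x in inputs]
        counts = np.bincount(preds)
        label, share = int(np.argmax(counts)), counts.max() / len(preds)
        if v > 0 and share >= 0.9:   # one label dominates regardless of the input
            return label
    return None

# Toy stand-in model: output 3's logit comes from the stimulated neuron,
# the other logits come from the input itself.
rng = np.random.default_rng(0)
toy_inputs = list(rng.normal(size=(20, 4)))
toy_model = lambda x, v: np.array([x[0], x[1], x[2], v, x[3]])
print(suspicious_label(toy_model, toy_inputs))   # expected: 3
```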

Lifelong Anomaly Detection Through Unlearning

  • Min Du
  • Zhi Chen
  • Chang Liu
  • Rajvardhan Oak
  • Dawn Song

Anomaly detection is essential for ensuring system security and reliability. Powered by constantly generated system data, deep learning has been found to be both effective and flexible, with its ability to extract patterns without much domain knowledge. Existing anomaly detection research focuses on a scenario referred to as zero-positive, which means that the detection model is trained only on normal (i.e., negative) data. In a real application scenario, additional manually inspected positive data may become available after the system is deployed. We refer to this scenario as lifelong anomaly detection. However, we find that existing approaches cannot easily incorporate such new knowledge to improve system performance. In this work, we are the first to explore the lifelong anomaly detection problem, and we propose novel approaches to handle the corresponding challenges. In particular, we propose a framework called unlearning, which can effectively correct the model when a false negative (or a false positive) is labeled. To this end, we develop several novel techniques to tackle two challenges, referred to as exploding loss and catastrophic forgetting. In addition, we abstract a theoretical framework based on generative models. Under this framework, our unlearning approach can be presented in a generic way and applied to most zero-positive deep learning-based anomaly detection algorithms to turn them into corresponding lifelong anomaly detection solutions. We evaluate our approach using two state-of-the-art zero-positive deep learning anomaly detection architectures and three real-world tasks. The results show that the proposed approach is able to significantly reduce the number of false positives and false negatives through unlearning.

SESSION: Session 6C: Secure Computing VI

Transparency Logs via Append-Only Authenticated Dictionaries

  • Alin Tomescu
  • Vivek Bhupatiraju
  • Dimitrios Papadopoulos
  • Charalampos Papamanthou
  • Nikos Triandopoulos
  • Srinivas Devadas

Transparency logs allow users to audit a potentially malicious service, paving the way towards a more accountable Internet. For example, Certificate Transparency (CT) enables domain owners to audit Certificate Authorities (CAs) and detect impersonation attacks. Yet, to achieve their full potential, transparency logs must be bandwidth-efficient when queried by users. Specifically, everyone should be able to efficiently look up log entries by their key and efficiently verify that the log remains append-only. Unfortunately, without additional trust assumptions, current transparency logs cannot provide both small-sized lookup proofs and small-sized append-only proofs. In fact, one of the proofs always requires bandwidth linear in the size of the log, making it expensive for everyone to query the log. In this paper, we address this gap with a new primitive called an append-only authenticated dictionary (AAD). Our construction is the first to achieve (poly)logarithmic size for both proof types and helps reduce bandwidth consumption in transparency logs. This comes at the cost of increased append times and high memory usage, both of which remain to be improved to make practical deployment possible.

Probabilistic Data Structures in Adversarial Environments

  • David Clayton
  • Christopher Patton
  • Thomas Shrimpton

Probabilistic data structures use space-efficient representations of data in order to (approximately) respond to queries about the data. Traditionally, these structures are accompanied by probabilistic bounds on query-response errors. These bounds implicitly assume benign attack models, in which the data and the queries are chosen non-adaptively, and independently of the randomness used to construct the representation. Yet probabilistic data structures are increasingly used in settings where these assumptions may be violated. This work provides a provable security treatment of probabilistic data structures in adversarial environments. We give a syntax that captures a wide variety of in-use structures, and our security notions support the development of error bounds in the presence of powerful attacks. Concretely, we primarily focus on examining the widely used Bloom filter, but also consider counting (Bloom) filters and count-min sketch data structures. For the traditional versions of these, our security findings are largely negative; however, we show that simple embellishments (e.g., using salts or secret keys) yield structures that provide provable security, with little overhead.
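
As a rough illustration of the keyed embellishment mentioned above (not the paper's exact construction or security model), the sketch below derives Bloom filter indices with HMAC under a secret key, so that an adversary without the key cannot aim inputs at specific bits. The parameters m_bits and k are arbitrary example choices.

```python
import hashlib
import hmac

class KeyedBloomFilter:
    def __init__(self, key: bytes, m_bits: int = 1 << 16, k: int = 4):
        self.key, self.m, self.k = key, m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _indices(self, item: bytes):
        # Derive k indices from HMAC-SHA256 under the secret key, so the bit
        # positions an item sets are unpredictable without the key.
        for i in range(self.k):
            digest = hmac.new(self.key, bytes([i]) + item, hashlib.sha256).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes):
        for idx in self._indices(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[idx // 8] & (1 << (idx % 8)) for idx in self._indices(item))

bf = KeyedBloomFilter(key=b"secret key")
bf.add(b"example.com")
print(b"example.com" in bf, b"evil.com" in bf)   # True, (almost certainly) False
```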

Make Some ROOM for the Zeros: Data Sparsity in Secure Distributed Machine Learning

  • Phillipp Schoppmann
  • Adrià Gascón
  • Mariana Raykova
  • Benny Pinkas

Exploiting data sparsity is crucial for the scalability of many data analysis tasks. However, while there is an increasing interest in efficient secure computation protocols for distributed machine learning, data sparsity has so far not been considered in a principled way in that setting.

We propose sparse data structures together with their corresponding secure computation protocols to address common data analysis tasks while utilizing data sparsity. In particular, we define a Read-Only Oblivious Map primitive (ROOM) for accessing elements in sparse structures, and present several instantiations of this primitive with different trade-offs. Then, using ROOM as a building block, we propose protocols for basic linear algebra operations such as Gather, Scatter, and multiple variants of sparse matrix multiplication. Our protocols are easily composable by using secret sharing. We leverage this, at the highest level of abstraction, to build secure protocols for non-parametric models (k-nearest neighbors and naive Bayes classification) and parametric models (logistic regression) that enable secure analysis on high-dimensional datasets. The experimental evaluation of our protocol implementations demonstrates a manyfold improvement in efficiency over state-of-the-art techniques across all applications.

Our system is designed and built mirroring the modular architecture in scientific computing and machine learning frameworks, and inspired by the Sparse BLAS standard.
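
To make the primitive concrete, here is the plaintext functionality a ROOM computes, in the spirit of a sparse Gather: look up each query key in a sparse structure and fall back to a default for misses. This sketch deliberately ignores the secure-computation aspect (in the actual protocols the parties learn neither the queried keys nor which ones were present); the names and values are illustrative.

```python
def room_lookup(sparse_db: dict, default, queries):
    # Plaintext functionality of a Read-Only Oblivious Map (ROOM): return the
    # stored value for each query key, or the default if the key is absent.
    return [sparse_db.get(q, default) for q in queries]

# Gather over a sparse vector represented by its non-zero entries only.
sparse_vector = {3: 1.5, 17: -2.0, 42: 0.25}
print(room_lookup(sparse_vector, 0.0, queries=[3, 5, 42]))   # [1.5, 0.0, 0.25]
```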

PIEs: Public Incompressible Encodings for Decentralized Storage

  • Ethan Cecchetti
  • Ben Fisch
  • Ian Miers
  • Ari Juels

We present a new primitive supporting file replication in distributed storage networks (DSNs) called a Public Incompressible Encoding (PIE). PIEs operate in the challenging public DSN setting where files must be encoded and decoded with public randomness (i.e., without encryption) and retention of redundant data must be publicly verifiable. They prevent undetectable data compression, allowing DSNs to use monetary rewards or penalties in incentivizing economically rational servers to properly replicate data. Their definition also precludes critical, demonstrated attacks involving parallelism via ASICs and other custom hardware. Our PIE construction is the first to achieve experimentally validated near-optimal performance, within a factor of 4 of optimal by one metric. It also allows decoding orders of magnitude faster than encoding, unlike other comparable constructions. We achieve this high security and performance using a graph construction called a Dagwood Sandwich Graph (DSaG), built from a novel interleaving of depth-robust graphs and superconcentrators. PIEs’ performance makes them appealing for DSNs, such as the proposed Filecoin system and Ethereum data sharding. Conversely, their near-optimality establishes concerning bounds on the practical financial and energy costs of DSNs allowing arbitrary data.

SESSION: Session 6E: Passwords and Accounts

How to (not) Share a Password: Privacy Preserving Protocols for Finding Heavy Hitters with Adversarial Behavior

  • Moni Naor
  • Benny Pinkas
  • Eyal Ronen

Bad password choices have been, and remain, a pervasive problem. Users who choose weak passwords compromise not only themselves but the whole ecosystem. For example, common and default passwords in IoT devices have been exploited by hackers to create botnets and mount severe attacks on large Internet services, as in the Mirai botnet DDoS attack. We present a method to help protect the Internet from such large-scale attacks. Our method enables a server to identify popular passwords (heavy hitters) and publish a list of over-popular passwords that must be avoided. This filter ensures that no single password can be used to compromise a large percentage of the users. The list is dynamic and can be changed as new users are added or when current users change their passwords. We apply maliciously secure two-party computation and differential privacy to protect the users’ password privacy. Our solution does not require extra hardware or cost, and is transparent to the user. Our private heavy hitters construction is secure even against a malicious coalition of devices which tries to manipulate the protocol to hide the popularity of some password that the attacker is exploiting. It also ensures differential privacy under continual observation of the blacklist as it changes over time. As a reality check, we conducted three tests: we computed the guarantees that the system provides with respect to a few publicly available databases, ran full simulations on those databases, and implemented and analyzed a proof-of-concept on an IoT device. Our construction can also be used in other settings to privately learn heavy hitters in the presence of an active malicious adversary, for example, to learn the most popular sites accessed through the Tor network.
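
The sketch below illustrates only the publish-over-popular-passwords idea with a noisy threshold (Laplace noise obtained as the difference of two exponential draws); it is a plaintext toy, whereas the paper's construction runs the counting inside maliciously secure two-party computation and calibrates the noise for differential privacy under continual observation. The threshold and epsilon are arbitrary example values.

```python
import random
from collections import Counter

def popular_password_blacklist(passwords, threshold, epsilon=0.5):
    counts = Counter(passwords)
    blacklist = set()
    for pw, c in counts.items():
        # Laplace(1/epsilon) noise as the difference of two exponential draws.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        if c + noise > threshold:
            blacklist.add(pw)
    return blacklist

demo = ["123456"] * 50 + ["password"] * 40 + ["correct horse battery staple"] * 2
print(popular_password_blacklist(demo, threshold=20))
```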

Protocols for Checking Compromised Credentials

  • Lucy Li
  • Bijeeta Pal
  • Junade Ali
  • Nick Sullivan
  • Rahul Chatterjee
  • Thomas Ristenpart

To prevent credential stuffing attacks, industry best practice now proactively checks if user credentials are present in known data breaches. Recently, some web services, such as HaveIBeenPwned (HIBP) and Google Password Checkup (GPC), have started providing APIs to check for breached passwords. We refer to such services as compromised credential checking (C3) services. We give the first formal description of C3 services, detailing different settings and operational requirements, and we give relevant threat models. One key security requirement is the secrecy of a user’s passwords that are being checked. Current widely deployed C3 services have the user share a small prefix of a hash computed over the user’s password. We provide a framework for empirically analyzing the leakage of such protocols, showing that in some contexts knowing the hash prefixes leads to a 12x increase in the efficacy of remote guessing attacks. We propose two new protocols that provide stronger protection for users’ passwords, implement them, and show experimentally that they remain practical to deploy.
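
For intuition, here is the bucketized (hash-prefix) query pattern that the abstract says widely deployed C3 services use and that the paper's leakage analysis targets: the client reveals only a short prefix of the password hash and finishes the comparison locally. The breach corpus, the prefix length, and the local stand-in for the server are assumptions for the example, not details of any particular deployed service.

```python
import hashlib

BREACHED = {"password", "letmein", "123456"}   # stand-in breach corpus
PREFIX_LEN = 5                                 # hex characters revealed to the server

def server_bucket(prefix: str):
    # Server side: return the hash suffixes of all breached passwords in the bucket.
    suffixes = []
    for pw in BREACHED:
        h = hashlib.sha1(pw.encode()).hexdigest().upper()
        if h.startswith(prefix):
            suffixes.append(h[PREFIX_LEN:])
    return suffixes

def is_breached(password: str) -> bool:
    # Client side: reveal only the prefix, compare the suffix locally.
    h = hashlib.sha1(password.encode()).hexdigest().upper()
    return h[PREFIX_LEN:] in server_bucket(h[:PREFIX_LEN])

print(is_breached("letmein"), is_breached("a-long-random-unbreached-password"))
```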

User Account Access Graphs

  • Sven Hammann
  • Saša Radomirović
  • Ralf Sasse
  • David Basin

The primary authentication method for a user account is rarely the only way to access that account. Accounts can often be accessed through other accounts, using recovery methods, password managers, or single sign-on. This increases each account’s attack surface, giving rise to subtle security problems. These problems cannot be detected by considering each account in isolation, but require analyzing the links between a user’s accounts. Furthermore, to accurately assess the security of accounts, the physical world must also be considered. For example, an attacker with access to a physical mailbox could obtain credentials sent by post.

Despite the manifest importance of understanding these interrelationships and the security problems they entail, no prior methods exist to perform an analysis thereof in a precise way. To address this need, we introduce account access graphs, the first formalism that enables a comprehensive modeling and analysis of a user’s entire setup, incorporating all connections between the user’s accounts, devices, credentials, keys, and documents. Account access graphs support systematically identifying both security vulnerabilities and lockout risks in a user’s accounts. We give analysis algorithms and illustrate their effectiveness in a case study, where we automatically detect significant weaknesses in a user’s setup and suggest improvement options.
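
A toy fixpoint computation in the spirit of an account access graph is sketched below: each account lists access methods, each method is a set of items that must all be held, and we iterate until the attacker's set of reachable items stabilizes. The node names and edges are hypothetical and far simpler than the paper's formalism and analysis algorithms.

```python
def reachable(access_methods, initially_owned):
    # access_methods maps a target (account/credential) to a list of methods;
    # each method is a set of items that are all required to gain the target.
    owned, changed = set(initially_owned), True
    while changed:
        changed = False
        for target, methods in access_methods.items():
            if target not in owned and any(m <= owned for m in methods):
                owned.add(target)
                changed = True
    return owned

# Hypothetical setup: email can be recovered via a letter to the physical mailbox,
# and the bank account can be recovered through the email account.
graph = {
    "email": [{"email_password"}, {"mailbox"}],
    "bank":  [{"bank_password", "phone"}, {"email"}],
}
print(reachable(graph, initially_owned={"mailbox"}))   # mailbox -> email -> bank
```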

Detecting Fake Accounts in Online Social Networks at the Time of Registrations

  • Dong Yuan
  • Yuanli Miao
  • Neil Zhenqiang Gong
  • Zheng Yang
  • Qi Li
  • Dawn Song
  • Qian Wang
  • Xiao Liang

Online social networks are plagued by fake information. In particular, using a massive number of fake accounts (also called Sybils), an attacker can disrupt the security and privacy of benign users by spreading spam, malware, and disinformation. Existing Sybil detection methods rely on rich content, behavior, and/or social graphs generated by Sybils. The key limitation of these methods is that they incur significant delays in catching Sybils, i.e., Sybils may have already performed many malicious activities by the time they are detected. In this work, we propose Ianus, a Sybil detection method that leverages account registration information. Ianus aims to catch Sybils immediately after they are registered. First, using a real-world registration dataset with labeled Sybils from WeChat (the largest online social network in China), we perform a measurement study to characterize the registration patterns of Sybils and benign users. We find that Sybils tend to have synchronized and abnormal registration patterns. Second, based on our measurement results, we model Sybil detection as a graph inference problem, which allows us to integrate heterogeneous features. In particular, we extract synchronization- and anomaly-based features for each pair of accounts, use the features to build a graph in which Sybils are densely connected with each other while a benign user is isolated or sparsely connected with other benign users and Sybils, and finally detect Sybils by analyzing the structure of the graph. We evaluate Ianus using real-world registration datasets from WeChat. Moreover, WeChat has deployed Ianus on a daily basis, i.e., WeChat uses Ianus to analyze newly registered accounts each day and detect Sybils. Via manual verification by the WeChat security team, we find that Ianus can detect around 400K Sybils per million newly registered accounts each day and achieve a precision of over 96% on average.

SESSION: Session 7A: Internet of Things

Charting the Attack Surface of Trigger-Action IoT Platforms

  • Qi Wang
  • Pubali Datta
  • Wei Yang
  • Si Liu
  • Adam Bates
  • Carl A. Gunter

Internet of Things (IoT) deployments are becoming increasingly automated and vastly more complex. Facilitated by programming abstractions such as trigger-action rules, end-users can now easily create new functionalities by interconnecting their devices and other online services. However, when multiple rules are simultaneously enabled, complex system behaviors arise that are difficult to understand or diagnose. While history tells us that such conditions are ripe for exploitation, at present the security states of trigger-action IoT deployments are largely unknown. In this work, we conduct a comprehensive analysis of the interactions between trigger-action rules in order to identify their security risks. Using IFTTT as an exemplar platform, we first enumerate the space of inter-rule vulnerabilities that exist within trigger-action platforms. To aid users in the identification of these dangers, we go on to present iRuler, a system that performs Satisfiability Modulo Theories (SMT) solving and model checking to discover inter-rule vulnerabilities within IoT deployments. iRuler operates over an abstracted information flow model that represents the attack surface of an IoT deployment, but we discover in practice that such models are difficult to obtain given the closed nature of IoT platforms. To address this, we develop methods that assist in inferring trigger-action information flows based on Natural Language Processing. We develop a novel evaluative methodology for approximating plausible real-world IoT deployments based on the installation counts of 315,393 IFTTT applets, determining that 66% of the synthetic deployments in the IFTTT ecosystem exhibit the potential for inter-rule vulnerabilities. Combined, these efforts provide insight into the real-world dangers of IoT deployment misconfigurations.
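
iRuler itself works on an abstract information-flow model with SMT solving and model checking; the toy sketch below only conveys the intuition behind one class of inter-rule problems, namely rule chains that feed back into themselves, by searching the trigger-to-action graph for a cycle. The applet names are made up.

```python
def find_rule_loop(rules):
    # rules is a list of (trigger_event, action_event) pairs; return one cycle
    # of events (a chain of rules that re-triggers itself) or None.
    graph = {}
    for trigger, action in rules:
        graph.setdefault(trigger, []).append(action)

    def dfs(node, path):
        if node in path:
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node])
            if cycle:
                return cycle
        return None

    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

# Hypothetical applets: motion turns the light on, the light starts a recording,
# and the recording's notification re-triggers the motion rule.
print(find_rule_loop([("motion", "light_on"),
                      ("light_on", "record"),
                      ("record", "motion")]))
```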

Peeves: Physical Event Verification in Smart Homes

  • Simon Birnbach
  • Simon Eberz
  • Ivan Martinovic

With the rising availability of smart devices (e.g., smart thermostats, lights, locks, etc.), they are increasingly combined into “smart homes”. A key component of smart homes is event sensors that report physical events (such as doors opening or the light turning on), which can be triggered automatically by the system or manually by the user. However, data from these sensors are not always trustworthy. Both faults in the event sensors and involvement of active attackers can lead to reporting of events that did not physically happen (event spoofing). This is particularly critical, as smart homes can trigger event chains (e.g., turning the radiator off when a window is opened) without involvement of the user. The goal of this paper is to verify physical events using data from an ensemble of sensors (such as accelerometers or air pressure sensors) that are commonly found in smart homes. This approach protects against both event sensor faults and sophisticated attackers. In order to validate our system’s performance, we set up a “smart home” in an office environment. We recognize 22 event types using 48 sensors over the course of two weeks. Using data from the physical sensors, we verify the event stream supplied by the event sensors. We consider two threat models: a zero-effort attacker who spoofs events at arbitrary times and an opportunistic attacker who has access to a live stream of sensor data to better time their attack. We achieve perfect classification for 9 out of 22 events and a 0% false alarm rate at a detection rate exceeding 99.9% for 15 events. We also show that even a strong opportunistic attacker is inherently limited to spoofing a few select events and that doing so involves lengthy waiting periods.

Automatic Fingerprinting of Vulnerable BLE IoT Devices with Static UUIDs from Mobile Apps

  • Chaoshun Zuo
  • Haohuang Wen
  • Zhiqiang Lin
  • Yinqian Zhang

Being an easy-to-deploy and cost-effective low-power wireless solution, Bluetooth Low Energy (BLE) has been widely used by Internet-of-Things (IoT) devices. In a typical IoT scenario, an IoT device first needs to be connected with its companion mobile app, which serves as a gateway for its Internet access. To establish a connection, a device first broadcasts advertisement packets with UUIDs to nearby smartphone apps. Leveraging these UUIDs, a companion app is able to identify the device, pair and bond with it, and carry out further data communication. However, we show that there is a fundamental flaw in the current design and implementation of the communication protocols between a BLE device and its companion mobile app, which allows an attacker to precisely fingerprint a BLE device using the static UUIDs extracted from the apps. Meanwhile, we also discover that many BLE IoT devices adopt “just works” pairing, allowing attackers to actively connect with these devices if there is no app-level authentication. Even worse, this vulnerability can also be directly uncovered from mobile apps. Furthermore, we identify an alarming number of apps with vulnerable app-level authentication, which means the devices connected by these apps can be directly controlled by attackers. To raise public awareness of IoT device fingerprinting and to uncover these vulnerable BLE IoT devices before attackers do, we develop an automated mobile app analysis tool, BLESCOPE, and evaluate it on all of the free BLE IoT apps in the Google Play store. Our tool has identified 1,757 vulnerable mobile apps in total. We also performed a field test in a 1.28-square-mile region and identified 5,822 real BLE devices, of which 5,509 (94.6%) are fingerprintable by attackers and 431 (7.4%) are vulnerable to unauthorized access. We have made responsible disclosures to the corresponding app developers, and also reported the fingerprinting issues to the Bluetooth Special Interest Group.
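
As a simplified illustration of the first analysis step (not the BLESCOPE implementation), the snippet below pulls candidate UUID literals out of decompiled app source with a regular expression; a real analysis must additionally track which of these constants actually flow into the BLE APIs. The sample string is hypothetical.

```python
import re

UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}")

def extract_uuids(decompiled_source: str):
    # Collect the distinct UUID string literals hard-coded in the app code.
    return sorted(set(UUID_RE.findall(decompiled_source)))

sample = 'gatt.getService(UUID.fromString("0000180f-0000-1000-8000-00805f9b34fb"));'
print(extract_uuids(sample))
```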

SESSION: Session 7B: Blockchain III

Balance: Dynamic Adjustment of Cryptocurrency Deposits

  • Dominik Harz
  • Lewis Gudgeon
  • Arthur Gervais
  • William J. Knottenbelt

Financial deposits are fundamental to the security of cryptoeconomic protocols as they serve as insurance against potential misbehaviour of agents. However, protocol designers and their agents face a trade-off when choosing the deposit size. While substantial deposits might increase the protocol security, for example by minimising the impact of adversarial behaviour or risks of currency fluctuations, locked-up capital incurs opportunity costs. Moreover, some protocols require over-collateralization in anticipation of future events and malicious intentions of agents. We present Balance, an application-agnostic system that reduces over-collateralization without compromising protocol security. In Balance, malicious agents receive no additional utility for cheating once their deposits are reduced. At the same time, honest and rational agents increase their utilities for behaving honestly as their opportunity costs for the locked-up deposits are reduced. Balance is a round-based mechanism in which agents need to continuously perform desired actions. Rather than treating agents’ incentives and behaviour as ancillary, we explicitly model agents’ utility, proving the conditions for incentive compatibility. Balance improves social welfare given a distribution of honest, rational, and malicious agents. Further, we integrate Balance with a cross-chain interoperability protocol, XCLAIM, reducing deposits by 10% while maintaining the same utility for behaving honestly. Our implementation can maintain any number of agents, requiring at most 55,287 gas (~ USD 0.07) to update all agents’ scores and 54,948 gas (~ USD 0.07) to update the assignment of all agents to layers.

TokenScope: Automatically Detecting Inconsistent Behaviors of Cryptocurrency Tokens in Ethereum

  • Ting Chen
  • Yufei Zhang
  • Zihao Li
  • Xiapu Luo
  • Ting Wang
  • Rong Cao
  • Xiuzhuo Xiao
  • Xiaosong Zhang

Motivated by the success of Bitcoin, many cryptocurrencies have been created, the majority of which are implemented as smart contracts running on Ethereum and are called tokens. To regulate the interaction between these tokens and users as well as third-party tools (e.g., wallets, exchange markets, etc.), several standards have been proposed for the implementation of token contracts. Although existing tokens involve large amounts of money, little is known about whether their behaviors are consistent with the standards. Inconsistent behaviors can lead to user confusion and financial loss, because users and third-party tools interact with token contracts by invoking standard interfaces and listening to standard events. In this work, we take the first step towards investigating such inconsistent token behaviors with regard to ERC-20, the most popular token standard. We propose a novel approach to automatically detect such inconsistency by contrasting the behaviors derived from three different sources: the manipulations of the core data structures recording the token holders and their shares, the actions indicated by standard interfaces, and the behaviors suggested by standard events. We implement our approach in a new tool named TokenScope and use it to inspect all transactions sent to the deployed tokens. We detected 3,259,001 transactions that trigger inconsistent behaviors, stemming from 7,472 tokens. By manually examining all (2,353) open-source tokens with inconsistent behaviors, we found that the precision of TokenScope is above 99.9%. Moreover, we revealed 11 major reasons behind the inconsistency, e.g., flawed tokens, missing standard methods, and lack of standard events. In particular, we discovered 50 unreported flawed tokens.
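
A toy version of one of the cross-checks is sketched below: compare, per transaction, the net balance changes recovered from the token's core data structures against the amounts announced by its standard Transfer events, and flag any mismatch. TokenScope additionally contrasts the standard interfaces and works on real execution traces; the per-transaction summaries here are hypothetical.

```python
def inconsistent_transactions(balance_deltas, transfer_events):
    # Both arguments map a transaction hash to {address: net token change} as
    # implied by that source of information; any disagreement is flagged.
    return [tx for tx, deltas in balance_deltas.items()
            if deltas != transfer_events.get(tx, {})]

# Hypothetical traces: the second transaction moved 100 tokens in storage but
# emitted no Transfer event, so it would be reported as inconsistent.
deltas = {"0xaaa": {"alice": -50, "bob": 50}, "0xbbb": {"alice": -100, "carol": 100}}
events = {"0xaaa": {"alice": -50, "bob": 50}}
print(inconsistent_transactions(deltas, events))   # ['0xbbb']
```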

Tesseract: Real-Time Cryptocurrency Exchange Using Trusted Hardware

  • Iddo Bentov
  • Yan Ji
  • Fan Zhang
  • Lorenz Breidenbach
  • Philip Daian
  • Ari Juels

We propose Tesseract, a secure real-time cryptocurrency exchange service. Existing centralized exchange designs are vulnerable to theft of funds, while decentralized exchanges cannot offer real-time cross-chain trades. All currently deployed exchanges are also vulnerable to frontrunning attacks. Tesseract overcomes these flaws and achieves a best-of-both-worlds design by using a trusted execution environment. The task of committing the recent trade data to independent cryptocurrency systems presents an all-or-nothing fairness problem, to which we present ideal theoretical solutions, as well as practical solutions. Tesseract supports not only real-time cross-chain cryptocurrency trades, but also secure tokenization of assets pegged to cryptocurrencies. For instance, Tesseract-tokenized bitcoins can circulate on the Ethereum blockchain for use in smart contracts. We provide a demo implementation of Tesseract that supports Bitcoin, Ethereum, and similar cryptocurrencies.

SESSION: Session 7C: Secure Computing V

Efficient MPC via Program Analysis: A Framework for Efficient Optimal Mixing

  • Muhammad Ishaq
  • Ana L. Milanova
  • Vassilis Zikas

Multi-party computation (MPC) protocols have been extensively optimized in an effort to bring this technology to practice, which has already started to bear fruit. The choice of which MPC protocol to use depends on the computation we are trying to perform. Protocol mixing, which is black-box with respect to the MPC protocols, is an effective approach to optimizing performance. Despite considerable progress in recent years, however, existing works are heuristic and either give no guarantees or require an exponential (brute-force) search to find the optimal assignment, a problem conjectured to be NP-hard.

We provide a theoretically founded approach to optimal (MPC) protocol assignment, i.e., optimal mixing, and prove that under mild and natural assumptions, the problem is tractable both in theory and in practice for computing best two-out-of-three combinations. Concretely, for the case of two protocols, we utilize program analysis techniques, which we tailor to MPC, to define a new integer program, which we term the Optimal Protocol Assignment (OPA) problem, whose solution is the optimal (mixed) protocol assignment for these two protocols. Most importantly, we prove that the solution to the linear program corresponding to the relaxation of OPA is integral, and hence is also a solution to OPA. Since linear programming can be solved efficiently, this yields the first efficient protocol mixer. We showcase the quality of our OPA solver by applying it to standard benchmarks from the mixing literature. Our OPA solver can be applied to any two-out-of-three protocol combination to obtain a best two-out-of-three protocol assignment.

Two-Thirds Honest-Majority MPC for Malicious Adversaries at Almost the Cost of Semi-Honest

  • Jun Furukawa
  • Yehuda Lindell

Secure multiparty computation (MPC) enables a set of parties to securely carry out a joint computation of their private inputs without revealing anything but the output. Protocols for semi-honest adversaries guarantee security as long as the corrupted parties run the specified protocol, ensuring that nothing is leaked by the transcript. In contrast, protocols for malicious adversaries guarantee security in the presence of arbitrary adversaries who can run any attack strategy. Security for malicious adversaries is typically what is needed in practice (and is always preferred), but comes at a significant cost. In this paper, we present the first protocol for a two-thirds honest majority that achieves security in the presence of malicious adversaries at essentially the same cost as the best known protocols for semi-honest adversaries. Our construction is not a general transformation, and thus it is possible that better semi-honest protocols will be constructed which do not support our transformation. Nevertheless, for the current state of the art for many parties (based on Shamir sharing), our protocol invokes the best semi-honest multiplication protocol exactly once per multiplication gate (plus some additional local computation that is negligible relative to the overall cost). Concretely, the best version of our protocol requires each party to send an average of just 2 2/3 elements per multiplication gate (when the number of multiplication gates is at least the number of parties). This is four times faster than the previous-best protocol of Barak et al. (ACM CCS 2018) for small fields, and twice as fast as the previous-best protocol of Chida et al. (CRYPTO 2018) for large fields.

Fast Actively Secure Five-Party Computation with Security Beyond Abort

  • Megha Byali
  • Carmit Hazay
  • Arpita Patra
  • Swati Singla

Secure multi-party computation (MPC) with a small population and an honest majority has drawn particular attention due to the customized techniques and resulting efficiency that such constructions can offer. In this work, we investigate a wide range of security notions in the five-party setting, tolerating two active corruptions. Being constant-round, our protocols are best suited for real-time, high-latency networks such as the Internet. In a minimal setting of pairwise-private channels, we present efficient instantiations with unanimous abort (where either all honest parties obtain the output or none of them do) and fairness (where the adversary obtains its output only if all honest parties also receive it). With the presence of an additional broadcast channel (known to be necessary), we present a construction with guaranteed output delivery (where adversarial behaviour cannot prevent the honest parties from receiving the output). The broadcast communication is minimal and independent of circuit size. In terms of performance (communication and run time), our protocols incur minimal overhead over the best known protocol of Chandran et al. (ACM CCS 2016), which achieves only the weakest notion, selective abort. Further, our protocols for fairness and unanimous abort can be extended to n parties with at most √n corruptions, similar to Chandran et al. Going beyond the most popular honest-majority setting of three parties with one corruption, our results demonstrate the feasibility of attaining stronger security notions for more than one active corruption at a cost not far from that of the least desired security of selective abort.

SESSION: Session 7D: Formal Analysis III

Signed Cryptographic Program Verification with Typed CryptoLine

  • Yu-Fu Fu
  • Jiaxiang Liu
  • Xiaomu Shi
  • Ming-Hsien Tsai
  • Bow-Yaw Wang
  • Bo-Yin Yang

We develop an automated formal technique to specify and verify signed computation in cryptographic programs. In addition to new instructions, we introduce a type system to detect type errors in programs. A type inference algorithm is also provided to deduce types and instruction variants in cryptographic programs. In order to verify signed cryptographic C programs, we develop a translator from the GCC intermediate representation to our language. Using our technique, we have verified 82 C functions in cryptography libraries including NaCl, wolfSSL, bitcoin, OpenSSL, and BoringSSL.

Machine-Checked Proofs for Cryptographic Standards: Indifferentiability of Sponge and Secure High-Assurance Implementations of SHA-3

  • José Bacelar Almeida
  • Cécile Baritel-Ruet
  • Manuel Barbosa
  • Gilles Barthe
  • François Dupressoir
  • Benjamin Grégoire
  • Vincent Laporte
  • Tiago Oliveira
  • Alley Stoughton
  • Pierre-Yves Strub

We present a high-assurance and high-speed implementation of the SHA-3 hash function. Our implementation is written in the Jasmin programming language, and is formally verified for functional correctness, provable security and timing attack resistance in the EasyCrypt proof assistant. Our implementation is the first to achieve simultaneously the four desirable properties (efficiency, correctness, provable security, and side-channel protection) for a non-trivial cryptographic primitive. Concretely, our mechanized proofs show that: 1) the SHA-3 hash function is indifferentiable from a random oracle, and thus is resistant against collision, first and second preimage attacks; 2) the SHA-3 hash function is correctly implemented by a vectorized x86 implementation. Furthermore, the implementation is provably protected against timing attacks in an idealized model of timing leaks. The proofs include new EasyCrypt libraries of independent interest for programmable random oracles and modular indifferentiability proofs.

VeriSketch: Synthesizing Secure Hardware Designs with Timing-Sensitive Information Flow Properties

  • Armaiti Ardeshiricham
  • Yoshiki Takashima
  • Sicun Gao
  • Ryan Kastner

We present VeriSketch, a security-oriented program synthesis framework for developing hardware designs with formal guarantee of functional and security specifications. VeriSketch defines a synthesis language, a code instrumentation framework for specifying and inferring timing-sensitive information flow properties, and uses specialized constraint-based synthesis for generating HDL code that enforces the specifications. We show the power of VeriSketch through security-critical hardware design examples, including cache controllers, thread schedulers, and system-on-chip arbiters, with formal guarantee of security properties such as absence of timing side-channels, confidentiality, and isolation.

SESSION: Session 7E: Privacy-Preserving Techniques

SEEMless: Secure End-to-End Encrypted Messaging with less Trust

  • Melissa Chase
  • Apoorvaa Deshpande
  • Esha Ghosh
  • Harjasleen Malvai

End-to-end encrypted messaging (E2E) is only secure if participants have a way to retrieve the correct public key for the desired recipient. However, to make these systems usable, users must be able to replace their keys (e.g. when they lose or reset their devices, or reinstall their app), and we cannot assume any cryptographic means of authenticating the new keys. In current E2E systems, the service provider manages the directory of public keys of its registered users; this allows a compromised or coerced service provider to introduce their own keys and execute a man-in-the-middle attack. Building on the approach of CONIKS (Melara et al., USENIX Security ’15), we formalize the notion of a Privacy-Preserving Verifiable Key Directory (VKD): a system which allows users to monitor the keys that the service is distributing on their behalf. We then propose a new VKD scheme, which we call SEEMless, that improves on prior work in terms of privacy and scalability. In particular, our new approach allows key changes to take effect almost immediately; we show experimentally that our scheme easily supports delays of less than a minute, in contrast to previous work, which proposes a delay of one hour.

PrivDPI: Privacy-Preserving Encrypted Traffic Inspection with Reusable Obfuscated Rules

  • Jianting Ning
  • Geong Sen Poh
  • Jia-Ch’ng Loh
  • Jason Chia
  • Ee-Chien Chang

Network middleboxes perform deep packet inspection (DPI) to detect anomalies and suspicious activities in network traffic. However, this traffic is increasingly encrypted, and middleboxes can no longer make sense of it. A recent proposal by Sherry et al. (SIGCOMM 2015), named BlindBox, enables the middlebox to perform inspection in a privacy-preserving manner. BlindBox deploys garbled circuits to generate encrypted rules for the purpose of inspecting the encrypted traffic directly. However, the setup latency (which could be 97s on a ruleset of 3,000 as reported) and overhead size incurred by garbled circuits are high. Since communication can only commence after the encrypted rules have been generated, such a delay is intolerable in many real-time applications. In this work, we present PrivDPI, which reduces the setup delay while retaining a similar privacy guarantee. Compared to BlindBox, for a ruleset of 3,000, our encrypted rule generation is 288x faster and requires 290,227x smaller overhead for the first session, and is even 1,036x faster and requires 3,424,505x smaller overhead over 20 consecutive sessions. The performance gain is based on a new technique for generating encrypted rules as well as the idea of reusing intermediate results generated in previous sessions across subsequent sessions. This is in contrast to BlindBox, which performs encrypted rule generation from scratch for every session. Nevertheless, PrivDPI is 6x slower in generating the encrypted traffic tokens, yet in our implementation the token encryption rate of PrivDPI is more than 17,271 tokens per second, which is sufficient for many real-time applications. Moreover, the intermediate values generated in each session can be reused across subsequent sessions for repeated tokens, which could further speed up token encryption. Overall, our experiments show that PrivDPI is practical and especially suitable for connections with short flows.

Updatable Anonymous Credentials and Applications to Incentive Systems

  • Johannes Blömer
  • Jan Bobolz
  • Denis Diemert
  • Fabian Eidens

We introduce updatable anonymous credential systems (UACS) and use them to construct a new privacy-preserving incentive system. In a UACS, a user holding a credential certifying some attributes can interact with the corresponding issuer to update his attributes. During this, the issuer knows which update function is run, but does not learn the user’s previous attributes. Hence the update process preserves anonymity of the user. One example for a class of update functions are additive updates of integer attributes, where the issuer increments an unknown integer attribute value v by some known value k. This kind of update is motivated by an application of UACS to incentive systems. Users in an incentive system can anonymously accumulate points, e.g. in a shop at checkout, and spend them later, e.g. for a discount. In this paper, we (1) formally define UACS and their security, (2) give a generic construction for UACS supporting arbitrary update functions, and (3) construct a new incentive system using UACS that is efficient while offering offline double-spending protection and partial spending.

SESSION: Keynote

Hardware-assisted Trusted Execution Environments: Look Back, Look Ahead

  • N. Asokan

Over the last two decades, hardware-based isolated execution environments, commonly known as “trusted execution environments” or TEEs, have become widely deployed [1,2,3,4]. However, concerns about vulnerabilities (like the Foreshadow attacks [5]) and the potential for abuse have been persistent and have recently become increasingly pronounced. In this talk, I will discuss the history of (mobile) TEEs [6], what motivated their design and large-scale deployment, and how they have evolved during the last two decades. I will then discuss some of their shortcomings and potential approaches for overcoming them. I will also briefly touch on other types of hardware security primitives that are being rolled out by processor manufacturers and the opportunities they offer for securing computing.

SESSION: Session 8A: Attack II

Gollum: Modular and Greybox Exploit Generation for Heap Overflows in Interpreters

  • Sean Heelan
  • Tom Melham
  • Daniel Kroening

We present the first approach to automatic exploit generation for heap overflows in interpreters. It is also the first approach to exploit generation in any class of program that integrates a solution for automatic heap layout manipulation. At the core of the approach is a novel method for discovering exploit primitives—inputs to the target program that result in a sensitive operation, such as a function call or a memory write, utilizing attacker-injected data. To produce an exploit primitive from a heap overflow vulnerability, one has to discover a target data structure to corrupt, ensure an instance of that data structure is adjacent to the source of the overflow on the heap, and ensure that the post-overflow corrupted data is used in a manner desired by the attacker. Our system addresses all three tasks in an automatic, greybox, and modular manner. Our implementation is called GOLLUM, and we demonstrate its capabilities by producing exploits from 10 unique vulnerabilities in the PHP and Python interpreters, 5 of which do not have existing public exploits.

SLAKE: Facilitating Slab Manipulation for Exploiting Vulnerabilities in the Linux Kernel

  • Yueqi Chen
  • Xinyu Xing

To determine the exploitability of a kernel vulnerability, a security analyst usually has to manipulate the slab and thus demonstrate the capability of obtaining control over a program counter or performing privilege escalation. However, this is a lengthy process because (1) an analyst typically has no clue about which objects and system calls are useful for kernel exploitation and (2) he lacks the knowledge needed to manipulate a slab and obtain the desired layout. In the past, researchers have proposed various techniques to facilitate exploit development. Unfortunately, none of them can be easily applied to address these challenges. On the one hand, this is because of the complexity of the Linux kernel. On the other hand, this is due to the dynamic and non-deterministic nature of slab variations. In this work, we tackle the challenges above from two perspectives. First, we use static and dynamic analysis techniques to explore the kernel objects, and the corresponding system calls, useful for exploitation. Second, we model commonly adopted exploitation methods and develop a technical approach to facilitate the slab layout adjustment. By extending LLVM as well as Syzkaller, we implement our techniques and name their combination SLAKE. We evaluate SLAKE using 27 real-world kernel vulnerabilities, demonstrating that it can not only diversify the ways to perform kernel exploitation but also sometimes escalate the exploitability of kernel vulnerabilities.

SESSION: Session 8B: TEE I

SecTEE: A Software-based Approach to Secure Enclave Architecture Using TEE

  • Shijun Zhao
  • Qianying Zhang
  • Yu Qin
  • Wei Feng
  • Dengguo Feng

Secure enclaves provide a practical solution to secure computation, and current approaches to secure enclaves are implemented by extending hardware security mechanisms to the CPU architecture. Therefore, it is hard for a platform to offer secure computation if its CPU architecture is not equipped with any secure enclave features. Unfortunately, ARM CPUs, which dominate mobile devices and have increasing momentum in cloud markets, do not provide any security mechanisms achieving security equivalent to modern secure enclave architectures. In this paper, we propose SecTEE, a software-based secure enclave architecture which is based on the CPU’s isolation mechanism and does not require specialized security hardware of the CPU architecture, such as memory encryption engines. SecTEE achieves a high level of security even compared with hardware-based secure enclave architectures: resistance to privileged host software attacks, lightweight physical attacks, and memory-access-based side-channel attacks. Besides, SecTEE provides rich trusted computing primitives for enclaves: integrity measurement, remote attestation, data sealing, secrets provisioning, and life cycle management. We implement a SecTEE prototype based on the ARM TrustZone technology, but our approach can be applied to other CPU architectures with isolation mechanisms. The evaluation results show that most of the overhead comes from software encryption, and that the runtime overhead imposed by trusted computing primitives is acceptable.

A Tale of Two Worlds: Assessing the Vulnerability of Enclave Shielding Runtimes

  • Jo Van Bulck
  • David Oswald
  • Eduard Marin
  • Abdulla Aldoseri
  • Flavio D. Garcia
  • Frank Piessens

This paper analyzes the vulnerability space arising in Trusted Execution Environments (TEEs) when interfacing a trusted enclave application with untrusted, potentially malicious code. Considerable research and industry effort has gone into developing TEE runtime libraries with the purpose of transparently shielding enclave application code from an adversarial environment. However, our analysis reveals that shielding requirements are generally not well-understood in real-world TEE runtime implementations. We expose several sanitization vulnerabilities at the level of the Application Binary Interface (ABI) and the Application Programming Interface (API) that can lead to exploitable memory safety and side-channel vulnerabilities in the compiled enclave. Mitigation of these vulnerabilities is not as simple as ensuring that pointers are outside enclave memory. In fact, we demonstrate that state-of-the-art mitigation techniques such as Intel’s edger8r, Microsoft’s “deep copy marshalling”, or even memory-safe languages like Rust fail to fully eliminate this attack surface. Our analysis reveals 35 enclave interface sanitization vulnerabilities in 8 major open-source shielding frameworks for Intel SGX, RISC-V, and Sancus TEEs. We practically exploit these vulnerabilities in several attack scenarios to leak secret keys from the enclave or enable remote code reuse. We have responsibly disclosed our findings, leading to 5 designated CVE records and numerous security patches in the vulnerable open-source projects, including the Intel SGX-SDK, Microsoft Open Enclave, Google Asylo, and the Rust compiler.

SESSION: Session 8C: Blockchain VI

zkay: Specifying and Enforcing Data Privacy in Smart Contracts

  • Samuel Steffen
  • Benjamin Bichsel
  • Mario Gersbach
  • Noa Melchior
  • Petar Tsankov
  • Martin Vechev

Privacy concerns of smart contracts are a major roadblock preventing their wider adoption. A promising approach to protect private data is hiding it with cryptographic primitives and then enforcing correctness of state updates by Non-Interactive Zero-Knowledge (NIZK) proofs. Unfortunately, NIZK statements are less expressive than smart contracts, forcing developers to keep some functionality in the contract. This results in scattered logic, split across contract code and NIZK statements, with unclear privacy guarantees. To address these problems, we present the zkay language, which introduces privacy types defining owners of private values. zkay contracts are statically type checked to (i) ensure they are realizable using NIZK proofs and (ii) prevent unintended information leaks. Moreover, the logic of zkay contracts is easy to follow by just ignoring privacy types. To enforce zkay contracts, we automatically transform them into contracts equivalent in terms of privacy and functionality, yet executable on public blockchains. We evaluated our approach on a proof-of-concept implementation generating Solidity contracts and implemented 10 interesting example contracts in zkay. Our results indicate that zkay is practical: On-chain cost for executing the transformed contracts is around 1M gas per transaction (~0.50US$) and off-chain cost is moderate.

Log2vec: A Heterogeneous Graph Embedding Based Approach for Detecting Cyber Threats within Enterprise

  • Fucheng Liu
  • Yu Wen
  • Dongxue Zhang
  • Xihe Jiang
  • Xinyu Xing
  • Dan Meng

Conventional attacks by insider employees and emerging APTs are both major threats to the organizational information system. Existing detection approaches mainly concentrate on users’ behavior and usually analyze logs recording their operations in an information system. In general, most of these methods consider the sequential relationship among log entries and model users’ sequential behavior. However, they ignore other relationships, inevitably leading to unsatisfactory performance in various attack scenarios. We propose log2vec, a heterogeneous graph embedding based modularized method. First, it involves a heuristic approach that converts log entries into a heterogeneous graph in light of the diverse relationships among them. Next, it utilizes an improved graph embedding appropriate to the above heterogeneous graph, which can automatically represent each log entry as a low-dimensional vector. The third component of log2vec is a practical detection algorithm capable of separating malicious and benign log entries into different clusters and identifying the malicious ones. We implement a prototype of log2vec. Our evaluation demonstrates that log2vec remarkably outperforms state-of-the-art approaches, such as deep learning and hidden Markov models (HMM). Besides, log2vec shows its capability to detect malicious events in various attack scenarios.

Privacy Aspects and Subliminal Channels in Zcash

  • Alex Biryukov
  • Daniel Feher
  • Giuseppe Vitto

In this paper we analyze two privacy and security issues for the privacy-oriented cryptocurrency Zcash. First we study shielded transactions and show ways to fingerprint user transactions, including active attacks. We introduce two new attacks which we call the Danaan-gift attack and the Dust attack. Following the recent Sapling update of the Zcash protocol, we study the interaction between the new and the old zk-SNARK protocols and the effects of their interaction on transaction privacy. In the second part of the paper we check for the presence of subliminal channels in the zk-SNARK protocol and in Pedersen Commitments. We show the presence of efficient 70-bit channels which could be used for tagging shielded transactions, allowing the attacker (a malicious transaction verifier) to link transactions issued by a maliciously modified zk-SNARK prover, while remaining indistinguishable from regular transactions to the honest verifier/user. We discuss countermeasures against both of these privacy issues.

POIROT: Aligning Attack Behavior with Kernel Audit Records for Cyber Threat Hunting

  • Sadegh M. Milajerdi
  • Birhanu Eshete
  • Rigel Gjomemo
  • V.N. Venkatakrishnan

Cyber threat intelligence (CTI) is being used to search for indicators of attacks that might have compromised an enterprise network for a long time without being discovered. To make the analysis more effective, CTI open standards have incorporated descriptive relationships showing how the indicators or observables are related to each other. However, these relationships are either completely overlooked in information gathering or not used for threat hunting. In this paper, we propose a system, called POIROT, which uses these correlations to uncover the steps of a successful attack campaign. We use kernel audits as a reliable source that covers all causal relations and information flows among system entities and model threat hunting as an inexact graph pattern matching problem. Our technical approach is based on a novel similarity metric which assesses an alignment between a query graph constructed out of CTI correlations and a provenance graph constructed out of kernel audit log records. We evaluate POIROT on publicly released real-world incident reports as well as reports of an adversarial engagement designed by DARPA, including ten distinct attack campaigns against different OS platforms such as Linux, FreeBSD, and Windows. Our evaluation results show that POIROT is capable of searching inside graphs containing millions of nodes and pinpointing the attacks in a few minutes, and the results serve to illustrate that CTI correlations can be used as robust and reliable artifacts for threat hunting.

Effective and Light-Weight Deobfuscation and Semantic-Aware Attack Detection for PowerShell Scripts

  • Zhenyuan Li
  • Qi Alfred Chen
  • Chunlin Xiong
  • Yan Chen
  • Tiantian Zhu
  • Hai Yang

In recent years, PowerShell is increasingly reported to appear in a variety of cyber attacks ranging from advanced persistent threat, ransomware, phishing emails, cryptojacking, financial threats, to fileless attacks. However, since the PowerShell language is dynamic by design and can construct script pieces at different levels, state-of-the-art static analysis based PowerShell attack detection approaches are inherently vulnerable to obfuscations. To overcome this challenge, in this paper we design the first effective and light-weight deobfuscation approach for PowerShell scripts. To address the challenge in precisely identifying the recoverable script pieces, we design a novel subtree-based deobfuscation method that performs obfuscation detection and emulation-based recovery at the level of subtrees in the abstract syntax tree of PowerShell scripts. Building upon the new deobfuscation method, we are able to further design the first semantic-aware PowerShell attack detection system. To enable semantic-based detection, we leverage the classic objective-oriented association mining algorithm and newly identify 31 semantic signatures for PowerShell attacks. We perform an evaluation on a collection of 2342 benign samples and 4141 malicious samples, and find that our deobfuscation method takes less than 0.5 seconds on average and meanwhile increases the similarity between the obfuscated and original scripts from only 0.5% to around 80%, which is thus both effective and light-weight. In addition, with our deobfuscation applied, the attack detection rates for Windows Defender and VirusTotal increase substantially from 0.3% and 2.65% to 75.0% and 90.0%, respectively. Furthermore, when our deobfuscation is applied, our semantic-aware attack detection system outperforms both Windows Defender and VirusTotal with a 92.3% true positive rate and a 0% false positive rate on average.
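
To make the subtree-based recovery idea concrete, the sketch below applies the same principle to Python source using the standard ast module (a stand-in for PowerShell ASTs, chosen only because it is easy to run): subtrees built purely from literals and operators are treated as recoverable pieces, emulated in isolation, and replaced by the constant they produce. This is an illustrative analogy under those assumptions, not the paper's implementation.

    import ast   # Python 3.9+ for ast.unparse

    SAFE = (ast.BinOp, ast.UnaryOp, ast.Constant,
            ast.Add, ast.Sub, ast.Mult, ast.Mod, ast.USub, ast.UAdd)

    def is_recoverable(node):
        # "Recoverable" here: built only from literals and pure operators,
        # so the subtree can be emulated without side effects.
        return all(isinstance(n, SAFE) for n in ast.walk(node))

    class Deobfuscator(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)          # recover innermost pieces first
            if is_recoverable(node):
                try:
                    expr = ast.fix_missing_locations(ast.Expression(body=node))
                    value = eval(compile(expr, "<sub>", "eval"),
                                 {"__builtins__": {}}, {})
                    return ast.copy_location(ast.Constant(value=value), node)
                except Exception:
                    pass
            return node

    obfuscated = "payload = ('dow' + 'nl' + 'oad') + '-' + ('str' + 'ing')"
    tree = ast.fix_missing_locations(Deobfuscator().visit(ast.parse(obfuscated)))
    print(ast.unparse(tree))                  # payload = 'download-string'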

MalMax: Multi-Aspect Execution for Automated Dynamic Web Server Malware Analysis

  • Abbas Naderi-Afooshteh
  • Yonghwi Kwon
  • Anh Nguyen-Tuong
  • Ali Razmjoo-Qalaei
  • Mohammad-Reza Zamiri-Gourabi
  • Jack W. Davidson

This paper presents MalMax, a novel system to detect server-side malware that routinely employ sophisticated polymorphic evasive runtime code generation techniques. When MalMax encounters an execution point that presents multiple possible execution paths (e.g., via predicates and/or dynamic code), it explores these paths through counterfactual execution of code sandboxed within an isolated execution environment. Furthermore, a unique feature of MalMax is its cooperative isolated execution model in which unresolved artifacts (e.g., variables, functions, and classes) within one execution context can be concretized using values from other execution contexts. Such cooperation dramatically amplifies the reach of counterfactual execution. As an example, for WordPress, cooperation results in 63% additional code coverage. The combination of counterfactual execution and cooperative isolated execution enables MalMax to accurately and effectively identify malicious behavior. Using a large (1 terabyte) real-world dataset of PHP web applications collected from a commercial web hosting company, we performed an extensive evaluation of MalMax. We evaluated the effectiveness of MalMax by comparing its ability to detect malware against VirusTotal, a malware detector that aggregates many diverse scanners. Our evaluation results show that MalMax is highly effective in exposing malicious behavior in complicated polymorphic malware. MalMax was also able to identify 1,485 malware samples that are not detected by any existing state-of-the-art tool, even after 7 months in the wild.

SESSION: Session 8D: Language Security

Where Does It Go?: Refining Indirect-Call Targets with Multi-Layer Type Analysis

  • Kangjie Lu
  • Hong Hu

System software commonly uses indirect calls to realize dynamic program behaviors. However, indirect calls also bring challenges to constructing a precise control-flow graph, which is a standard prerequisite for many static program-analysis and system-hardening techniques. Unfortunately, identifying indirect-call targets is a hard problem. In particular, modern compilers do not recognize indirect-call targets by default. Existing approaches identify indirect-call targets based on type analysis that matches the types of function pointers and the ones of address-taken functions. Such approaches, however, suffer from a high false-positive rate as many irrelevant functions may share the same types.

In this paper, we propose a new approach, namely Multi-Layer Type Analysis (MLTA), to effectively refine indirect-call targets for C/C++ programs. MLTA relies on the observation that function pointers are commonly stored into objects whose types have a multi-layer type hierarchy; before indirect calls, function pointers will be loaded from objects with the same type hierarchy “layer by layer”. By matching the multi-layer types of function pointers and functions, MLTA can dramatically refine indirect-call targets. MLTA is effective because multi-layer types are more restrictive than single-layer types. It does not introduce false negatives by conservatively tracking target propagation between multi-layer types, and the layered design allows MLTA to safely fall back whenever the analysis for a layer becomes infeasible. We have implemented MLTA in a system, namely TypeDive, based on LLVM and extensively evaluated it with the Linux kernel, the FreeBSD kernel, and the Firefox browser. Evaluation results show that TypeDive can eliminate 86% to 98% more indirect-call targets than existing approaches do, without introducing new false negatives. We also demonstrate that TypeDive not only improves the scalability of static analysis but also benefits semantic-bug detection. With TypeDive, we have found 35 new deep semantic bugs in the Linux kernel.
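
A tiny, self-contained illustration of the layered idea (all structures, field names, and functions below are invented): a first-layer analysis admits every address-taken function with a matching signature as a target, while the multi-layer refinement intersects that set with the functions ever stored into the specific (outer type, field) slot the pointer is loaded from.

    from collections import defaultdict

    # Address-taken functions, keyed by their signature ("first layer").
    addr_taken = {
        "void(int)": {"sched_tick", "timer_cb", "net_rx", "usb_irq"},
    }

    # "Confinement" facts gathered from stores: which functions are ever
    # written into which (outer type, field) slot.  This is the extra,
    # multi-layer information that single-layer type analysis throws away.
    stored_into = defaultdict(set)
    stored_into[("struct timer", ".fn")].update({"sched_tick", "timer_cb"})
    stored_into[("struct net_dev", ".rx")].update({"net_rx"})

    def targets(signature, load_chain=None):
        """Resolve an indirect call: load_chain is the (type, field) layer the
        pointer was loaded from, or None if unknown (fall back to layer one)."""
        candidates = set(addr_taken.get(signature, ()))
        if load_chain is not None and load_chain in stored_into:
            candidates &= stored_into[load_chain]
        return candidates

    # Single-layer: every void(int) function is a possible target.
    print(targets("void(int)"))
    # Multi-layer: a call through timer->fn only reaches what was stored there.
    print(targets("void(int)", ("struct timer", ".fn")))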

Different is Good: Detecting the Use of Uninitialized Variables through Differential Replay

  • Mengchen Cao
  • Xiantong Hou
  • Tao Wang
  • Hunter Qu
  • Yajin Zhou
  • Xiaolong Bai
  • Fuwei Wang

The use of uninitialized variables is a common issue. It can cause kernel information leaks, which defeat the widely deployed security defense, i.e., kernel address space layout randomization (KASLR). Though a recent system called Bochspwn Reloaded reported multiple memory leaks in Windows kernels, effective detection of this issue still lags largely behind.

In this paper, we propose a new technique, i.e., differential replay, that can effectively detect the use of uninitialized variables. Specifically, it records and replays a program’s execution in multiple instances. One instance runs with the vanilla memory, while the other changes (or poisons) the values of variables allocated from the stack and the heap. Then it compares program states to find references to uninitialized variables. The idea is that if a variable is properly initialized, it will overwrite the poisoned value, and the program states in the two running instances should be the same. After detecting differences, our system leverages symbolic taint analysis to further identify the location where the variable was allocated. This helps us to identify the root cause and facilitates the development of real exploits. We have implemented a prototype called TimePlayer. After applying it to both Windows 7 and Windows 10 kernels (x86/x64), it successfully identified 34 new issues and another 85 that had already been patched (some of them were publicly unknown). Among the 34 new issues, 17 of them have been confirmed as zero-day vulnerabilities by Microsoft.
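
The core comparison behind differential replay can be sketched in a few lines (the toy routine, poison values, and byte-level model below are invented; the real system replays whole-OS executions inside an emulator): run the same computation over memory pre-filled with two different poison patterns, and any output byte that differs between the two runs must have come from memory that was never initialized.

    POISON_A, POISON_B = 0xAA, 0x5B

    def routine(buf):
        """Toy 'kernel routine' that copies a struct to user space but forgets
        to initialise its padding byte at offset 2 (a classic infoleak)."""
        buf[0] = 0x01          # properly initialised fields
        buf[1] = 0x02
        buf[3] = 0x03          # buf[2] is never written
        return bytes(buf[:4])  # everything is copied out, padding included

    def replay(poison):
        buf = bytearray([poison] * 16)   # pre-poison the allocation
        return routine(buf)

    out_a, out_b = replay(POISON_A), replay(POISON_B)
    leaks = [i for i, (a, b) in enumerate(zip(out_a, out_b)) if a != b]
    print("uninitialised bytes reaching output at offsets:", leaks)  # [2]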

SESSION: Session 8E: Web Security

HideNoSeek: Camouflaging Malicious JavaScript in Benign ASTs

  • Aurore Fass
  • Michael Backes
  • Ben Stock

In the malware field, learning-based systems have become popular to detect new malicious variants. Nevertheless, attackers with specific and internal knowledge of a target system may be able to produce input samples which are misclassified. In practice, the assumption of strong attackers is not realistic as it implies access to insider information. We instead propose HideNoSeek, a novel and generic camouflage attack, which evades the entire class of detectors based on syntactic features, without needing any information about the system it is trying to evade. Our attack consists of changing the constructs of malicious JavaScript samples to reproduce a benign syntax. For this purpose, we automatically rewrite the Abstract Syntax Trees (ASTs) of malicious JavaScript inputs into existing benign ones. In particular, HideNoSeek uses malicious seeds and searches for isomorphic subgraphs between the seeds and traditional benign scripts. Specifically, it replaces benign sub-ASTs with their malicious equivalents (same syntactic structure) and adjusts the benign data dependencies, without changing the AST, so that the malicious semantics are kept. In practice, we leveraged 23 malicious seeds to generate 91,020 malicious scripts, which perfectly reproduce ASTs of the Alexa top 10,000 web pages. Also, we can produce on average 14 different malicious samples with the same AST as each of the Alexa top 10 pages. Overall, a standard trained classifier has 99.98% false negatives with HideNoSeek inputs, while a classifier trained on such samples has over 88.74% false positives, rendering the targeted static detectors unreliable.
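
The isomorphic-subgraph step can be illustrated with Python's ast module standing in for a JavaScript parser (an analogy only; the snippets, identifiers, and fingerprint below are invented): two statements match if their trees have the same node-type structure, regardless of names and literals, which is exactly the view a purely syntactic detector has of the code. The subsequent rewriting of identifiers and data dependencies, which the attack relies on, is omitted here.

    import ast   # Python 3.9+ for ast.unparse

    def shape(node):
        # Structural fingerprint: node types only, no identifiers or literals,
        # i.e. the view of a purely syntactic (AST-level) detector.
        return (type(node).__name__,
                tuple(shape(c) for c in ast.iter_child_nodes(node)))

    benign    = ast.parse("total = price * quantity\nlog(total)")
    malicious = ast.parse("key = stage1 * stage2\nexfil(key)")

    for b in benign.body:
        for m in malicious.body:
            if shape(b) == shape(m):
                print(f"{ast.unparse(b)!r}  is isomorphic to  {ast.unparse(m)!r}")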

Your Cache Has Fallen: Cache-Poisoned Denial-of-Service Attack

  • Hoai Viet Nguyen
  • Luigi Lo Iacono
  • Hannes Federrath

Web caching enables the reuse of HTTP responses with the aim to reduce the number of requests that reach the origin server, the volume of network traffic resulting from resource requests, and the user-perceived latency of resource access. For these reasons, a cache is a key component in modern distributed systems as it enables applications to scale at large. In addition to optimizing performance metrics, caches provide additional protection against Denial of Service (DoS) attacks. In this paper we introduce and analyze a new class of web cache poisoning attacks. By provoking an error on the origin server that is not detected by the intermediate caching system, the cache gets poisoned with the server-generated error page and instrumented to serve this useless content instead of the intended one, rendering the victim service unavailable. In an extensive study of fifteen web caching solutions, we analyzed the negative impact of the Cache-Poisoned DoS (CPDoS) attack, as we coined it. We show the practical relevance by identifying one proxy cache product and five CDN services that are vulnerable to CPDoS. Amongst them are prominent solutions that in turn cache high-value websites. The consequences are severe, as one simple request is sufficient to paralyze a victim website within a large geographical region. The awareness of the newly introduced CPDoS attack is highly valuable for researchers, who can obtain a comprehensive understanding of causes and countermeasures, as well as for practitioners implementing robust and secure distributed systems.

SESSION: Session 9A: User Study

"I don’t see why I would ever want to use it": Analyzing the Usability of Popular Smartphone Password Managers

  • Sunyoung Seiler-Hwang
  • Patricia Arias-Cabarcos
  • Andrés Marín
  • Florina Almenares
  • Daniel Díaz-Sánchez
  • Christian Becker

Passwords are an often unavoidable authentication mechanism, despite the availability of additional alternative means. In the case of smartphones, usability problems are aggravated because interaction happens through small screens and multilayer keyboards. While password managers (PMs) can improve this situation and contribute to hardening security, their adoption is far from widespread. To understand the underlying reasons, we conducted the first empirical usability study of mobile PMs, covering both quantitative and qualitative evaluations. Our findings show that popular PMs are barely acceptable according to the standard System Usability Scale, and that there are three key areas for improvement: integration with external applications, security, and user guidance and interaction. We build on the collected evidence to suggest recommendations that can fill this gap.

Matched and Mismatched SOCs: A Qualitative Study on Security Operations Center Issues

  • Faris Bugra Kokulu
  • Ananta Soneji
  • Tiffany Bao
  • Yan Shoshitaishvili
  • Ziming Zhao
  • Adam Doupé
  • Gail-Joon Ahn

Organizations, such as companies and governments, created Security Operations Centers (SOCs) to defend against computer security attacks. SOCs are central defense groups that focus on security incident management with capabilities such as monitoring, preventing, responding, and reporting. They are one of the most critical components of a modern organization’s defense. Despite their critical importance to organizations, and the high frequency of reported security incidents, only a few research studies focus on problems specific to SOCs. In this study, to understand and identify the issues of SOCs, we conducted 18 semi-structured interviews with SOC analysts and managers who work for organizations from different industry sectors. Through our analysis of the interview data, we identified technical and non-technical issues that exist in SOCs. Moreover, we found inherent disagreements between SOC managers and their analysts that, if not addressed, could entail a risk to SOC efficiency and effectiveness. We distill these issues into takeaways that apply both to future academic research and to SOC management. We believe that research should focus on improving the efficiency and effectiveness of SOCs.

A Usability Evaluation of Let’s Encrypt and Certbot: Usable Security Done Right

  • Christian Tiefenau
  • Emanuel von Zezschwitz
  • Maximilian Häring
  • Katharina Krombholz
  • Matthew Smith

The correct configuration of HTTPS is a complex set of tasks, which many administrators have struggled with in the past. Let’s Encrypt and the Electronic Frontier Foundation’s Certbot aim to improve the TLS ecosystem by offering free trusted certificates (Let’s Encrypt) and by providing user-friendly support to configure and harden TLS (Certbot). Although adoption rates have increased, to date there has been little scientific evidence of the actual usability and security benefits of this semi-automated approach. Therefore, we conducted a randomized control trial to evaluate the usability of Let’s Encrypt and Certbot in comparison to the traditional certificate authority approach. We performed a within-subjects lab study with 31 participants. The study sheds light on the security and usability enhancements that Let’s Encrypt and Certbot provide. We highlight how usability improvements aimed at administrators can have a large impact on security and discuss takeaways for Certbot and other security-related tasks that experts struggle with.

SESSION: Session 9B: ML Security III

Seeing isn’t Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors

  • Yue Zhao
  • Hong Zhu
  • Ruigang Liang
  • Qintao Shen
  • Shengzhi Zhang
  • Kai Chen

Recently, Adversarial Examples (AEs) that deceive deep learning models have been a topic of intense research interest. Compared with the AEs in the digital space, the physical adversarial attack is considered a more severe threat to applications like face recognition in authentication, object detection in autonomous driving cars, etc. In particular, deceiving object detectors practically is more challenging since the relative position between the object and the detector may keep changing. Existing works attacking object detectors are still very limited in various scenarios, e.g., varying distance and angles, etc. In this paper, we present systematic solutions to build robust and practical AEs against real-world object detectors. Particularly, for Hiding Attack (HA), we propose the feature-interference reinforcement (FIR) method and the enhanced realistic constraints generation (ERG) to enhance robustness, and for Appearing Attack (AA), we propose the nested-AE, which combines two AEs together to attack object detectors at both long and short distances. We also design diverse styles of AEs to make AA more surreptitious. Evaluation results show that our AEs can attack the state-of-the-art real-time object detectors (i.e., YOLO V3 and faster-RCNN) at a success rate of up to 92.4% with varying distances from 1m to 25m and angles from -60° to 60°. Our AEs are also demonstrated to be highly transferable, capable of attacking another three state-of-the-art black-box models with a high success rate.

AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning

  • Florian Tramèr
  • Pascal Dupré
  • Gili Rusak
  • Giancarlo Pellegrino
  • Dan Boneh

Perceptual ad-blocking is a novel approach that detects online advertisements based on their visual content. Compared to traditional filter lists, the use of perceptual signals is believed to be less prone to an arms race with web publishers and ad networks. We demonstrate that this may not be the case. We describe attacks on multiple perceptual ad-blocking techniques, and unveil a new arms race that likely disfavors ad-blockers. Unexpectedly, perceptual ad-blocking can also introduce new vulnerabilities that let an attacker bypass web security boundaries and mount DDoS attacks. We first analyze the design space of perceptual ad-blockers and present a unified architecture that incorporates prior academic and commercial work. We then explore a variety of attacks on the ad-blocker’s detection pipeline that enable publishers or ad networks to evade or detect ad-blocking, and at times even abuse its high privilege level to bypass web security boundaries. On one hand, we show that perceptual ad-blocking must visually classify rendered web content to escape an arms race centered on obfuscation of page markup. On the other, we present a concrete set of attacks on visual ad-blockers by constructing adversarial examples in a real web page context. For seven ad-detectors, we create perturbed ads, ad-disclosure logos, and native web content that misleads perceptual ad-blocking with 100% success rates. In one of our attacks, we demonstrate how a malicious user can upload adversarial content, such as a perturbed image in a Facebook post, that fools the ad-blocker into removing another user’s non-ad content. Moving beyond the Web and visual domain, we also build adversarial examples for AdblockRadio, an open source radio client that uses machine learning to detect ads in raw audio streams.

Attacking Graph-based Classification via Manipulating the Graph Structure

  • Binghui Wang
  • Neil Zhenqiang Gong

Graph-based classification methods are widely used for security analytics. Roughly speaking, graph-based classification methods include collective classification and graph neural networks. Attacking a graph-based classification method enables an attacker to evade detection in security analytics. However, existing adversarial machine learning studies mainly focused on machine learning for non-graph data. Only a few recent studies have touched on adversarial graph-based classification methods. However, they focused on graph neural networks, leaving collective classification largely unexplored. We aim to bridge this gap in this work. We consider an attacker whose goal is to evade detection by manipulating the graph structure. We formulate our attack as a graph-based optimization problem, solving which produces the edges that an attacker needs to manipulate to achieve its attack goal. However, it is computationally challenging to solve the optimization problem exactly. To address the challenge, we propose several approximation techniques to solve the optimization problem. We evaluate our attacks and compare them with a recent attack designed for graph neural networks using four graph datasets. Our results show that our attacks can effectively evade graph-based classification methods. Moreover, our attacks outperform the existing attack for evading collective classification methods and some graph neural network methods.
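
The following toy example (graph, priors, propagation rule, and threshold are all invented) shows the attack surface on a collective classifier: a damped label-propagation scorer flags the attacker node while it is only connected to known-bad nodes, and inserting two edges to well-behaved nodes pulls its score back under the threshold. The paper formulates the choice of such edges as an optimization problem; here they are simply hand-picked.

    # Toy damped label propagation: each node's "badness" score is repeatedly
    # mixed with the average of its neighbours' scores, starting from priors.
    edges = {("fraud1", "fraud2"), ("fraud2", "attacker"),
             ("benign1", "benign2"), ("benign2", "benign3")}
    priors = {"fraud1": 1.0, "fraud2": 1.0, "attacker": 0.5,
              "benign1": 0.0, "benign2": 0.0, "benign3": 0.0}

    def neighbours(node, edge_set):
        return [b if a == node else a for a, b in edge_set if node in (a, b)]

    def propagate(edge_set, rounds=10, alpha=0.5):
        scores = dict(priors)
        for _ in range(rounds):
            scores = {
                n: alpha * priors[n]
                   + (1 - alpha) * sum(scores[m] for m in neighbours(n, edge_set))
                     / max(1, len(neighbours(n, edge_set)))
                for n in priors
            }
        return scores

    # ~0.7: above a 0.5 detection threshold, so the attacker node is flagged.
    print("before:", round(propagate(edges)["attacker"], 2))

    # The attacker inserts edges to well-behaved nodes (hand-picked here; the
    # paper's optimization chooses them automatically).
    attacked = edges | {("attacker", "benign1"), ("attacker", "benign3")}
    # ~0.4: below the threshold, so the attacker node now evades detection.
    print("after: ", round(propagate(attacked)["attacker"], 2))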

Latent Backdoor Attacks on Deep Neural Networks

  • Yuanshun Yao
  • Huiying Li
  • Haitao Zheng
  • Ben Y. Zhao

Recent work proposed the concept of backdoor attacks on deep neural networks (DNNs), where misclassification rules are hidden inside normal models, only to be triggered by very specific inputs. However, these “traditional” backdoors assume a context where users train their own models from scratch, which rarely occurs in practice. Instead, users typically customize “Teacher” models already pretrained by providers like Google, through a process called transfer learning. This customization process introduces significant changes to models and disrupts hidden backdoors, greatly reducing the actual impact of backdoors in practice. In this paper, we describe latent backdoors, a more powerful and stealthy variant of backdoor attacks that functions under transfer learning. Latent backdoors are incomplete backdoors embedded into a “Teacher” model, and automatically inherited by multiple “Student” models through transfer learning. If any Student model includes the label targeted by the backdoor, then its customization process completes the backdoor and makes it active. We show that latent backdoors can be quite effective in a variety of application contexts, and validate their practicality through real-world attacks against traffic sign recognition, iris identification of volunteers, and facial recognition of public figures (politicians). Finally, we evaluate 4 potential defenses, and find that only one is effective in disrupting latent backdoors, but might incur a cost in classification accuracy as a tradeoff.

SESSION: Session 9C: Zero-Knowledge Proofs

Succinct Arguments for Bilinear Group Arithmetic: Practical Structure-Preserving Cryptography

  • Russell W. F. Lai
  • Giulio Malavolta
  • Viktoria Ronge

In their celebrated work, Groth and Sahai [EUROCRYPT’08, SICOMP’12] constructed non-interactive zero-knowledge (NIZK) proofs for general bilinear group arithmetic relations, which spawned the entire subfield of structure-preserving cryptography. This branch of the theory of cryptography focuses on modular design of advanced cryptographic primitives. Although the proof systems of Groth and Sahai are a powerful toolkit, their efficiency hits a barrier when the size of the witness is large, as the proof size is linear in that of the witness. In this work, we revisit the problem of proving knowledge of general bilinear group arithmetic relations in zero-knowledge. Specifically, we construct a succinct zero-knowledge argument for such relations, where the communication complexity is logarithmic in the integer and source group components of the witness. Our argument has public-coin setup and verifier and can therefore be turned non-interactive using the Fiat-Shamir transformation in the random oracle model. For the special case of non-bilinear group arithmetic relations with only integer unknowns, our system can be instantiated in non-bilinear groups. In many applications, our argument system can serve as a drop-in replacement of Groth-Sahai proofs, turning existing advanced primitives in the vast literature of structure-preserving cryptography into practically efficient systems with short proofs.

LegoSNARK: Modular Design and Composition of Succinct Zero-Knowledge Proofs

  • Matteo Campanelli
  • Dario Fiore
  • Anaïs Querol

We study the problem of building non-interactive proof systems modularly by linking small specialized “gadget” SNARKs in a lightweight manner. Our motivation is both theoretical and practical. On the theoretical side, modular SNARK designs would be flexible and reusable. Moreover, constructions that link specialized proof systems (e.g., Geppetto) have been successfully employed in previous works; these approaches, however, tend to be ad-hoc and to reinvent the wheel. We propose to fill this gap. In practice, specialized SNARKs have the potential to be more efficient than general-purpose schemes, on which most existing works have focused. If a computation naturally presents different “components” (e.g., one arithmetic circuit and one boolean circuit), a general-purpose scheme would homogenize them into a single representation with a subsequent cost in performance. Through a modular approach one could instead exploit the nuances of a computation and choose the best gadget for each component. Our contribution is LegoSNARK, a “toolbox” (or framework) for commit-and-prove zkSNARKs (CP-SNARKs) that includes: 1) General composition tools: build new CP-SNARKs from proof gadgets for basic relations simply; we also formalize the notion of a commit-carrying SNARK (cc-SNARK). 2) A “lifting” tool: a compiler to add commit-and-prove capabilities to a broad class of existing zkSNARKs efficiently. This makes them interoperable (linkable) within the same computation. For example, one QAP-based scheme can be used to prove one component; another GKR-based scheme can be used to prove another. 3) A collection of succinct proof gadgets for a variety of relations. Additionally, through our framework and gadgets, we are able to obtain new succinct proof systems. Notably: LegoGro16, a commit-and-prove version of the Groth16 zkSNARK, which operates over data committed with a classical Pedersen vector commitment and achieves a 5000× speedup in proving time; LegoUAC, a pairing-based SNARK for arithmetic circuits that has a universal, circuit-independent CRS and proving time linear in the number of circuit gates (in contrast, the recent scheme of Groth et al. (CRYPTO’18) has a quadratic-size CRS and quasilinear proving time); and LegoMM, a CP-SNARK for matrix multiplication that achieves optimal proving complexity.
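
For readers unfamiliar with the commit-and-prove notion that the framework is organised around, the following generic sketch (our notation, not the paper's exact formalism) may help: a Pedersen-style vector commitment fixes the witness once, and every gadget proves its own relation against that same commitment, which is what makes the gadgets linkable.

    A commitment to u = (u_1, ..., u_n) with randomness r is

        c = Com(u; r) = h^r \prod_{i=1}^{n} g_i^{u_i}.

    A CP-SNARK for a relation R then proves knowledge of (u, r, w) such that

        c = Com(u; r)   and   R(x, u, w) = 1,

    so two gadget proofs that refer to the same commitment c are guaranteed to
    talk about the same committed u, without re-encoding u inside either circuit.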

Efficient Zero-Knowledge Arguments in the Discrete Log Setting, Revisited

  • Max Hoffmann
  • Michael Klooß
  • Andy Rupp

Zero-knowledge arguments have become practical, and widely used, especially in the world of Blockchain, for example in Zcash. This work revisits zero-knowledge proofs in the discrete logarithm setting. First, we identify and carve out basic techniques (partly being used implicitly before) to optimise proofs in this setting. In particular, the linear combination of protocols is a useful tool to obtain zero-knowledge and/or reduce communication. With these techniques, we are able to devise zero-knowledge variants of the logarithmic communication arguments by Bootle et al. (EUROCRYPT ’16) and Bünz et al. (S&P ’18), thereby introducing almost no overhead. We then construct a conceptually simple commit-and-prove argument for satisfiability of a set of quadratic equations. Unlike previous work, we are not restricted to rank 1 constraint systems (R1CS). This is, to the best of our knowledge, the first work demonstrating that general quadratic constraints, not just R1CS, are a natural relation in the dlog (or ideal linear commitment) setting. This enables new possibilities for optimisation, as, e.g., any degree n^2 polynomial f(X) can now be “evaluated” with at most 2n quadratic constraints. Our protocols are modular. We easily construct an efficient, logarithmic size shuffle proof, which can be used in electronic voting. Additionally, we take a closer look at quantitative security measures, e.g., the efficiency of an extractor. We formalise short-circuit extraction, which allows us to give tighter bounds on the efficiency of an extractor.
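
One way to see the claimed 2n bound for a degree n^2 polynomial (a reconstruction under our own indexing, not necessarily the paper's exact encoding) is the following block-Horner argument.

    Write f(X) = \sum_{i=0}^{n} X^{n i} L_i(X), where each L_i has degree < n
    (and L_n is just the constant leading coefficient).

    1. Compute the powers x^2, ..., x^n with the n-1 quadratic constraints
       x^{k+1} = x^k \cdot x.
    2. Each L_i(x) is a public linear combination of 1, x, ..., x^{n-1}, so it
       costs no extra quadratic constraint in a linear-commitment setting.
    3. Evaluate by Horner over blocks: t_n = L_n and
       t_i = t_{i+1} \cdot x^n + L_i(x) for i = n-1, ..., 0,
       which uses n further multiplication constraints and yields f(x) = t_0.

    In total this needs (n-1) + n = 2n - 1 <= 2n quadratic constraints.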

Sonic: Zero-Knowledge SNARKs from Linear-Size Universal and Updatable Structured Reference Strings

  • Mary Maller
  • Sean Bowe
  • Markulf Kohlweiss
  • Sarah Meiklejohn

Ever since their introduction, zero-knowledge proofs have become an important tool for addressing privacy and scalability concerns in a variety of applications. In many systems each client downloads and verifies every new proof, and so proofs must be small and cheap to verify. The most practical schemes require either a trusted setup, as in (pre-processing) zk-SNARKs, or verification complexity that scales linearly with the complexity of the relation, as in Bulletproofs. The structured reference strings required by most zk-SNARK schemes can be constructed with multi-party computation protocols, but the resulting parameters are specific to an individual relation. Groth et al. discovered a zk-SNARK protocol with a universal structured reference string that is also updatable, but the string scales quadratically in the size of the supported relations.

Here we describe a zero-knowledge SNARK, Sonic, which supports a universal and continually updatable structured reference string that scales linearly in size. We also describe a generally useful technique in which untrusted “helpers” can compute advice that allows batches of proofs to be verified more efficiently. Sonic proofs are constant size, and in the “helped” batch verification context the marginal cost of verification is comparable with the most efficient SNARKs in the literature.

SESSION: Session 9D: Signatures

The SPHINCS+ Signature Framework

  • Daniel J. Bernstein
  • Andreas Hülsing
  • Stefan Kölbl
  • Ruben Niederhagen
  • Joost Rijneveld
  • Peter Schwabe

We introduce SPHINCS+, a stateless hash-based signature framework. SPHINCS+ has significant advantages over the state of the art in terms of speed, signature size, and security, and is among the nine remaining signature schemes in the second round of the NIST PQC standardization project. One of our main contributions in this context is a new few-time signature scheme that we call FORS. Our second main contribution is the introduction of tweakable hash functions and a demonstration of how they allow for a unified security analysis of hash-based signature schemes. We give a security reduction for SPHINCS+ using this abstraction and derive secure parameters in accordance with the resulting bound. Finally, we present speed results for our optimized implementation of SPHINCS+ and compare to SPHINCS-256, Gravity-SPHINCS, and Picnic.

GALACTICS: Gaussian Sampling for Lattice-Based Constant- Time Implementation of Cryptographic Signatures, Revisited

  • Gilles Barthe
  • Sonia Belaïd
  • Thomas Espitau
  • Pierre-Alain Fouque
  • Mélissa Rossi
  • Mehdi Tibouchi

In this paper, we propose a constant-time implementation of the BLISS lattice-based signature scheme. BLISS is possibly the most efficient lattice-based signature scheme proposed so far, with a level of performance on par with widely used pre-quantum primitives like ECDSA. It is one of the few post-quantum signature schemes to have seen real-world deployment, as part of the strongSwan VPN software suite. The outstanding performance of the BLISS signature scheme stems in large part from its reliance on discrete Gaussian distributions, which allow for better parameters and security reductions. However, that advantage has also proved to be its Achilles’ heel, as discrete Gaussians pose serious challenges in terms of secure implementations. Implementations of BLISS so far have included secret-dependent branches and memory accesses, both as part of the discrete Gaussian sampling and of the essential rejection sampling step in signature generation. These defects have led to multiple devastating timing attacks, and were a key reason why BLISS was not submitted to the NIST post-quantum standardization effort. In fact, almost all of the actual candidates chose to stay away from Gaussians despite their efficiency advantage, due to the serious concerns surrounding implementation security. Moreover, naive countermeasures will often not cut it: we show that a reasonable-looking countermeasure suggested in previous work to protect the BLISS rejection sampling can again be defeated using novel timing attacks, in which the timing information is fed to a phase retrieval machine learning algorithm in order to achieve a full key recovery. Fortunately, we also present careful implementation techniques that allow us to describe an implementation of BLISS with complete timing attack protection, achieving the same level of efficiency as the original unprotected code, without resorting to floating point arithmetic or platform-specific optimizations like AVX intrinsics. These techniques, including a new approach to the polynomial approximation of transcendental functions, can also be applied to the masking of the BLISS signature scheme, and will hopefully make more efficient and secure implementations of lattice-based cryptography possible going forward.

Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures

  • Dennis Jackson
  • Cas Cremers
  • Katriel Cohn-Gordon
  • Ralf Sasse

The standard definition of security for digital signatures – existential unforgeability – does not ensure certain properties that protocol designers might expect. For example, in many modern signature schemes, one signature may verify against multiple distinct public keys. It is left to protocol designers to ensure that the absence of these properties does not lead to attacks.

Modern automated protocol analysis tools are able to provably exclude large classes of attacks on complex real-world protocols such as TLS 1.3 and 5G. However, their abstraction of signatures (implicitly) assumes much more than existential unforgeability, thereby missing several classes of practical attacks.

We give a hierarchy of new formal models for signature schemes that captures these subtleties, and thereby allows us to analyse (often unexpected) behaviours of real-world protocols that were previously out of reach of symbolic analysis. We implement our models in the Tamarin Prover, yielding the first way to perform these analyses automatically, and validate them on several case studies. In the process, we find new attacks on DRKey and SOAP’s WS-Security, both protocols which were previously proven secure in traditional symbolic models.

Membership Privacy for Fully Dynamic Group Signatures

  • Michael Backes
  • Lucjan Hanzlik
  • Jonas Schneider-Bensch

Group signatures present a compromise between the traditional goals of digital signatures and the need for signer privacy, allowing for the creation of unforgeable signatures in the name of a group which reveal nothing about the actual signer’s identity beyond their group membership. An important consideration that is absent in prevalent models is that group membership itself may be sensitive information, especially if group membership is dynamic, i.e. membership status may change over time. We address this issue by introducing formal notions of membership privacy for fully dynamic group signature schemes, which can be easily integrated into the most expressive models of group signature security to date. We then propose a generic construction for a fully dynamic group signature scheme with membership privacy that is based on signatures with flexible public key (SFPK) and signatures on equivalence classes (SPSEQ). Finally, we devise novel techniques for SFPK to construct a highly efficient standard model scheme (i.e. without random oracles) that provides shorter signatures than even the non-private state-of-the-art from standard assumptions. This shows that, although the strictly stronger security notions we introduce have been completely unexplored in the study of fully dynamic group signatures so far, they do not come at an additional cost in practice.

SESSION: Session 9E: Web Censorship and Auditing

Geneva: Evolving Censorship Evasion Strategies

  • Kevin Bock
  • George Hughey
  • Xiao Qiang
  • Dave Levin

Researchers and censoring regimes have long engaged in a cat-and-mouse game, leading to increasingly sophisticated Internet-scale censorship techniques and methods to evade them. In this paper, we take a drastic departure from the previously manual evade-detect cycle by developing techniques to automate the discovery of censorship evasion strategies. We present Geneva, a novel genetic algorithm that evolves packet-manipulation-based censorship evasion strategies against nation-state level censors. Geneva composes, mutates, and evolves sophisticated strategies out of four basic packet manipulation primitives (drop, tamper headers, duplicate, and fragment). With experiments performed both in-lab and against several real censors (in China, India, and Kazakhstan), we demonstrate that Geneva is able to quickly and independently re-derive most strategies from prior work, and derive novel subspecies and altogether new species of packet manipulation strategies. Moreover, Geneva discovers successful strategies that prior work posited were not effective, and evolves extinct strategies into newly working variants. We analyze the novel strategies Geneva creates to infer previously unknown behavior in censors. Geneva is a first step towards automating censorship evasion; to this end, we have made our code and data publicly available.
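
To make the evolutionary loop concrete, the sketch below evolves strategies over three toy manipulation primitives against an invented keyword-matching "censor" and a "server" that simply reassembles whatever arrives; fitness rewards strategies that keep the data flowing while never showing the censor a packet containing the full keyword. All of the modelling here (primitives, censor, server, fitness weights) is illustrative; the real system manipulates live TCP/IP packet streams and evaluates fitness against actual censors.

    import random

    PRIMITIVES = ["drop", "duplicate", "fragment"]
    KEYWORD = "FORBIDDEN"

    def censor_blocks(packets):
        # On-path keyword censor: triggers only on a single packet that
        # carries the complete keyword.
        return any(KEYWORD in p for p in packets)

    def server_receives(packets):
        # The end host reassembles whatever arrives, in order.
        return KEYWORD in "".join(packets)

    def apply_strategy(strategy, packets):
        for action in strategy:
            new = []
            for p in packets:
                if action == "drop":
                    continue
                elif action == "duplicate":
                    new += [p, p]
                elif action == "fragment" and len(p) > 1:
                    new += [p[: len(p) // 2], p[len(p) // 2:]]
                else:
                    new.append(p)
            packets = new
        return packets

    def fitness(strategy):
        out = apply_strategy(strategy, [KEYWORD])
        score = 0.0
        if server_receives(out):
            score += 1.0                       # functional: data gets through
        if not censor_blocks(out):
            score += 1.0                       # evasive: censor never fires
        return score - 0.05 * len(strategy)    # prefer short strategies

    population = [[random.choice(PRIMITIVES)] for _ in range(20)]
    for _ in range(25):                        # evolve for a few generations
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        children = [p + [random.choice(PRIMITIVES)] for p in parents]
        population = parents + children
    best = max(population, key=fitness)
    print("evolved strategy:", best, "fitness:", round(fitness(best), 2))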

Conjure: Summoning Proxies from Unused Address Space

  • Sergey Frolov
  • Jack Wampler
  • Sze Chuen Tan
  • J. Alex Halderman
  • Nikita Borisov
  • Eric Wustrow

Refraction Networking (formerly known as “Decoy Routing”) has emerged as a promising next-generation approach for circumventing Internet censorship. Rather than trying to hide individual circumvention proxy servers from censors, proxy functionality is implemented in the core of the network, at cooperating ISPs in friendly countries. Any connection that traverses these ISPs could be a conduit for the free flow of information, so censors cannot easily block access without also blocking many legitimate sites. While one Refraction scheme, TapDance, has recently been deployed at ISP-scale, it suffers from several problems: a limited number of “decoy” sites in realistic deployments, high technical complexity, and undesirable tradeoffs between performance and observability by the censor. These challenges may impede broader deployment and ultimately allow censors to block such techniques. We present Conjure, an improved Refraction Networking approach that overcomes these limitations by leveraging unused address space at deploying ISPs. Instead of using real websites as the decoy destinations for proxy connections, our scheme connects to IP addresses where no web server exists, leveraging proxy functionality from the core of the network. These phantom hosts are difficult for a censor to distinguish from real ones, but can be used by clients as proxies. We define the Conjure protocol, analyze its security, and evaluate a prototype using an ISP testbed. Our results suggest that Conjure can be harder to block than TapDance, is simpler to maintain and deploy, and offers substantially better network performance.

You Shall Not Join: A Measurement Study of Cryptocurrency Peer-to-Peer Bootstrapping Techniques

  • Angelique Faye Loe
  • Elizabeth Anne Quaglia

Cryptocurrencies are digital assets which depend upon the use of distributed peer-to-peer networks. The method a new peer uses to initially join a peer-to-peer network is known as bootstrapping. The ability to bootstrap without the use of a centralized resource is an unresolved challenge. In this paper we survey the bootstrapping techniques used by 74 cryptocurrencies and find that censorship-prone methods such as DNS seeding and IP hard-coding are the most prevalent. In response to this finding, we test two other bootstrapping techniques less susceptible to censorship, Tor and ZMap, to determine if they are operationally feasible alternatives more resilient to censorship. We perform a global measurement study of DNS query responses for each of the 92 DNS seeds discovered, across 42 countries, using the distributed RIPE Atlas network. This provides details of each cryptocurrency’s peer-to-peer network topology and also highlights instances of DNS outages and query manipulation impacting the bootstrapping process. Our study also reveals that the source code of the cryptocurrencies researched comes from only five main repositories, hence accounting for the inheritance of legacy bootstrapping methods. Finally, we discuss the implications of our findings and provide recommendations to mitigate the risks exposed.

SAMPL: Scalable Auditability of Monitoring Processes using Public Ledgers

  • Gaurav Panwar
  • Roopa Vishwanathan
  • Satyajayant Misra
  • Austin Bos

Organized surveillance, especially by governments, poses a major challenge to individual privacy, due to the resources governments have at their disposal, and the possibility of overreach. Given the impact of invasive monitoring, in most democratic countries, government surveillance is, in theory, monitored and subject to public oversight to guard against violations. In practice, there is a fine balance between safeguarding individuals’ privacy rights and not diluting the efficacy of national security investigations, as exemplified by reports on government surveillance programs that have caused public controversy, and have been challenged by civil and privacy rights organizations. Surveillance is generally conducted through a mechanism where federal agencies obtain a warrant from a federal or state judge (e.g., the US FISA court, Supreme Court in Canada) to subpoena a company or service-provider (e.g., Google, Microsoft) for their customers’ data. The courts provide annual statistics on the requests (accepted, rejected), while the companies provide annual transparency reports for public auditing. However, in practice, the statistical information provided by the courts and companies is at a very high level, generic, is released after-the-fact, and is inadequate for auditing the operations. Often this is attributed to the lack of scalable mechanisms for reporting and transparent auditing. In this paper, we present SAMPL, a novel auditing framework which leverages cryptographic mechanisms, such as zero-knowledge proofs, Pedersen commitments, Merkle trees, and public ledgers, to create a scalable mechanism for auditing electronic surveillance processes involving multiple actors. SAMPL is the first framework that can identify the actors (e.g., agencies and companies) that violate the purview of the court orders. We experimentally demonstrate the scalability of SAMPL in handling concurrent monitoring processes without undermining their secrecy and auditability.

SESSION: Session 10A: Cyberphysical Security

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

  • Yulong Cao
  • Chaowei Xiao
  • Benjamin Cyr
  • Yimeng Zhou
  • Won Park
  • Sara Rampazzi
  • Qi Alfred Chen
  • Kevin Fu
  • Z. Morley Mao

In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization and design an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.

LibreCAN: Automated CAN Message Translator

  • Mert D. Pesé
  • Troy Stacer
  • C. Andrés Campos
  • Eric Newberry
  • Dongyao Chen
  • Kang G. Shin

Modern Connected and Autonomous Vehicles (CAVs) are equipped with an increasing number of Electronic Control Units (ECUs), many of which produce large amounts of data. Data is exchanged between ECUs via an in-vehicle network, with the Controller Area Network (CAN) bus being the de facto standard in contemporary vehicles. Furthermore, CAVs have not only physical interfaces but also increased data connectivity to the Internet via their Telematic Control Units (TCUs), enabling remote access via mobile devices. It is also possible to tap into, and read/write data from/to the CAN bus, as data transmitted on the CAN bus is not encrypted. This naturally generates concerns about automotive cybersecurity. One commonality among most vehicular security attacks reported to date is that they ultimately require write access to the CAN bus. In order to cause targeted and intentional changes in vehicle behavior, malicious CAN injection attacks require knowledge of the CAN message format. However, since this format is proprietary to OEMs and can differ even among different models of a single make of vehicle, one must manually reverse-engineer the CAN message format of each vehicle they target — a time-consuming and tedious process that does not scale. To mitigate this difficulty, we develop LibreCAN, which can translate most CAN messages with minimal effort. Our extensive evaluation on multiple vehicles demonstrates LibreCAN’s efficiency in terms of accuracy, coverage, required manual effort and scalability to any vehicle.
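
At its core, translating a CAN message means finding where in the 8-byte payload a physical signal lives and how to scale it. The sketch below (frame contents, trace, and helper names are all invented) decodes candidate byte fields from a recorded trace and picks the field that correlates best with a simultaneously recorded phone-GPS speed trace, which is the flavour of cross-correlation matching such a translator automates; fitting the scale and offset would be a subsequent regression step.

    from itertools import product

    def extract(frame, start_byte, length, scale=1.0, offset=0.0, big_endian=True):
        """Decode one candidate signal from a raw CAN payload (bytes object)."""
        raw = frame[start_byte:start_byte + length]
        value = int.from_bytes(raw, "big" if big_endian else "little")
        return value * scale + offset

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0

    # A short, invented trace: payloads of one CAN ID alongside the phone-GPS
    # speed recorded at the same instants (the ground-truth side channel).
    frames = [bytes([0, 10, 7, 0]), bytes([0, 20, 3, 0]),
              bytes([0, 40, 9, 0]), bytes([0, 80, 1, 0])]
    gps_speed = [5.0, 10.0, 20.0, 40.0]

    # Score every (start_byte, length) candidate field against the GPS trace
    # and keep the best match.
    best = max(
        ((sb, ln) for sb, ln in product(range(4), (1, 2)) if sb + ln <= 4),
        key=lambda c: pearson([extract(f, *c) for f in frames], gps_speed),
    )
    print("speed signal most likely lives at byte", best[0], "length", best[1])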

Trick or Heat?: Manipulating Critical Temperature-Based Control Systems Using Rectification Attacks

  • Yazhou Tu
  • Sara Rampazzi
  • Bin Hao
  • Angel Rodriguez
  • Kevin Fu
  • Xiali Hei

Temperature sensing and control systems are widely used in the closed-loop control of critical processes such as maintaining the thermal stability of patients, or in alarm systems for detecting temperature-related hazards. However, the security of these systems has yet to be completely explored, leaving potential attack surfaces that can be exploited to take control over critical systems.

In this paper we investigate the reliability of temperature-based control systems from a security and safety perspective. We show how unexpected consequences and safety risks can be induced by physical-level attacks on analog temperature sensing components. For instance, we demonstrate that an adversary could remotely manipulate the temperature sensor measurements of an infant incubator to cause potential safety issues, without tampering with the victim system or triggering automatic temperature alarms. This attack exploits the unintended rectification effect that can be induced in operational and instrumentation amplifiers to control the sensor output, tricking the internal control loop of the victim system to heat up or cool down. Furthermore, we show how the exploit of this hardware-level vulnerability could affect different classes of analog sensors that share similar signal conditioning processes.

Our experimental results indicate that conventional defenses commonly deployed in these systems are not sufficient to mitigate the threat, so we propose a prototype design of a low-cost anomaly detector for critical applications to ensure the integrity of temperature sensor signals.

SESSION: Session 10B: TEE II

OPERA: Open Remote Attestation for Intel’s Secure Enclaves

  • Guoxing Chen
  • Yinqian Zhang
  • Ten-Hwang Lai

Intel Software Guard Extensions (SGX) remote attestation enables enclaves to authenticate hardware inside which they run, and attest the integrity of their enclave memory to the remote party. To enforce direct control of attestation, Intel mandates attestation to be verified by Intel’s attestation service. This Intel-centric attestation model, however, neither protects privacy nor performs efficiently when distributed and frequent attestation is required. This paper presents OPERA, an Open Platform for Enclave Remote Attestation. Without involving Intel’s attestation service while conducting attestation, OPERA is unchained from Intel, although it relies on Intel to establish a chain of trust whose anchor point is the secret rooted in SGX hardware. OPERA is open, as the implementation of its attestation service is completely open, allowing any enclave developer to run her own OPERA service, and its execution is publicly verifiable and hence trustworthy; OPERA is privacy-preserving, as the attestation service does not learn which enclave is being attested or when the attestation takes place; OPERA is performant, as it does not rely on a single-point-of-verification and also reduces the latency of verification.

Towards Memory Safe Enclave Programming with Rust-SGX

  • Huibo Wang
  • Pei Wang
  • Yu Ding
  • Mingshen Sun
  • Yiming Jing
  • Ran Duan
  • Long Li
  • Yulong Zhang
  • Tao Wei
  • Zhiqiang Lin

Intel Software Guard eXtension (SGX), a hardware-supported trusted execution environment (TEE), is designed to protect security-critical applications. However, it does not eliminate traditional memory corruption vulnerabilities in the software running inside an enclave, since enclave software is still developed in type-unsafe languages such as C/C++. This paper presents RUST-SGX, an efficient and layered approach to exterminating memory corruption for software running inside SGX enclaves. The key idea is to enable the development of enclave programs in Rust, an efficient memory-safe systems language, through a RUST-SGX SDK, by solving the key challenges of how to (1) make SGX software memory safe and (2) meanwhile run as efficiently as with the SDK provided by Intel. We therefore propose to build RUST-SGX atop the Intel SGX SDK and tame unsafe components with formally proven memory safety. We have implemented RUST-SGX and tested it with a series of benchmark programs. Our evaluation results show that RUST-SGX imposes little extra overhead (less than 5% with respect to SGX-specific features and services compared to software developed with the Intel SGX SDK) while providing stronger memory safety.

LightBox: Full-stack Protected Stateful Middlebox at Lightning Speed

  • Huayi Duan
  • Cong Wang
  • Xingliang Yuan
  • Yajin Zhou
  • Qian Wang
  • Kui Ren

Running off-site software middleboxes at third-party service providers has become a popular practice. However, routing large volumes of raw traffic, which may carry sensitive information, to a remote site for processing raises severe security concerns. Prior solutions often abstract away important factors pertinent to real-world deployment. In particular, they overlook the significance of metadata protection and stateful processing. Unprotected traffic metadata, such as low-level headers, sizes, and counts, can be exploited to learn supposedly encrypted application contents. Meanwhile, tracking the states of hundreds of thousands of flows concurrently is often indispensable in production-level middleboxes deployed in real networks.

We present LightBox, the first system that can drive off-site middleboxes at near-native speed with stateful processing and the most comprehensive protection to date. Built upon commodity trusted hardware, Intel SGX, LightBox is the product of our systematic investigation of how to overcome the inherent limitations of secure enclaves using domain knowledge and customization. First, we introduce an elegant virtual network interface that allows convenient access to fully protected packets at line rate without leaving the enclave, as if from the trusted source network. Second, we provide complete flow state management for efficient stateful processing, by tailoring a set of data structures and algorithms optimized for the highly constrained enclave space. Extensive evaluations demonstrate that LightBox, with all security benefits, can achieve 10Gbps packet I/O, and that with case studies on three stateful middleboxes, it can operate at near-native speed.

SESSION: Session 10C: Secret Sharing

CHURP: Dynamic-Committee Proactive Secret Sharing

  • Sai Krishna Deepak Maram
  • Fan Zhang
  • Lun Wang
  • Andrew Low
  • Yupeng Zhang
  • Ari Juels
  • Dawn Song

We introduce CHURP (CHUrn-Robust Proactive secret sharing). CHURP enables secure secret-sharing in dynamic settings, where the committee of nodes storing a secret changes over time. Designed for blockchains, CHURP has lower communication complexity than previous schemes: $O(n)$ on-chain and $O(n^2)$ off-chain in the optimistic case of no node failures. CHURP includes several technical innovations: An efficient new proactivization scheme of independent interest, a technique (using asymmetric bivariate polynomials) for efficiently changing secret-sharing thresholds, and a hedge against setup failures in an efficient polynomial commitment scheme. We also introduce a general new technique for inexpensive off-chain communication across the peer-to-peer networks of permissionless blockchains. We formally prove the security of CHURP, report on an implementation, and present performance measurements.

Efficient Verifiable Secret Sharing with Share Recovery in BFT Protocols

  • Soumya Basu
  • Alin Tomescu
  • Ittai Abraham
  • Dahlia Malkhi
  • Michael K. Reiter
  • Emin Gün Sirer

Byzantine fault tolerant state machine replication (SMR) provides powerful integrity guarantees, but fails to provide any privacy guarantee whatsoever. A natural way to add such privacy guarantees is to secret-share state instead of fully replicating it. Such a combination would enable simple solutions to difficult problems, such as a fair exchange or a distributed certification authority. However, incorporating secret-shared state into traditional Byzantine fault tolerant (BFT) SMR protocols presents unique challenges. BFT protocols often use a network model that has some degree of asynchrony, making verifiable secret sharing (VSS) unsuitable. However, fully asynchronous VSS (AVSS) is unnecessary as well, since the BFT algorithm provides a broadcast channel. We first present the VSS with share recovery problem, which is the subproblem of AVSS required to incorporate secret-shared state into a BFT engine. Then, we provide the first VSS with share recovery solution, KZG-VSSR, in which a failure-free sharing incurs only a constant number of cryptographic operations per replica. Finally, we show how to efficiently integrate any instantiation of VSSR into a BFT replication protocol while incurring only constant overhead. Instantiating VSSR with prior AVSS protocols would require a quadratic communication cost for a single shared value and incur a linear overhead when incorporated into BFT replication. We demonstrate our end-to-end solution via a private key-value store built using BFT replication and two instantiations of VSSR, KZG-VSSR and Ped-VSSR, and present its evaluation.

Two-party Private Set Intersection with an Untrusted Third Party

  • Phi Hung Le
  • Samuel Ranellucci
  • S. Dov Gordon

We construct new protocols for two parties to securely compute on the items in their intersection. Our protocols make use of an untrusted third party that has no input. The use of this party allows us to construct highly efficient protocols that are secure against a single malicious corruption.
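For readers unfamiliar with the setting, the sketch below shows the general flavor of server-aided PSI, not the protocol of this paper: the two input parties share a PRF key that the helper never sees, send only PRF values of their items, and let the helper intersect the opaque tags. This toy variant assumes a semi-honest, non-colluding helper, whereas the paper targets security against a single malicious corruption.

```python
# Toy server-aided PSI for intuition only (not this paper's construction).
import hmac, hashlib, os

def prf(key: bytes, item: str) -> bytes:
    return hmac.new(key, item.encode(), hashlib.sha256).digest()

# Setup: the two input parties agree on a key the helper never sees.
key = os.urandom(32)
set_a = {"alice@example.com", "bob@example.com", "carol@example.com"}
set_b = {"bob@example.com", "dave@example.com"}

# Each party sends only PRF values (opaque tags) to the untrusted helper.
tagged_a = {prf(key, x): x for x in set_a}
tags_b = {prf(key, x) for x in set_b}

# Helper work: intersect opaque tags without learning the items themselves.
common_tags = set(tagged_a) & tags_b

# Party A maps the returned tags back to its items.
intersection = {tagged_a[t] for t in common_tags}
print(intersection)   # {'bob@example.com'}
```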

SESSION: Session 10D: Mobile Security

DeepIntent: Deep Icon-Behavior Learning for Detecting Intention-Behavior Discrepancy in Mobile Apps

  • Shengqu Xi
  • Shao Yang
  • Xusheng Xiao
  • Yuan Yao
  • Yayuan Xiong
  • Fengyuan Xu
  • Haoyu Wang
  • Peng Gao
  • Zhuotao Liu
  • Feng Xu
  • Jian Lu

Mobile apps have become an indispensable part of our daily lives. However, there exist many potentially harmful apps that may exploit users’ private data, e.g., by collecting their information or sending messages in the background. Keeping these undesired apps away from the market is an ongoing challenge. While existing work provides techniques to determine what apps do, e.g., leaking information, little work has been done to answer whether an app’s behaviors are compatible with the intentions reflected by its UI. In this work, we explore the synergistic cooperation of deep learning and program analysis as a first step to address this challenge. Specifically, we focus on the UI widgets that respond to user interactions and examine whether the intentions reflected by their UIs justify their permission uses. We present DeepIntent, a framework that uses novel deep icon-behavior learning to learn an icon-behavior model from a large number of popular apps and detect intention-behavior discrepancies. In particular, DeepIntent provides program analysis techniques to associate the intentions (i.e., icons and contextual texts) with UI widgets’ program behaviors, and to infer the labels (i.e., permission uses) for the UI widgets based on the program behaviors, enabling the construction of a large-scale, high-quality training dataset. Based on the results of the static analysis, DeepIntent uses deep learning techniques that jointly model icons and their contextual texts to learn an icon-behavior model, and detects intention-behavior discrepancies by computing outlier scores based on the learned model. We evaluate DeepIntent on a large-scale dataset (9,891 benign apps and 16,262 malicious apps). With 80% of the benign apps for training and the remaining apps for evaluation, DeepIntent detects discrepancies with AUC scores of 0.8656 and 0.8839 on benign apps and malicious apps, respectively, achieving 39.9% and 26.1% relative improvements over the state-of-the-art approaches.

The Art and Craft of Fraudulent App Promotion in Google Play

  • Mizanur Rahman
  • Nestor Hernandez
  • Ruben Recabarren
  • Syed Ishtiaque Ahmed
  • Bogdan Carbunar

Black Hat App Search Optimization (ASO), in the form of fake reviews and sockpuppet accounts, is prevalent on peer-opinion sites, e.g., app stores, with negative implications for the digital and real lives of their users. To detect and filter fraud, a growing body of research has provided insights into various aspects of fraud posting activities and made assumptions about the working procedures of the fraudsters from online data. However, such assumptions often lack empirical evidence from the actual fraud perpetrators. To address this problem, in this paper we present results of both a qualitative study with 18 ASO workers recruited from 5 freelancing sites, concerning activities they performed on Google Play, and a quantitative investigation with fraud-related data collected from another 39 ASO workers. We reveal findings concerning various aspects of ASO worker capabilities and behaviors, including novel insights into their working patterns and supporting evidence for several existing assumptions. Further, we found and report participant-revealed techniques to bypass Google-imposed verifications, concrete strategies to avoid detection, and even strategies that leverage fraud detection to enhance fraud efficacy. We report a Google site vulnerability that enabled us to infer the mobile device models used to post more than 198 million reviews in Google Play, including 9,942 fake reviews. We discuss the deeper implications of our findings, including their potential use to develop the next generation of fraud detection and prevention systems.

CryptoGuard: High Precision Detection of Cryptographic Vulnerabilities in Massive-sized Java Projects

  • Sazzadur Rahaman
  • Ya Xiao
  • Sharmin Afrose
  • Fahad Shaon
  • Ke Tian
  • Miles Frantz
  • Murat Kantarcioglu
  • Danfeng (Daphne) Yao

Cryptographic API misuses, such as exposed secrets, predictable random numbers, and vulnerable certificate verification, seriously threaten software security. The vision of automatically screening cryptographic API calls in massive-sized (e.g., millions of LoC) programs is not new. However, hindered by the practical difficulty of reducing false positives without compromising analysis quality, this goal has not been accomplished. CryptoGuard is a set of detection algorithms that refine program slices by identifying language-specific irrelevant elements. The refinements reduce false alerts by 76% to 80% in our experiments. Running our tool, CryptoGuard, on 46 high-impact large-scale Apache projects and 6,181 Android apps generated many security insights. Our findings helped multiple popular Apache projects to harden their code, including Spark, Ranger, and Ofbiz. We also have made progress towards the science of analysis in this space, including manually analyzing 1,295 Apache alerts, confirming 1,277 true positives (98.61% precision), and in-depth comparison with leading solutions including CrySL, SpotBugs, and Coverity.

SESSION: Session 10E: Certificates

Let’s Encrypt: An Automated Certificate Authority to Encrypt the Entire Web

  • Josh Aas
  • Richard Barnes
  • Benton Case
  • Zakir Durumeric
  • Peter Eckersley
  • Alan Flores-López
  • J. Alex Halderman
  • Jacob Hoffman-Andrews
  • James Kasten
  • Eric Rescorla
  • Seth Schoen
  • Brad Warren

Let’s Encrypt is a free, open, and automated HTTPS certificate authority (CA) created to advance HTTPS adoption to the entire Web. Since its launch in late 2015, Let’s Encrypt has grown to become the world’s largest HTTPS CA, accounting for more currently valid certificates than all other browser-trusted CAs combined. By January 2019, it had issued over 538 million certificates for 223 million domain names. We describe how we built Let’s Encrypt, including the architecture of the CA software system (Boulder) and the structure of the organization that operates it (ISRG), and we discuss lessons learned from the experience. We also describe the design of ACME, the IETF-standard protocol we created to automate CA–server interactions and certificate issuance, and survey the diverse ecosystem of ACME clients, including Certbot, a software agent we created to automate HTTPS deployment. Finally, we measure Let’s Encrypt’s impact on the Web and the CA ecosystem. We hope that the success of Let’s Encrypt can provide a model for further enhancements to the Web PKI and for future Internet security infrastructure.

You Are Who You Appear to Be: A Longitudinal Study of Domain Impersonation in TLS Certificates

  • Richard Roberts
  • Yaelle Goldschlag
  • Rachel Walter
  • Taejoong Chung
  • Alan Mislove
  • Dave Levin

The public key infrastructure (PKI) provides the fundamental property of authentication: the means by which users can know with whom they are communicating online. The PKI ensures end-to-end authenticity insofar as it verifies a chain of certificates, but the true final step in end-to-end authentication comes when the user verifies that the website is what they expect. To this end, users are expected to evaluate domain names, but various “domain impersonation” attacks threaten their ability to do so. Indeed, if a user could be easily tricked into believing that amazon.com-offers.com is actually amazon.com, then, coupled with security indicators like a lock icon, users could believe that they have a secure connection to Amazon.

We study this threat to end-to-end authentication: (1) We introduce a new class of impersonation attack that we call target embedding, which embeds an entire target domain, unmodified, within the subdomains of the attacker’s actual domain. (2) We perform a user study with the specific goal of understanding whether users fall for target embedding, and how its efficacy compares to other popular impersonation attacks (typosquatting, combosquatting, and homographs). We find that target embedding is the most effective against modern browsers. (3) Using all HTTPS certificates collected by Censys, we perform a longitudinal analysis of how target-embedding impersonation has evolved, who is responsible for issuing impersonating certificates, who hosts the domains, where the economic choke-points are, and more. We close with a discussion of counter-measures against this growing threat.

Certificate Transparency in the Wild: Exploring the Reliability of Monitors

  • Bingyu Li
  • Jingqiang Lin
  • Fengjun Li
  • Qiongxiao Wang
  • Qi Li
  • Jiwu Jing
  • Congli Wang

To detect fraudulent TLS server certificates and improve the accountability of certification authorities (CAs), certificate transparency (CT) is proposed to record certificates in publicly-visible logs, from which the monitors fetch all certificates and watch for suspicious ones. However, if the monitors, either domain owners themselves or third-party services, fail to return a complete set of certificates issued for a domain of interest, potentially fraudulent certificates may not be detected and then the CT framework becomes less reliable. This paper presents the first systematic study on CT monitors. We analyze the data in 88 public logs and the services of 5 active third-party monitors regarding 3,000,431 certificates of 6,000 selected Alexa Top-1M websites. We find that although CT allows ordinary domain owners to act as monitors, it is impractical for them to perform reliable processing by themselves, due to the rapidly increasing volume of certificates in public logs (e.g., on average 5 million records or 28.29 GB daily for the minimal set of logs that need to be monitored). Moreover, our study discloses that (a) none of the third-party monitors guarantees to return the complete set of certificates for a domain, and (b) for some domains, even the union of the certificates returned by the five third-party monitors can probably be incomplete. As a result, the certificates accepted by CT-enabled browsers are not absolutely visible to the claimed domain owners, even when CT is adopted with well-functioning logs. The risk of invisible fraudulent certificates in public logs raises doubts on the reliability of CT in practice.

SESSION: Posters

POSTER: Detecting Audio Adversarial Example through Audio Modification

  • Hyun Kwon
  • Hyunsoo Yoon
  • Ki-Woong Park

Deep neural networks (DNNs) perform well in image recognition, speech recognition, pattern analysis, and intrusion detection. However, DNNs are vulnerable to adversarial examples, which add a small amount of noise to the original samples. Adversarial examples have mainly been studied for images, but their effect in the audio domain is now of great interest. For example, adding a small distortion that is difficult for humans to perceive can create an audio adversarial example that humans hear correctly but that the machine misrecognizes. A defense against audio adversarial examples is therefore needed. In this paper, we propose a method to detect audio adversarial examples. The key idea is to add a new low-level distortion through audio modification: the classification result of an adversarial example changes sharply under this distortion, whereas the classification result of an original sample changes very little. Using this difference, we detect audio adversarial examples. To verify the proposed method, we used the Mozilla Common Voice dataset and the DeepSpeech model as the target model. The experimental results show that the recognition accuracy of adversarial examples decreased to 6.21% at approximately 12 dB of added distortion, allowing adversarial examples to be distinguished from the original audio samples.
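A minimal sketch of the detection idea follows; `transcribe` stands in for a black-box speech-to-text model such as DeepSpeech, and the noise level and similarity threshold are illustrative assumptions rather than the authors’ exact parameters.

```python
# Detection sketch: re-transcribe the input after adding low-level noise and
# flag it if the transcription changes a lot (adversarial examples are far
# more sensitive to this distortion than benign audio).
import difflib
import numpy as np

def add_low_level_noise(audio: np.ndarray, snr_db: float = 12.0) -> np.ndarray:
    """Add Gaussian noise at roughly the given signal-to-noise ratio."""
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

def looks_adversarial(audio: np.ndarray, transcribe, similarity_threshold: float = 0.8) -> bool:
    """Flag the input if a small distortion changes the transcription substantially."""
    original_text = transcribe(audio)
    distorted_text = transcribe(add_low_level_noise(audio))
    similarity = difflib.SequenceMatcher(None, original_text, distorted_text).ratio()
    return similarity < similarity_threshold
```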

Poster: Fuzzing IoT Firmware via Multi-stage Message Generation

  • Bo Yu
  • Pengfei Wang
  • Tai Yue
  • Yong Tang

In this work, we present IoTHunter, the first grey-box fuzzer for fuzzing stateful protocols in IoT firmware. IoTHunter addresses the state scheduling problem with a multi-stage message generation mechanism based on runtime monitoring of IoT firmware. We evaluate IoTHunter with a set of real-world programs, and the results show that IoTHunter outperforms the black-box fuzzer boofuzz, achieving 2.2x, 2.0x, and 2.5x increases in function coverage, block coverage, and edge coverage, respectively. IoTHunter also found five new vulnerabilities in the firmware of the Mikrotik home router, which have been reported to the vendor.

Snout: An Extensible IoT Pen-Testing Tool

  • John Mikulskis
  • Johannes K. Becker
  • Stefan Gvozdenovic
  • David Starobinski

Network mapping tools designed for IP-based networks generally do not provide access to non-IP based wireless protocols used by Internet of Things (IoT) devices, such as Zigbee and Bluetooth LE. We present Snout, a versatile and extensible software defined radio-based tool for IoT network mapping and penetration testing. Snout is geared towards the various IoT protocols that are not accessible with traditional network enumeration tools, such as Nmap. The tool allows for device enumeration, vulnerability assessment, as well as more offensive techniques such as packet replay and spoofing, which we demonstrate for the Zigbee protocol. Snout is built on an open-source stack, and is designed for extensibility towards other IoT protocols and capabilities.

POSTER: Traffic Splitting to Counter Website Fingerprinting

  • Wladimir De la Cadena
  • Asya Mitseva
  • Jan Pennekamp
  • Jens Hiller
  • Fabian Lanze
  • Thomas Engel
  • Klaus Wehrle
  • Andriy Panchenko

Website fingerprinting (WFP) is a special type of traffic analysis that aims to infer the websites visited by a user. Recent studies have shown that WFP targeting Tor users is notably more effective than previously expected. At the same time, state-of-the-art defenses have been shown to be less effective than assumed. In response, we present a novel WFP defense that splits traffic over multiple entry nodes to limit the data a single malicious entry node can observe. We explore several traffic-splitting strategies to distribute user traffic and establish that our weighted random strategy dramatically reduces the accuracy of four state-of-the-art WFP attacks from nearly 95% to less than 35%, without adding any artificial delays or dummy traffic.
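The following sketch illustrates weighted-random splitting in the abstract (the exact weighting and scheduling used in the defense are not specified here): each cell is assigned to one of several entry guards according to per-connection random weights, so any single guard observes only a distorted sub-trace.

```python
# Illustrative weighted-random traffic splitting across entry guards.
import random

def make_split_schedule(cells, num_guards=3, seed=None):
    rng = random.Random(seed)
    # Draw fresh random weights per connection so the split pattern varies.
    weights = [rng.random() for _ in range(num_guards)]
    total = sum(weights)
    weights = [w / total for w in weights]
    assignment = rng.choices(range(num_guards), weights=weights, k=len(cells))
    per_guard = {g: [] for g in range(num_guards)}
    for cell, guard in zip(cells, assignment):
        per_guard[guard].append(cell)
    return per_guard

# Example: 10 dummy cells split across 3 guards.
print({g: len(c) for g, c in make_split_schedule(list(range(10)), seed=1).items()})
```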

Force vs. Nudge: Comparing Users’ Pattern Choices on SysPal and TinPal

  • Harshal Tupsamudre
  • Sukanya Vaddepalli
  • Vijayanand Banahatti
  • Sachin Lodha

Android’s 3X3 graphical pattern lock scheme is one of the most widely used authentication methods on smartphone devices. However, users choose 3X3 patterns from a small subspace of all 389,112 possible patterns. The two recently proposed interfaces, SysPal by Cho et al. and TinPal by the authors, demonstrate that it is possible to influence users’ 3X3 pattern choices by making small modifications to the existing interface. While SysPal forces users to include one, two, or three system-assigned random dots in their pattern, TinPal employs a highlighting mechanism to inform users about the set of dots reachable from the currently selected dot. Both interfaces improved the security of 3X3 patterns without affecting usability, but no comparison between SysPal and TinPal has been presented. To address this gap, we conduct a new user study with 147 participants and collect patterns on three SysPal interfaces: 1-dot, 2-dot, and 3-dot. We compare SysPal and TinPal patterns using a range of security and usability metrics, including pattern length, stroke length, guessability, recall time, and login attempts. Overall, we found that patterns created on TinPal were significantly longer and offered more resistance to guessing attacks.
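The 389,112 figure can be reproduced with a short search over the 3X3 grid, counting patterns of length 4 to 9 under the Android rule that a stroke may only pass over a dot that has already been selected; a minimal sketch:

```python
# Count all valid Android 3x3 unlock patterns (length 4 to 9).
SKIP = {  # pairs of dots with a dot directly between them
    (0, 2): 1, (3, 5): 4, (6, 8): 7,        # horizontal
    (0, 6): 3, (1, 7): 4, (2, 8): 5,        # vertical
    (0, 8): 4, (2, 6): 4,                   # diagonal
}
SKIP.update({(b, a): m for (a, b), m in list(SKIP.items())})

def count(path, visited):
    total = 1 if len(path) >= 4 else 0
    for nxt in range(9):
        if nxt in visited:
            continue
        mid = SKIP.get((path[-1], nxt))
        if mid is not None and mid not in visited:
            continue  # would jump over an unused dot: not allowed
        total += count(path + [nxt], visited | {nxt})
    return total

print(sum(count([start], {start}) for start in range(9)))  # 389112
```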

Poster: Framework for Semi-Private Function Evaluation with Application to Secure Insurance Rate Calculation

  • Daniel Günther
  • Ágnes Kiss
  • Lukas Scheidel
  • Thomas Schneider

Private Function Evaluation (PFE) allows two parties to jointly compute a private function provided by one party on the secret input of the other party. In many applications, however, it is not necessary to hide the whole function; this setting is called Semi-Private Function Evaluation (SPFE). In this work, we develop a framework for SPFE that allows splitting a function into public and private parts. We show the practicality of SPFE in a real-world scenario by developing a car insurance application that computes user-specific tariffs. We evaluate the performance of our SPFE framework on this concrete example, which results in a circuit of 377,032 AND gates and improves over PFE by a factor of 9x.

Poster: Deployment-quality and Accessible Solutions for Cryptography Code Development

  • Sazzadur Rahaman
  • Ya Xiao
  • Sharmin Afrose
  • Ke Tian
  • Miles Frantz
  • Na Meng
  • Barton P. Miller
  • Fahad Shaon
  • Murat Kantarcioglu
  • Danfeng (Daphne) Yao

Cryptographic API misuses seriously threaten software security. Automatic screening of cryptographic misuse vulnerabilities has been a popular and important line of research over the years. However, the vision of producing a scalable detection tool that developers can routinely use to screen millions of lines of code has not been achieved yet. Our main technical goal is to attain a high-precision and high-throughput approach based on specialized program analysis. Specifically, we design inter-procedural program slicing on top of a new on-demand flow-, context- and field-sensitive data flow analysis. Our current prototype, named CryptoGuard, can detect a wide range of Java cryptographic API misuses with a precision of 98.61% when evaluated on 46 complex Apache Software Foundation projects (including Spark, Ranger, and Ofbiz). Our evaluation on 6,181 Android apps also generated many security insights. We created a comprehensive benchmark named CryptoAPI-Bench with 40 basic unit cases and 131 advanced unit cases for in-depth comparison with leading solutions (e.g., SpotBugs, CrySL, Coverity). To make CryptoGuard widely accessible, we are in the process of integrating CryptoGuard with the Software Assurance Marketplace (SWAMP), a popular no-cost service for continuous software assurance and static code analysis.

Medical Protocol Security: DICOM Vulnerability Mining Based on Fuzzing Technology

  • Zhiqiang Wang
  • Quanqi Li
  • Yazhe Wang
  • Biao Liu
  • Jianyi Zhang
  • Qixu Liu

DICOM is an international standard for medical images and related information, and defines a medical image format that can be used for data exchange. The protocol is widely used in medical fields such as radiology and cardiovascular imaging. However, since DICOM libraries give little consideration to security in their protocol implementations, they carry a large number of security risks. Targeting the security of DICOM libraries, this paper studies vulnerability mining for open-source DICOM libraries, proposes a vulnerability mining framework based on fuzzing, and implements a prototype system named DICOM-Fuzzer, which includes initialization, test case generation, automatic testing, exception monitoring, and other modules. Finally, we tested the open-source library DCMTK and found that a data overflow occurs when the content of a received file exceeds 7,080 lines, and that this vulnerability causes the PACS system to deny service. In conclusion, the DICOM protocol does carry risks, and its information security needs to be further improved.

Poster: A Proof-of-Stake (PoS) Blockchain Protocol using Fair and Dynamic Sharding Management

  • Daehwa Rayer Lee
  • Yunhee Jang
  • Hyoungshick Kim

Sharding-based consensus protocols were introduced to enable the parallelization of consensus work and storage in blockchain systems. However, existing sharding-based consensus algorithms are not designed to distribute miners and transactions to shards in a fair and secure manner, which makes blockchain systems vulnerable to several attacks. To overcome these limitations of existing sharding-based consensus protocols, we present a new sharding-based Proof-of-Stake (PoS) blockchain protocol with fair and dynamic sharding management. To show the security of the proposed consensus protocol, we numerically analyze attack probabilities and find that the proposed protocol is secure when the number of shards is less than or equal to 6. Moreover, the proposed protocol is approximately 186 times more efficient than Ethereum under real parameter settings obtained from the Ethereum network.

Kerberoid: A Practical Android App Decompilation System with Multiple Decompilers

  • Heejun Jang
  • Beomjin Jin
  • Sangwon Hyun
  • Hyoungshick Kim

Decompilation is frequently used to analyze binary programs. In Android, however, decompilers perform differently on different apps due to their own characteristics; there is no single decompiler that works best in all conditions. Based on this observation, we present a practical Android app decompilation system, called Kerberoid, that automatically stitches together the results from multiple decompilers to maximize the coverage and accuracy of the decompiled code. We evaluate the performance of Kerberoid on 151 Android apps whose source code is publicly available. Kerberoid fully recovered all functions for 17% of the tested apps and achieved a similarity score over 50% for 40% of them, improvements of 7% and 9%, respectively, over the best existing decompiler.

Poster: A Reliable and Accountable Privacy-Preserving Federated Learning Framework using the Blockchain

  • Sana Awan
  • Fengjun Li
  • Bo Luo
  • Mei Liu

Federated learning (FL) is promising for collaborative learning applications that involve large datasets, massively distributed data owners, and unreliable network connectivity. To protect data privacy, existing FL approaches adopt (k,n)-threshold secret sharing schemes to enable secure multiparty computation in the exchange of local model updates, which handles random client dropouts at the cost of increased data size. Because these approaches rely on a semi-honest assumption for clients, they are vulnerable to malicious clients. In this work, we propose a blockchain-based privacy-preserving federated learning (BC-based PPFL) framework, which leverages the immutability and decentralized trust properties of blockchain to provide provenance of model updates. Our proof-of-concept implementation of BC-based PPFL demonstrates that it is practical for secure aggregation of local model updates in the federated setting.

Poster: Attacking Malware Classifiers by Crafting Gradient-Attacks that Preserve Functionality

  • Raphael Labaca-Castro
  • Battista Biggio
  • Gabi Dreo Rodosek

Machine learning has proved to be a promising technology for determining whether a piece of software is malicious or benign. However, the accuracy of this approach sometimes comes at the expense of its robustness, and probing these systems with adversarial examples is not always a priority. In this work, we present a gradient-based approach that carefully generates valid executable malicious files that are classified as benign by state-of-the-art detectors. Initial results demonstrate that our approach can automatically find optimal adversarial examples more efficiently, which can provide good support for building more robust models in the future.

simFIDO: FIDO2 User Authentication with simTPM

  • Dhiman Chakraborty
  • Sven Bugiel

WebAuthn, as part of FIDO2, is a new standard for two-factor and even password-less user authentication to web services. Leading browsers, like Google Chrome, Microsoft Edge, and Mozilla Firefox, support the WebAuthn API. Unfortunately, the availability of hardware authenticators that support FIDO2 authentication is still focused heavily on desktop computers, while for mobile devices, only a limited choice of suitable authenticators is available to users (few roaming authenticators with wireless connectivity and even fewer built-in platform authenticators on mobile devices). This creates a void for users, in particular users of older device generations that lack platform authenticators and the right connectivity, to authenticate themselves with WebAuthn to web services. In this poster, we present the idea of simFIDO, a FIDO2 setup using the recently developed simTPM as a (platform) authenticator for mobile devices and even as a roaming authenticator offered by mobile devices to connected computers. The movability of simTPM’s key storage simplifies credential portability between devices for users. In particular, a seamless integration of simTPM with non-mobile devices through phones will help to create a kind of universal authentication setup using FIDO2. Although we present the concrete design and implementation of a SIM card-based FIDO2 authenticator, we hope this poster will contribute to the discussion about how and in which form hardware authenticators can be made available to users.

pFilter: Retrofitting Legacy Applications for Data Privacy

  • Manish Shukla
  • Kumar Vidhani
  • Gangadhara Sirigireddy
  • Vijayanand Banahatti
  • Sachin Lodha

Enterprises need to process customer data to provide tailored services; however, the data often includes sensitive and personally identifiable information. This leads to a difficult situation in which the enterprise has to balance the necessity of processing sensitive data with the requirement to safeguard its privacy. The problem is more prominent in legacy applications with almost no privacy controls in place. A well-studied technique to retrofit legacy applications is to mask sensitive content before it is rendered on the screen using path-based methods. In this work, we show the gap in the existing state of the art and describe a dynamic system that uses context to perform locality-based searching and masking of sensitive content.

Poster: Towards a Framework for Assessing Vulnerabilities of Brainwave Authentication Systems

  • Karen Becker
  • Patricia Arias-Cabarcos
  • Thilo Habrich
  • Christian Becker

In the quest to devise new alternatives to password-based authentication, behavioral biometrics have become more and more appealing due to the improved usability that comes with their unobtrusiveness. One such biometric is brainwaves, which can nowadays be easily measured and used to prove a person’s identity; recent advances in brain-computer interfaces make their use for authentication feasible. Given the potential for this technology to be adopted in the near future, it is paramount to analyze its security implications. This work presents a comprehensive framework for assessing the vulnerabilities of brainwave authentication systems, incorporating new attack vectors that target specific features of brain biometrics. Resting on this theoretical groundwork, we analyze the existing literature on attacks and countermeasures, identifying gaps and providing a foundation for future research. Furthermore, we evaluate a subset of attacks identified through the framework and report our preliminary results.

Poster: Network Message Field Type Recognition

  • Stephan Kleber
  • Frank Kargl

Existing approaches to reverse engineering network protocols from traffic traces lack comprehensive methods to determine the data type, e.g., float, timestamp, or address, of segments in messages of binary protocols. We propose a novel method for analyzing unknown protocol messages that reveals the data types contained in these messages. To this end, we split messages into segments of bytes and interpret these as vectors of byte values. Based on this vector interpretation, we can determine similarities and characteristics of specific data types. These can be used to cluster segments of the same type and to identify the data type of segments for previously trained data types. First evaluations of different applications of our method show promising results, up to a data-type recognition precision of 100%.
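As a basic illustration of the vector interpretation (the actual segmentation and similarity measures in this work are more refined), one can split messages into fixed-size byte segments, treat each segment as a numeric vector, and compare segments by cosine similarity:

```python
# Toy segment-as-vector comparison between two hypothetical binary messages.
import numpy as np

def segments(message: bytes, size: int = 4):
    return [np.frombuffer(message[i:i + size], dtype=np.uint8).astype(float)
            for i in range(0, len(message) - size + 1, size)]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Two 12-byte example messages: a counter, an IPv4 address, then ASCII text.
msg1 = bytes.fromhex("0000012cc0a8000161626364")
msg2 = bytes.fromhex("0000012dc0a8000265666768")
for s1, s2 in zip(segments(msg1), segments(msg2)):
    print(s1.astype(int), s2.astype(int), round(cosine(s1, s2), 3))
```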

Poster: Towards a Data Centric Approach for the Design and Verification of Cryptographic Protocols

  • Luca Arnaboldi
  • Roberto Metere

We propose MetaCP, a Meta Cryptography Protocol verification tool: an automated tool that simplifies the design of security protocols through a graphical interface. The graphical interface can be seen as a modern editor for a non-relational database whose data are protocols. Protocol information is stored in XML with a fixed format and syntax that aims to contain all the information required to specify any kind of protocol. This XML can be seen as an almost semanticless language, where different plugins confer strict semantics by modelling the protocol into a variety of back-end verification languages. In this paper, we showcase the effectiveness of this novel approach by demonstrating how easily MetaCP takes a protocol from graphical design to formal verification using a Tamarin prover plugin. Whilst similar approaches have been proposed in the past, most famously the AVISPA Tool, no previous approach provides such a small learning curve and ease of use, even for non-security professionals, combined with the flexibility to integrate with state-of-the-art verification tools.

ÆGIS: Smart Shielding of Smart Contracts

  • Christof Ferreira Torres
  • Mathis Baden
  • Robert Norvill
  • Hugo Jonker

In recent years, smart contracts have suffered major exploits, losing millions of dollars. Unlike traditional programs, smart contracts cannot be updated once deployed. Though various tools have been proposed to detect vulnerable smart contracts, they all fail to protect contracts that have already been deployed on the blockchain. Moreover, they focus on vulnerabilities but do not address scams (e.g., honeypots). In this work, we introduce ÆGIS, a tool that shields smart contracts and users on the blockchain from being exploited. To this end, ÆGIS reverts transactions in real time based on pattern matching. These patterns encode the detection of malicious transactions that trigger exploits or scams. New patterns are voted upon and stored via a smart contract, thus leveraging the tamper-resistance and transparency provided by the blockchain. By allowing its protection to be updated, the smart contract acts as a smart shield.

Nickel to Lego: Using Foolgle to Create Adversarial Examples to Fool Google Cloud Speech-to-Text API

  • Joon Kuy Han
  • Hyoungshick Kim
  • Simon S. Woo

Many companies offer automatic speech recognition or Speech-to-Text APIs for use in diverse applications. However, audio classification algorithms trained with deep neural networks (DNNs) can sometimes misclassify adversarial examples, posing a significant threat to critical applications. In this paper, we present a novel way to create adversarial audio examples using a genetic algorithm. Our algorithm creates adversarial examples by iteratively adding perturbations to the original audio signal. Unlike most white-box approaches to generating adversarial examples, our approach requires neither knowledge of the target DNN’s model and parameters (black-box) nor the heavy computational power of GPU resources. To show the feasibility of the proposed idea, we implement a tool called Foolgle that uses a genetic algorithm to perform untargeted attacks and create adversarial audio examples, and we evaluate them against the state-of-the-art Google Cloud Speech-to-Text API. Our preliminary experimental results show that Foolgle deceives the API with a success probability of 86%.
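A sketch of a black-box, untargeted genetic attack in this spirit is shown below; `transcribe` is a placeholder for the target speech-to-text API, and the population size, noise scale, and number of generations are illustrative assumptions, not Foolgle’s actual parameters.

```python
# Black-box genetic search for an untargeted adversarial audio example.
import difflib
import numpy as np

def untargeted_genetic_attack(audio, transcribe, pop_size=20, generations=50,
                              noise_scale=0.002, elite=5):
    original_text = transcribe(audio)

    def fitness(candidate):
        # Higher score = transcription drifts further from the original.
        # (A real implementation would cache scores to limit API queries.)
        return 1.0 - difflib.SequenceMatcher(
            None, original_text, transcribe(candidate)).ratio()

    population = [audio + np.random.normal(0, noise_scale, audio.shape)
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:elite]
        children = []
        while len(children) < pop_size - elite:
            a, b = (parents[i] for i in np.random.choice(elite, 2))
            mask = np.random.rand(*audio.shape) < 0.5               # uniform crossover
            child = np.where(mask, a, b)
            child += np.random.normal(0, noise_scale, audio.shape)  # mutation
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return best, transcribe(best)
```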

Poster: Using Generative Adversarial Networks for Secure Pseudorandom Number Generation

  • Rajvardhan Oak
  • Chaitanya Rahalkar
  • Dhaval Gujar

Generation of secure random numbers has always been a challenging issue in the design and development of secure computer systems. Random numbers have important applications in cryptography, where the security of a scheme relies upon the random nature of the keys. It is not practically possible to achieve true randomness in a machine, and hence we rely upon Pseudo Random Number Generators (PRNGs) to produce near-true randomness. A PRNG uses a mathematical function that relies upon a seed (a preset value required by the function to generate values) and generates numbers that satisfy certain tests for randomness and appear random to a user with no knowledge of the generator function. These pseudorandom functions have drawbacks because they are derived from a mathematical function. Generating random numbers that can never be predicted by any observer requires a causally non-deterministic process, where events are not fully determined by prior states; due to the physical impossibility of acquiring sufficient information to predict the outcome of such an event, its outcomes are guaranteed to be random to all. Various methods have been employed over the years to generate pseudorandomness, using mathematical functions, the user’s keyboard typing latency, network latency, and memory latency as sources of randomness. In this work, we propose a new way of generating pseudorandom numbers using generative adversarial networks. We demonstrate that a GAN can act as a Cryptographically Secure Pseudorandom Number Generator (CSPRNG), passing 97% of the National Institute of Standards and Technology (NIST) tests.
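For intuition on the evaluation, the simplest of the NIST SP 800-22 checks, the frequency (monobit) test, can be applied to any bitstream produced by a generator under test; the GAN construction itself is outside the scope of this sketch, so Python’s own PRNG stands in for the generator here.

```python
# NIST SP 800-22 frequency (monobit) test applied to a bitstream.
import math
import random

def monobit_pvalue(bits):
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for 1-bits, -1 for 0-bits
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# Example with Python's PRNG standing in for the generator under test.
stream = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_pvalue(stream) >= 0.01)   # True means this single test is passed
```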

Poster: Proofs of Retrievability with Low Server Storage

  • Michael Hanling
  • Gaspard Anthoine
  • Jean-Guillaume Dumas
  • Aude Maignan
  • Clément Pernet
  • Daniel S. Roche

Proof of Retrievability (PoR) and Provable Data Possession (PDP) schemes have been proposed to ensure the integrity of stored data on untrusted servers. A successful PoR audit ensures, with high probability, that every piece of stored data is recoverable by the server. Most PoR schemes proposed have focused on bandwidth and computation cost, but in some deployment scenarios the size of remote storage can be the most expensive factor. We propose a simple PoR scheme which is a variant on some existing PDP work. Compared to existing audit routines, the server computation cost and bandwidth are higher, but the server storage cost is minimal. Our preliminary work indicates that deploying this scheme may be less costly in commercial cloud settings, depending on the cost structure and frequency of audits.
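For intuition only, the sketch below shows a toy PDP-style audit, which is not the poster’s scheme: the client keeps a secret key, stores MAC tags alongside the data blocks, and spot-checks random blocks so that a server missing a large fraction of the data is caught with high probability.

```python
# Toy PDP-style spot-check audit (for intuition; not the proposed scheme).
import hmac, hashlib, os, random

KEY = os.urandom(32)   # kept by the client, never given to the server

def tag(index: int, block: bytes) -> bytes:
    return hmac.new(KEY, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

# "Outsource" the file: the server holds blocks together with their tags.
blocks = [os.urandom(4096) for _ in range(1000)]
server_store = [(b, tag(i, b)) for i, b in enumerate(blocks)]

def audit(num_challenges: int = 30) -> bool:
    for i in random.sample(range(len(server_store)), num_challenges):
        block, t = server_store[i]          # server's response to challenge i
        if not hmac.compare_digest(t, tag(i, block)):
            return False
    return True

print(audit())  # True while the server stores the data faithfully
```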

Data Quality for Security Challenges: Case Studies of Phishing, Malware and Intrusion Detection Datasets

  • Rakesh M. Verma
  • Victor Zeng
  • Houtan Faridi

Techniques from data science are increasingly being applied by researchers to security challenges. However, challenges unique to the security domain necessitate painstaking care for the models to be valid and robust. In this paper, we explain key dimensions of data quality relevant for security, illustrate them with several popular datasets for phishing, intrusion detection and malware, indicate operational methods for assuring data quality and seek to inspire the audience to generate high quality datasets for security challenges.

Poster: Finding JavaScript Name Conflicts on the Web

  • Mingxue Zhang
  • Wei Meng
  • Yi Wang

Including JavaScript code from many different hosts is a popular practice in developing web applications. For example, to include a social plugin like the Facebook Like button, a web developer needs to only include a script from facebook.net in her/his web page. However, in a web browser, all the identifiers (i.e., variable names and function names) in scripts loaded in the same frame share a single global namespace. Therefore, a script can overwrite any of the global variables and/or global functions defined in another script, causing unexpected behavior. In this work, we develop a browser-based dynamic analysis framework that monitors and records any writes to JavaScript global variables and global functions. Our tool is able to cover all the code executed at run time. We detected 778 conflicts across the Alexa top 1K websites. Our results show that global name conflicts can indeed expose web applications to security risks.

Poster: Towards Robust Open-World Detection of Deepfakes

  • Saniat Javid Sohrawardi
  • Akash Chintha
  • Bao Thai
  • Sovantharith Seng
  • Andrea Hickerson
  • Raymond Ptucha
  • Matthew Wright

There is heightened concern over deliberately inaccurate news. Recently, so-called deepfake videos and images that are modified by or generated by artificial intelligence techniques have become more realistic and easier to create. These techniques could be used to create fake announcements from public figures or videos of events that did not happen, misleading mass audiences in dangerous ways. Although some recent research has examined accurate detection of deepfakes, those methodologies do not generalize well to real-world scenarios and are not available to the public in a usable form. In this project, we propose a system that will robustly and efficiently enable users to determine whether or not a video posted online is a deepfake. We approach the problem from the journalists’ perspective and work towards developing a tool to fit seamlessly into their workflow. Results demonstrate accurate detection in both within-dataset and mismatched-dataset evaluations.

Poster: Understanding User’s Decision to Interact with Potential Phishing Posts on Facebook using a Vignette Study

  • Sovantharith Seng
  • Huzeyfe Kocabas
  • Mahdi Nasrullah Al-Ameen
  • Matthew Wright

Facebook remains the largest social media platform on the Internet with over one billion active monthly users. A variety of personal and sensitive data is shared on the platform, which makes it a prime target for attackers. Increasingly, we see phishing attacks that take advantage of users’ lack of security knowledge, deceiving victims by using fake or compromised accounts to share malicious posts. These attacks may slip undetected by the Facebook defense system, exposing users to potentially be phished or have their devices infected with drive-by downloads and malware. Only a few studies have been conducted to date to understand how users interact with attacks like this in Facebook. In our prior work, we conducted a study to address this challenge using a simulated interface and think-aloud protocol. In this study, we aim to make further progress in understanding the impact of different factors on users’ clicking decision in social media through a vignette study that encourages participants to think about realistic scenarios that they might face.

Poster: Adversarial Examples for Hate Speech Classifiers

  • Rajvardhan Oak

With the advent of the Internet, social media platforms have become an increasingly popular medium of communication for people. Platforms like Twitter and Quora allow people to express their opinions on a large scale. These platforms are, however, plagued by hate speech and toxic content, which is generally sexist, homophobic, or racist. Automatic text classification can filter out toxic content to some extent. In this paper, we discuss adversarial attacks on hate speech classifiers. We demonstrate that by changing the text slightly, a classifier can be fooled into misclassifying a toxic comment as acceptable. We attack hate speech classifiers with known attacks and also introduce four new attacks. We find that our method can degrade the performance of a Random Forest classifier by 20%. We hope that our work sheds light on the vulnerabilities of text classifiers and opens doors for further research on this topic.
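The sketch below shows the general flavor of such character-level perturbations (not the paper’s four specific attacks): small edits that keep a comment readable to humans but change the token stream a classifier sees.

```python
# Toy character-level text perturbation (illustrative only).
import random

LEET = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def perturb(text: str, rate: float = 0.3, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in LEET and rng.random() < rate:
            out.append(LEET[ch.lower()])          # leetspeak substitution
        elif ch == " " and rng.random() < rate:
            out.append("  ")                      # whitespace padding
        else:
            out.append(ch)
    return "".join(out)

print(perturb("this is a toxic comment"))
```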

Poster: Evaluating Security Metrics for Website Fingerprinting

  • Nate Mathews
  • Mohammad Saidur Rahman
  • Matthew Wright

The website fingerprinting attack allows a low-resource attacker to compromise the privacy guarantees provided by privacy enhancing tools such as Tor. In response, researchers have proposed defenses aimed at confusing the classification tools used by attackers. As new, more powerful attacks are frequently developed, raw attack accuracy has proven inadequate as the sole metric used to evaluate these defenses. In response, two security metrics have been proposed that allow for evaluating defenses based on hand-crafted features often used in attacks. Recent state-of-the-art attacks, however, use deep learning models capable of automatically learning abstract feature representations, and thus the proposed metrics fall short once again. In this study we examine two security metrics and (1) show how these methods can be extended to evaluate deep learning-based website fingerprinting attacks, and (2) compare the security metrics and identify their shortcomings.

Poster: Video Fingerprinting in Tor

  • Mohammad Saidur Rahman
  • Nate Mathews
  • Matthew Wright

Over 8 million users rely on the Tor network each day to protect their anonymity online. Unfortunately, Tor has been shown to be vulnerable to the website fingerprinting attack, which allows an attacker to deduce the website a user is visiting based on patterns in their traffic. The state-of-the-art attacks leverage deep learning to achieve high classification accuracy using raw packet information. Work thus far, however, has examined only one type of media delivered over the Tor network: web pages, and mostly just home pages of sites. In this work, we instead investigate the fingerprintability of video content served over Tor. We collected a large new dataset of network traces for 50 YouTube videos of similar length. Our preliminary experiments utilizing a convolutional neural network model proposed in prior work have yielded promising classification results, achieving up to 55% accuracy. This shows the potential to unmask the individual videos that users are viewing over Tor, creating further privacy challenges to consider when defending against website fingerprinting attacks.

Poster: A First Look at the Privacy Risks of Voice Assistant Apps

  • Atsuko Natatsuka
  • Ryo Iijima
  • Takuya Watanabe
  • Mitsuaki Akiyama
  • Tetsuya Sakai
  • Tatsuya Mori

We conduct the first study analyzing voice assistant (VA) apps. We first collect the metadata of VA apps from the VA app directory and analyze it. Next, we invoke VA apps with the corresponding voice commands and examine how they identify users by analyzing their responses. We found that roughly half of the VA apps perform user identification by some means. We also found that several apps aim to acquire personal information such as birth date, age, or blood type through voice conversations. As such data will be stored in the cloud, a mechanism is needed to ensure that end-users can check and control the data in a usable way.

Poster: Directed Hybrid Fuzzing on Binary Code

  • Juhwan Kim
  • Joobeom Yun

Hybrid fuzzers combine fuzzing and concolic execution in the hope that the fuzzer will quickly explore input spaces while the concolic execution solves complex path conditions. However, existing hybrid fuzzers such as Driller cannot be effectively directed, for instance, towards unsafe system calls or suspicious locations, or towards functions in the call stack of a reported vulnerability that we wish to reproduce. In this poster, we propose DrillerGO, a directed hybrid fuzzing system, to mitigate this problem. It mainly consists of a static analysis module and a dynamic analysis module. In the static analysis, it searches for suspicious API call strings in the recovered control flow graph (CFG). After targeting suspicious API call sites, it runs concolic execution with path guiding. The path guiding is helped by backward pathfinding, a novel technique to find paths backward from the target to the start of main(). We also show experimentally that DrillerGO can find crashes faster than Driller.

Poster: On the Application of NLP to Discover Relationships between Malicious Network Entities

  • Giuseppe Siracusano
  • Martino Trevisan
  • Roberto Gonzalez
  • Roberto Bifulco

The increase in network traffic volumes challenges the scalability of security analysis tools. In this paper, we present NetLearn, a solution to identify potentially malicious network entities from large amounts of network traffic data. NetLearn applies recently developed natural language processing algorithms to discover security-relevant relationships between the observed network entities, e.g., domain names and IP addresses, without requiring external sources of information for its analysis.
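The poster does not name the specific NLP algorithm; one common flavor of the idea, sketched below with made-up entities, is to treat each client’s sequence of contacted entities (domains, IP addresses) as a “sentence” and relate entities by how often they co-occur within a window.

```python
# Co-occurrence of network entities within observed sequences (illustrative).
from collections import Counter

def cooccurrence(sentences, window=3):
    counts = Counter()
    for seq in sentences:
        for i in range(len(seq)):
            for j in range(i + 1, min(i + window, len(seq))):
                counts[frozenset((seq[i], seq[j]))] += 1
    return counts

# Two hypothetical per-client entity sequences.
flows = [
    ["evil-cdn.example", "198.51.100.7", "tracker.example"],
    ["evil-cdn.example", "198.51.100.7", "update.example"],
]
for pair, n in cooccurrence(flows).most_common(3):
    print(sorted(pair), n)
```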

Poster: SDN-based System to Filter Out DRDoS Amplification Traffic in ISP Networks

  • Priyanka Dodia
  • Yury Zhauniarovich

Distributed Reflected Denial of Service (DRDoS) attacks remain one of the most popular techniques to drain a victim’s network bandwidth. Although their goal is to disrupt the network services of a particular victim, these attacks indirectly affect a large number of benign Internet citizens. In particular, the owners of services vulnerable to amplification have to waste their resources processing incoming requests. Moreover, the voluminous attack traffic generated by the amplification wastes Internet Service Provider (ISP) resources, bandwidth, and money, causing Quality of Service (QoS) degradation and subscription fee increases for customers. In this work we demonstrate a Software Defined Networking (SDN) based system to filter out garbage traffic from an ISP network. We employ a special type of honeypot developed to collect information about ongoing DRDoS attacks. The firewall rules derived from this data are used to block incoming amplification requests from reaching amplifiers located within the provider network, rescuing vulnerable services from being abused. In turn, this prevents garbage traffic from being generated, saving the ISP money and improving QoS.

Poster: GRANDPA Finality Gadget

  • Alistair Stewart

We present GRANDPA, a finality gadget: a protocol that can be used to provide provable finality for a blockchain. It works in addition to a block production mechanism and a chain selection rule, which on their own would only provide eventual consensus. The design of GRANDPA aims to separate these two protocols as cleanly as possible and to obtain formal guarantees for the finality gadget. GRANDPA attempts to finalise the prefix of the chain that $2/3$ of voters agree on, whether that is one block or thousands. It has been implemented by Parity Technologies and deployed on large testnets for the Polkadot protocol. We also present the properties GRANDPA achieves and review its advantages in flexibility over comparable protocols.

Poster: Towards Characterizing and Limiting Information Exposure in DNN Layers

  • Fan Mo
  • Ali Shahin Shamsabadi
  • Kleomenis Katevas
  • Andrea Cavallaro
  • Hamed Haddadi

Pre-trained Deep Neural Network (DNN) models are increasingly used in smartphones and other user devices to enable prediction services, leading to potential disclosures of (sensitive) information from training data captured inside these models. Based on the concept of generalization error, we propose a framework to measure the amount of sensitive information memorized in each layer of a DNN. Our results show that, when considered individually, the last layers encode a larger amount of information from the training data compared to the first layers. We find that the same DNN architecture trained with different datasets has similar exposure per layer. We evaluate an architecture that protects the most sensitive layers within an on-device Trusted Execution Environment (TEE) against potential white-box membership inference attacks without significant computational overhead.
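For reference, the generalization error that this framework builds on is, in standard notation (assumed here, not taken from the poster), the gap between expected and empirical loss of the trained model $f$:

$$\mathrm{gen}(f) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(f(x),y)\big] - \frac{1}{n}\sum_{i=1}^{n}\ell(f(x_i),y_i),$$

where $\mathcal{D}$ is the data distribution, $\ell$ a loss function, and $(x_i,y_i)$ the $n$ training examples; a large per-layer contribution to this gap is read as a sign of memorized training information.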

Poster: Recovering the Input of Neural Networks via Single Shot Side-channel Attacks

  • Lejla Batina
  • Shivam Bhasin
  • Dirmanto Jap
  • Stjepan Picek

The interplay between machine learning and security is becoming more prominent. New applications using machine learning also bring new security risks. Here, we show it is possible to reverse-engineer the inputs to a neural network with only a single-shot side-channel measurement assuming the attacker knows the neural network architecture being used.

Poster: Challenges of Accurately Measuring Churn in P2P Botnets

  • Leon Böck
  • Shankar Karuppayah
  • Kory Fong
  • Max Mühlhäuser
  • Emmanouil Vasilomanolakis

Peer-to-peer (P2P) botnets are known to be highly resilient to takedown attempts. Such attempts are usually carried out by exploiting vulnerabilities in the bots’ communication protocol. However, a failed takedown attempt may alert botmasters and allow them to patch their vulnerabilities to thwart subsequent attempts. As a promising solution, takedowns could be evaluated in simulation environments before attempting them in the real world. To ensure such simulations are as realistic as possible, the churn behavior of botnets must be understood and measured accurately. This paper discusses potential pitfalls when measuring churn in live P2P botnets and proposes a botnet monitoring framework for uniform data collection and churn measurement for P2P botnets.

Poster: TCLP: Enforcing Least Privileges to Prevent Containers from Kernel Vulnerabilities

  • Suyeol Lee
  • Junsik Seo
  • Jaehyun Nam
  • Seungwon Shin

While containerization has emerged as a lightweight approach to package, deploy, and run legacy applications in a resource-efficient manner, the shared kernel-resource model used by containers introduces critical security concerns. Specifically, the abuse of system calls by a compromised container can trigger security vulnerabilities in the host kernel. Unfortunately, even though existing solutions provide powerful protection mechanisms against such issues, how to define the capabilities of containers is still left to operators. In this work, we thus introduce TCLP, a dynamic analysis system that helps operators configure the least capabilities of containers to protect not only themselves but also the host. TCLP monitors the system calls triggered by containers at run time and derives the least set of capabilities required to run the containers from the collected system calls. Finally, operators configure the minimal capabilities discovered by TCLP for their containers, reducing the risk of kernel vulnerabilities.
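The post-processing step can be pictured as mapping an observed system-call trace to the capabilities it implies. The sketch below uses a small, hand-picked subset of the syscall-to-capability relationships documented in capabilities(7); TCLP’s actual analysis works on live traces and a far more complete mapping.

```python
# Map an observed syscall trace to a candidate capability whitelist.
SYSCALL_TO_CAP = {
    # Partial mapping based on capabilities(7); illustrative only.
    "chown": "CAP_CHOWN",
    "setuid": "CAP_SETUID",
    "setgid": "CAP_SETGID",
    "mknod": "CAP_MKNOD",
    "ptrace": "CAP_SYS_PTRACE",
    "mount": "CAP_SYS_ADMIN",
    "umount2": "CAP_SYS_ADMIN",
    "settimeofday": "CAP_SYS_TIME",
    "reboot": "CAP_SYS_BOOT",
}

def least_capabilities(observed_syscalls):
    """Return the minimal capability set implied by the observed syscalls."""
    return sorted({SYSCALL_TO_CAP[s] for s in observed_syscalls if s in SYSCALL_TO_CAP})

# Example: a trace collected while exercising the containerized workload.
trace = ["openat", "read", "write", "chown", "mount", "clone"]
print(least_capabilities(trace))   # ['CAP_CHOWN', 'CAP_SYS_ADMIN']
```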

Poster: Let History not Repeat Itself (this Time) — Tackling WebAuthn Developer Issues Early On

  • Aftab Alam
  • Katharina Krombholz
  • Sven Bugiel

The FIDO2 open authentication standard, developed jointly by the FIDO Alliance and the W3C, provides end-users with the means to use public-key cryptography in addition to or even instead of text-based passwords for authentication on the web. Its WebAuthn protocol has been adopted by all major browser vendors and recently also by major service providers (e.g., Google, GitHub, Dropbox, Microsoft, and others). Thus, FIDO2 is a very strong contender for finally tackling the problem of insecure user authentication on the web. However, there remain a number of open questions to be answered for FIDO2 to succeed as expected. In this poster, we focus specifically on the critical question of how well web-service developers can securely roll out WebAuthn in their own services and which issues have to be tackled to help developers in this task. The past has unfortunately shown that software developers struggle with correctly implementing or using security-critical APIs, such as TLS/SSL, password storage, or cryptographic APIs. We report here on ongoing work that investigates potential problem areas and concrete pitfalls for adopters of WebAuthn and tries to lay out a plan of how our community can help developers. We believe that raising awareness for foreseeable developer problems and calling for action to support developers early on is critical on the path for establishing FIDO2 as a de-facto authentication solution.

Poster: When Adversary Becomes the Guardian — Towards Side-channel Security With Adversarial Attacks

  • Stjepan Picek
  • Dirmanto Jap
  • Shivam Bhasin

Machine learning algorithms fall prey to adversarial examples. As profiling side-channel attacks are rapidly adopting machine learning-based approaches that can defeat even commonly used side-channel countermeasures, we investigate the potential of adversarial examples as a defense mechanism. We show that adversarial examples can serve as a countermeasure against machine learning-based side-channel attacks. Further, we exploit the transferability property to show that a common adversarial example can act as a countermeasure against a range of machine learning-based side-channel classifiers.
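
As a hedged sketch of the underlying idea, the snippet below applies an FGSM-style perturbation to a leakage trace so that it moves up a classifier’s loss surface; the gradient function, epsilon value, and trace representation are assumptions, not the authors’ construction.

```python
# Hedged sketch of the defensive use of adversarial examples: add a small
# FGSM-style perturbation to a leakage trace so that an ML-based side-channel
# classifier misclassifies it. `grad_fn` (gradient of the classifier's loss
# w.r.t. the trace) and `epsilon` are assumptions, not the authors' construction.
import numpy as np

def adversarial_trace(trace, true_label, grad_fn, epsilon=0.01):
    """Return a perturbed copy of `trace` that moves up the classifier's loss surface."""
    grad = grad_fn(trace, true_label)        # dLoss/dTrace, same shape as the trace
    return trace + epsilon * np.sign(grad)   # FGSM: a small step in the sign of the gradient
```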

Poster: Towards Automated Quantitative Analysis and Forecasting of Vulnerability Discoveries in Debian GNU/Linux

  • Nikolaos Alexopoulos
  • Rolf Egert
  • Tim Grube
  • Max Mühlhäuser

Quantitative analysis and forecasting of software vulnerability discoveries is important for patching cost and time estimation, and as input to security metrics and risk assessment methodologies. However, as of now, quantitative studies (a) require considerable manual effort, (b) make use of noisy datasets, and (c) are especially challenging to reproduce. In this poster abstract we describe our ongoing work towards quantitative analysis of vulnerabilities in Debian GNU/Linux packages. We focus on the challenges of making the process as automated and reproducible as possible, while collecting good-quality data necessary for the analysis. We then state a number of interesting hypotheses that can be investigated, and present preliminary results.
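
Purely as an illustration of the kind of quantitative forecasting this enables (and not the authors’ methodology), the toy sketch below fits a linear trend to monthly discovery counts and extrapolates a few months ahead.

```python
# Toy forecasting sketch (illustrative only, not the authors' methodology):
# fit a linear trend to monthly vulnerability-discovery counts for a package
# and extrapolate a few months ahead.
import numpy as np

def forecast_discoveries(monthly_counts, horizon=6):
    """Fit y = a*t + b to historical monthly counts and project `horizon` months."""
    t = np.arange(len(monthly_counts))
    a, b = np.polyfit(t, monthly_counts, deg=1)
    future_t = np.arange(len(monthly_counts), len(monthly_counts) + horizon)
    return np.maximum(a * future_t + b, 0)   # discovery counts cannot be negative

# Example with made-up data:
print(forecast_discoveries([3, 5, 4, 6, 7, 6, 8], horizon=3))
```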

Poster: Effective Layers in Coverage Metrics for Deep Neural Networks

  • Leo Hyun Park
  • Sangjin Oh
  • Jaeuk Kim
  • Soochang Chung
  • Taekyoung Kwon

Deep neural networks (DNNs) have gained popularity as an effective machine learning approach, but their high complexity leads to a lack of model interpretability and makes the verification of deep learning difficult. Fuzzing, an automated software testing technique, has recently been applied to DNNs in an effort to address these problems, following the trend of coverage-based fuzzing. However, new coverage metrics for DNNs raise the question of which layer’s coverage should be measured. In this poster, we empirically evaluate the performance of existing coverage metrics. Through a comparative analysis of the experimental results, we identify the most effective layer for each coverage metric and discuss a future direction for DNN fuzzing.
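
To make the notion of layer-wise coverage concrete, here is a sketch of one common metric, neuron coverage, computed for a single layer; the per-input scaling and threshold are common choices from prior work, not necessarily those evaluated in the poster.

```python
# Illustrative implementation of neuron coverage for one layer: the fraction of
# neurons whose scaled activation exceeds a threshold on at least one test input.
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """activations: array of shape (num_inputs, num_neurons) for one layer."""
    a = activations.astype(float)
    mins = a.min(axis=1, keepdims=True)
    maxs = a.max(axis=1, keepdims=True)
    scaled = (a - mins) / np.maximum(maxs - mins, 1e-12)  # scale per input to [0, 1]
    return (scaled > threshold).any(axis=0).mean()        # neurons activated at least once
```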

Poster: Detecting WebAssembly-based Cryptocurrency Mining

  • Weikang Bian
  • Wei Meng
  • Yi Wang

In-browser cryptojacking is an emerging threat to web users: attackers can abuse users’ computation resources to perform cryptocurrency mining without obtaining their consent. Moreover, the new web feature WebAssembly (Wasm) enables efficient in-browser cryptocurrency mining and has been commonly used in mining applications. In this work, we use the dynamic Wasm instruction execution trace to model the behavior of different Wasm applications. We observe that cryptocurrency mining Wasm programs exhibit very different execution traces from other Wasm programs (e.g., games). Based on these findings, we propose a novel browser-based methodology to detect in-browser Wasm-based cryptojacking.
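
A hypothetical sketch of trace-based detection is shown below: it builds an instruction-frequency profile from a dynamic Wasm trace and flags traces dominated by the bitwise operations typical of hashing loops. The opcode list and threshold are illustrative, not the proposed model.

```python
# Hypothetical detector (not the proposed model): build an instruction-frequency
# profile from a dynamic Wasm execution trace and flag traces dominated by the
# bitwise operations typical of hashing loops.
from collections import Counter

CRYPTO_HINT_OPS = {"i32.rotl", "i32.rotr", "i32.xor", "i32.shl", "i32.shr_u", "i64.xor"}

def instruction_profile(trace):
    """trace: iterable of executed Wasm opcode names, e.g. ['i32.add', 'i32.xor', ...]."""
    counts = Counter(trace)
    total = sum(counts.values())
    return {op: c / total for op, c in counts.items()}

def looks_like_mining(trace, threshold=0.3):
    profile = instruction_profile(trace)
    return sum(profile.get(op, 0.0) for op in CRYPTO_HINT_OPS) > threshold
```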

Poster: Evaluating Code Coverage for System Call Fuzzers

  • Seoyoung Kim
  • Seyeon Jeong
  • Mingi Cho
  • Soochang Chung
  • Taekyoung Kwon

The OS kernel, which holds full system privileges, is an attractive target for attackers. To reduce this threat, we need to find security bugs in the kernel before attackers do, and system call fuzzing is a widely used technique for this purpose. However, many system call fuzzers have not been evaluated in terms of code coverage, which is an important indicator of fuzzing effectiveness. In this poster, we propose a methodology to evaluate the code coverage of system call fuzzers with a strategy that combines virtualization and Intel Processor Trace (PT). First, we extract all the kernel functions that can be executed by system calls. Then we run the target system call fuzzer on the guest OS and record coverage information by leveraging Intel PT. Finally, we evaluate system call fuzzers by comparing the list of functions related to system calls with the executed functions logged by Intel PT during fuzzing.
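
The final comparison step can be illustrated with a small sketch: given the set of kernel functions reachable from system calls and the set of functions observed in the decoded Intel PT log, it reports the coverage ratio and the functions never reached. The function names used here are hypothetical examples.

```python
# Small sketch of the final comparison step between statically reachable
# functions and functions actually executed during fuzzing.
def syscall_coverage(reachable_functions, executed_functions):
    """Both arguments are sets of kernel function names."""
    covered = reachable_functions & executed_functions
    return len(covered) / len(reachable_functions), sorted(reachable_functions - covered)

# Example with hypothetical function names:
ratio, missed = syscall_coverage({"do_sys_open", "ksys_read", "ksys_write"},
                                 {"do_sys_open", "ksys_read", "schedule"})
print(f"coverage: {ratio:.0%}, missed: {missed}")   # coverage: 67%, missed: ['ksys_write']
```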

SESSION: Workshop Summaries

CCSW’19 Workshop Summary: 2019 Cloud Computing Security Workshop

  • Radu Sion
  • Charalampos Papamanthou

Clouds and massive-scale computing infrastructures are starting to dominate computing and will likely continue to do so for the foreseeable future. Major cloud operators now run millions of cores hosting substantial fractions of corporate and government IT infrastructure. CCSW is the world’s premier forum bringing together researchers and practitioners in all security aspects of cloud-centric and outsourced computing. CCSW especially encourages novel paradigms and controversial ideas beyond its traditional topic list. The workshop has historically acted as a fertile ground for creative debate and interaction in security-sensitive areas of computing impacted by clouds. This year marked the 10th anniversary of CCSW. In the past decade, CCSW has had a significant impact on our research community: as of August 2019, 20% of the top 20 cited papers in the Google Scholar Metrics entry for ACM CCS (which encompasses CCSW) come from CCSW. One way to look at it is that authors are as likely, or perhaps more likely, to have a top-20 cited paper by publishing in CCSW as in CCS! This year, CCSW received 40 submissions, out of which 15 full papers (37%) and 2 blitz abstracts were accepted. CCSW Website: https://ccsw.io

CPS-SPC 2019: Fifth Workshop on Cyber-Physical Systems Security and PrivaCy

  • Nils Ole Tippenhauer
  • Avishai Wool

Cyber-Physical Systems (CPS) are becoming increasingly critical for the well-being of society (e.g., electricity generation and distribution, water treatment, implantable medical devices, etc.). While the convergence of computing, communications, and physical control in such systems provides benefits in terms of efficiency and convenience, the attack surface resulting from this convergence poses unique security and privacy challenges. These systems represent the new frontier for cyber risk. CPS-SPC, an annual forum in its fifth edition this year, aims to provide a focal point for the research community to begin addressing the security and privacy challenges of CPS in a comprehensive and multidisciplinary manner and, in tandem with other efforts, build a comprehensive research road map. Related Workshop Proceedings are available in the ACM DL at: https://dl.acm.org/citation.cfm?id=3338499

MTD 2019: The 6th ACM Workshop on Moving Target Defense

  • Zhuo Lu

The static nature of current computing systems has made them easy to attack and hard to defend, giving attackers an asymmetric advantage. The idea of moving-target defense (MTD) is to turn this asymmetry against attackers by making systems dynamic and therefore harder to explore and predict. The workshop aims to address MTD research challenges and methodologies to improve security. We have put together an exciting program, including eight papers and one invited keynote. Some text in this summary is excerpted from the summaries of previous MTD editions. Related workshop proceedings are available in the ACM DL at: https://dl.acm.org/citation.cfm?id=3338468

SSR’19: The 5th Conference on Security Standardisation Research

  • Maryam Mehrnezhad
  • Thyla van der Merwe
  • Feng Hao

The 5th Conference on Security Standardisation Research (SSR’19) takes place in London, UK, on 11 November 2019, co-located with the ACM Conference on Computer and Communications Security 2019 (CCS’19). The conference aims to provide a preferred venue for the discussion of all topics related to security standardisation, covering both theory and practice. This year’s program includes two invited keynote addresses that shed light on security standardisation from both industrial and academic perspectives, a panel discussion on blockchain standardisation, and the presentation of seven original research papers selected from twenty submissions. The SSR’19 Conference Proceedings are available in the ACM DL at: https://dl.acm.org/citation.cfm?id=3338500.

TIS’19: Theory of Implementation Security Workshop 2019

  • Begül Bilgin
  • Svetla Nikova
  • Vincent Rijmen

In this workshop, we focus on physical attacks and their countermeasures. With the advent of the Internet of Things, interest in embedded cryptographic systems and physical attacks on these systems is steadily increasing, both in academia and industry. Sophisticated security certification and evaluation methods have been established to provide assurance about security claims through independent evaluation and testing. Certification, however, has the drawback of being time-consuming and expensive. There is a need to further develop provably secure protection methods and automated verification tools, but also to improve the efficiency and quality of certification by integrating these tools and methods. All these challenges motivate even more research on the Theory of Implementation Security.

WAHC’19: 7th Workshop on Encrypted Computing & Applied Homomorphic Cryptography

  • Michael Brenner
  • Tancrède Lepoint
  • Kurt Rohloff

The 7th Workshop on Encrypted Computing & Applied Homomorphic Cryptography (WAHC) will be held in London, United Kingdom, on November 11th, 2019, co-located with the ACM Conference on Computer and Communications Security (CCS). The purpose of the WAHC 2019 workshop is to bring together professionals, researchers, and practitioners from academia, industry, and government to present, discuss, and share the latest progress in the field of encrypted computing. Encrypted computing is a subfield of computer security and applied cryptography with an interest in practical applications of homomorphic encryption, multiparty computation, functional encryption, secure function evaluation, private information retrieval, and searchable encryption. The workshop features an invited talk discussing how advanced cryptography is making its way into practice, a demonstration of a homomorphic encryption evaluation platform, and six talks on different encrypted computing topics: homomorphic encryption standardization, multiparty computation, and applications.

18th Workshop on Privacy in the Electronic Society (WPES 2019)

  • Josep Domingo-Ferrer

The 18th Workshop on Privacy in the Electronic Society (WPES 2019) was held on November 11th, 2019, in conjunction with the 26th ACM Conference on Computer and Communications Security (CCS 2019) in London, United Kingdom. The goal of WPES is to bring together privacy researchers and practitioners to discuss the privacy problems that arise in an interconnected society and solutions to those problems. The program for the workshop contains 14 full papers and 5 short papers selected from a total of 67 submissions. Specific topics covered in the program include secure computation, secure communication, mobile-device privacy, genomic data privacy, social aspects of privacy, data anonymization, and privacy-enhancing techniques for the Internet of Things, blockchain and Web.

AISec’19: 12th ACM Workshop on Artificial Intelligence and Security

  • Sadia Afroz
  • Battista Biggio
  • Nicholas Carlini
  • Yuval Elovici
  • Asaf Shabtai

Recent years have seen a dramatic increase in applications of Artificial Intelligence (AI) and Machine Learning (ML) to security and privacy problems. The analytic tools and intelligent behavior provided by these techniques make AI and ML increasingly important for autonomous real-time analysis and decision making in domains with a wealth of data or that require quick reactions to constantly changing situations. The use of learning methods in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of deep-learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. In addition, data mining and machine learning techniques create a wealth of privacy issues, due to the abundance and accessibility of data. The 12th ACM Workshop on Artificial Intelligence and Security (AISec) is one of the historical, leading venues for presenting and discussing new developments in the intersection of security and privacy with AI and ML.

ASHES 2019: 3rd Workshop on Attacks and Solutions in Hardware Security

  • Chip Hong Chang
  • Daniel E. Holcomb
  • Francesco Regazzoni
  • Ulrich Rührmair
  • Patrick Schaumont

The workshop on “Attacks and Solutions in HardwarE Security” (ASHES) welcomes any theoretical and practical work on hardware security, including attacks, solutions, countermeasures, proofs, classification, formalization, and implementations. Besides mainstream research, ASHES puts particular focus on new and emerging scenarios: this includes the Internet of Things (IoT), nuclear weapons inspections, arms control, automotive security, consumer and infrastructure security, and supply chain security, among others. ASHES also welcomes dedicated works on special-purpose hardware, such as lightweight, low-cost, and energy-efficient devices, or non-electronic security systems. The workshop hosts four different paper categories: apart from regular and short papers, this includes works that systematize and structure a certain (sub-)area (so-called “Systematization of Knowledge” (SoK) papers) and so-termed “Wild-and-Crazy” (WaC) papers, which distribute seminal ideas at an early conceptual stage. This summary gives a brief overview of the third edition of the workshop, which took place on November 15, 2019, in London, UK, as a one-day post-conference satellite workshop of ACM CCS.

1st Workshop on Cyber-Security Arms Race (CYSARM 2019)

  • Thanassis Giannetsos
  • Daniele Sgandurra

The goal of the CYSARM workshop is to foster collaboration among researchers and practitioners to discuss the various facets and trade-offs of cyber-security, in particular how new technologies and algorithms might impact the cyber-security of existing or future models and systems.

IoT S&P 2019: 2nd Workshop on the Internet of Things Security and Privacy

  • Peng Liu
  • Yuqing Zhang

The Second Workshop on Internet of Things Security and Privacy is held in London, UK, on November 15, 2019, co-located with the ACM Conference on Computer and Communications Security (CCS). The workshop aims to address the security and privacy challenges of the emerging Internet-of-Things landscape and to bring together academic and industrial researchers; to that end, we have put together an exciting program offering a mix of current and potential challenges. The workshop also features 8 papers, 2 posters, and an invited keynote.

PLAS 2019: ACM SIGSAC Workshop on Programming Languages and Analysis for Security

  • Piotr Mardziel
  • Niki Vazou

The 14th ACM SIGSAC Workshop on Programming Languages and Analysis for Security (PLAS 2019) is co-located with the 26th ACM Conference on Computer and Communications Security (ACM CCS 2019). PLAS provides a unique venue for researchers and practitioners to exchange ideas on programming language and program analysis techniques with the goal of improving the security of software systems.

PPML ’19: Privacy Preserving Machine Learning

  • Borja Balle
  • Adrià Gascón
  • Olya Ohrimenko
  • Mariana Raykova
  • Phillipp Schoppmann
  • Carmela Troncoso

The area of privacy-preserving machine learning has been of growing importance in practice, which has led to increased interest in this topic in both academia and industry. We have witnessed this through numerous papers and systems published and developed in recent years to address challenges in this area. The solutions proposed in this space leverage many different approaches and techniques from machine learning, cryptography, and security. Thus, the workshop aims to be a forum that unifies these different perspectives and starts a discussion about the relative merits of each approach. It will also serve as a venue for networking among people from different communities interested in this problem, and hopefully foster fruitful long-term collaboration.

3rd International Workshop on Software Protection (SPRO 2019)

  • Paolo Falcarin
  • Michael MiZu Zunke

Software Protection techniques aim to defend the confidentiality and integrity of software applications that are exposed to an adversary that shares the execution host and access privileges of the application. This scenario is often denoted as protection against MATE (Man-At-The-End) attacks. This is an area of growing importance for industry: in many cases the deployment of such techniques is crucial to ensure business continuity. Following the second SPRO workshop co-located with CCS-2016 in Vienna, Austria, this third edition aims to establish a tradition where academics and industrial experts in software protection can meet to confront the challenges in designing stronger protections and in developing better integration with industrial software development life cycle requirements.