Annual Computer Security Applications Conference (ACSAC) 2021


Learning from Authoritative Security Experiment Results (LASER) Workshop

Tuesday, 7 December 2021
10:00 - 16:45

See the original Call for Participation

Workshop Overview

The LASER workshop series focuses on learning from and improving cybersecurity experiment results. The workshop strives to provide a highly interactive, collegial environment for discussing and learning from experimental methodologies, execution, and results. Ultimately, the workshop seeks to foster a dramatic change in the experimental paradigm for cybersecurity research, improving the overall quality and reporting of practiced science.

The LASER workshop invites broad participation by the community, including (1) authors of accepted papers from major cybersecurity conferences to present and discuss the experimental aspects of their work, and (2) others interested in contributing to and learning from such discussions and interaction.

Conference papers all too often must focus on research results and offer only limited discussion of the experimental aspects of the work (perhaps a small section of a few paragraphs at the end of the paper). LASER provides an opportunity to focus on and explore the experimental approaches and methodologies used to obtain the research results.

The LASER workshop not only provides authors of accepted papers the opportunity to present and discuss the experimental aspects of their work with other workshop participants, but also the option to write new published papers that expand on the experimental aspects of their work.

Workshop Format

The workshop will be structured as a true “workshop” in the sense that it will focus on discussion and interaction around the topic of experimental methodologies, execution, and results with the goal of encouraging improvements in experimental science in cybersecurity research. Authors will lead the group in a discussion of the experimental aspects of their work.

Areas of interest include, but are not limited to, the following:

As a group, participants will discuss these areas and answer interesting questions such as:

Preliminary Program


  09:45 – 10:00


  10:00 – 10:15

  Welcome, Goals, Organization     SLIDES

  10:15 – 11:45

  Session 1: Paper Discussions

Under the Hood of MARVEL
Antonio Ruggia

Methodological Challenges In Investigating the User Experience of Cyber Threat Intelligence Data Sharing Platforms
Gabriele Lenzini, Borce Stojkovski

  11:45 – 12:00


  12:00 – 13:00

  Session 2: Keynote Talk

Using Co-Simulation for Model Reuse and Experiment Reproducibility
Thomas Roth, NIST

  13:00 – 13:15


  13:15 – 14:45

  Session 3: Paper Discussions

Dissecting ARID: Implementing and Evaluating Security Solutions on Open-Source Drones
Pietro Tedeschi, Savio Sciancalepore, Roberto Di Pietro

Evaluating Fast Speech Based Adversarial Audio Attack
Edwin Yang

  14:45 – 15:00


  15:00 – 16:30

  Session 4: Paper Discussions

A Proof of Concept for Usability and Efficacy Evaluations as a Component of IETF Standards Using MUD
Vafa Andalibi, Jayati Dev

An Experimental Approach to Evaluate the Security of Mobile Autofill Frameworks on iOS and Android
Sean Oesch

  16:30 – 16:45

  Wrap-up     SLIDES


Workshop Papers

Participants in the LASER Workshop are invited to write new papers on their experimental work. The papers will be published in post-workshop proceedings. The new papers will be driven and guided, in part, by the discussions and interactions, and possibly even new collaborations, forged at the workshop.

Draft papers will be due approximately two months after the workshop. The program committee will review papers and provide notifications and feedback one month after submission. Final camera-ready papers will be due approximately one month later.

Important Dates

LASER Workshop @ ACSAC: December 7, 2021
Draft Papers Submitted: February 7, 2022
Paper Reviews and Feedback: March 7, 2022
Final Papers Submitted: April 7, 2022
Papers Published: May 7, 2022


Workshop Organizers

David Balenson (SRI International)
Laura S. Tinnel (SRI International)
Terry Benzel (USC-ISI)

Further Information

Please see the LASER Workshop website for more information. Send questions to the workshop organizers.

Keynote Talk

Using Co-Simulation for Model Reuse and Experiment Reproducibility
Thomas Roth, Group Leader, NIST

Abstract: Internet of Things (IoT) systems are networked systems of humans and devices that both sense and actuate changes on a shared physical space to improve quality of life. Because IoT control decisions impact the environment and its human occupants, such systems must be protected against both faults and malicious attacks. However, the combination of cost, geographic scale, and the inability to run destructive experiments on an operational system makes it difficult to assess IoT trustworthiness. One simulation technique that addresses this challenge is to integrate multiple domain-specific solutions to execute a joint simulation, called a co-simulation. The integration of multiple simulators results in a system-of-systems whose combined execution more accurately represents the real-world conditions of deployed IoT. This presentation will introduce the concept of co-simulation and describe U.S. National Institute of Standards and Technology (NIST) research activities toward reusable models and reproducible co-simulations based on the NIST Cyber-Physical Systems Framework.


Speaker Bio:

Thomas Roth leads the IoT Devices and Infrastructure group at the U.S. National Institute of Standards and Technology (NIST). His research focuses on the model-based design and simulation of complex Internet of Things systems, including the use of simulation to assess system trustworthiness. He is an expert on the IEEE High Level Architecture (HLA) co-simulation standard, and the main developer and co-creator of the NIST co-simulation platform, the Universal CPS Environment for Federation (UCEF).

Detailed Paper & Author Information

Under the Hood of MARVEL

Abstract: A growing trend in repackaging attacks exploits the Android virtualization technique, in which malicious code can run together with the victim app in a virtual container. In such a scenario, the attacker can directly build a malicious container capable of hosting the victim app instead of tampering with it, thus bypassing every anti-repackaging protection developed so far. To mitigate this problem, we proposed MARVEL, the first methodology that prevents both traditional and virtualization-based repackaging attacks. MARVEL relies on the virtualization technique to build a secure virtual environment where protected apps can run and be checked at runtime. To assess the viability and reliability of our protection scheme, we implemented it in a tool named MARVELoid.

In this talk, we detail the experimental campaign conducted to evaluate MARVELoid, highlighting the advantages of our approach. In particular, we discuss the different aspects we evaluated, how we addressed them, and, finally, some open challenges. The first set of experiments aims to evaluate the number of protections injected by MARVELoid and its processing time. To do so, we protected 4,000 apps with 24 different configurations of the MARVELoid protection parameters (i.e., 96,000 protection combinations). We then evaluated the runtime overhead introduced by our solution, measuring CPU and memory overhead.


Speaker Bio:

Antonio Ruggia is a Ph.D. student in Computer Engineering at the University of Genoa. He is interested in several security topics, including mobile security, with a specific focus on Android security, Android malware, and data protection. He graduated from the University of Genoa in October 2020 and participated in the 2019 edition of an Italian practical cybersecurity competition for students. He also takes part in international Capture-The-Flag (CTF) competitions. Since 2018, he has been working as a full-stack developer at a multinational corporation.

Methodological Challenges In Investigating the User Experience of Cyber Threat Intelligence Data Sharing Platforms

Abstract: Cyber Threat Intelligence (CTI) data sharing platforms are today essential tools for increasing resilience against cyber attacks. They connect organizations, increase awareness, offer tools for investigation and analysis, and support decision making. This potential can be exploited in full only if CTI platforms offer a high level of user experience and usability; otherwise they risk, at the very least, being misused, if not potentially compromising the socio-technical confidentiality of the data. In this talk we describe our experience in gathering information from users of a particular CTI platform, MISP. We discuss the challenges we faced while devising a methodology to reach out to MISP users, who are generally unknown or prefer to remain in the shadows; our strategy to preserve external validity; and our final methodological choices, commenting on the ones we considered but discarded, the ones we eventually adopted, and the ones we could have adopted but did not, for lack of time or for reasons beyond our control.


Speaker Bios:

Prof. Gabriele Lenzini, head of the Interdisciplinary Research in Sociotechnical Cybersecurity (IRiSC) group at the University of Luxembourg's Interdisciplinary Centre for Security, Reliability and Trust (SnT), is a security expert with more than 15 years of experience in the design and analysis of secure systems.

Borce Stojkovski is a late-stage Ph.D. student and member of IRiSC who has made the analysis of user experience (UX) and its implications for systems security the main mission of his doctoral research. He has expertise in systems security and in social science research methods, specifically usability and user experience.

Dissecting ARID: Implementing and Evaluating Security Solutions on Open-Source Drones

Abstract: We provide insights into the implementation and performance evaluation of ARID, the first solution allowing anonymous remote identification of commercial drones. Our implementation, whose source code has been publicly released as open source, leverages popular libraries and tools, such as the Poky OS (a reference distribution of the Yocto Project), MAVLink, and OpenSSL, supported by the large majority of commercial drones. We describe the rationale driving the selection of these tools, their integration into a running proof-of-concept on the popular 3DR Solo drone, and the details of the performance evaluation of our solution. The outstanding results obtained through experimentation on our proof-of-concept contribute to enhancing the impact of ARID, demonstrating its deployability and its ability to improve the quality of the security services provided in real-world UAV systems.


Speaker Bio:

Pietro Tedeschi obtained a Ph.D. in Computer Science and Engineering from Hamad Bin Khalifa University, Doha, Qatar, in 2021; a B.Sc. in Computer and Automation Engineering in 2014; and an M.Sc. (with honors) in Computer Engineering in 2017, both from the Politecnico di Bari, with two theses in the network security field. From 2017 to 2018, he worked as a Security Researcher at CNIT (Consorzio Nazionale Interuniversitario per le Telecomunicazioni), Italy, on the EU H2020 SymbIoTe project. His research interests include Unmanned Aerial Vehicle security, maritime security, wireless security, the Internet of Things (IoT), applied cryptography, privacy-preserving systems, and cyber-physical systems security.

Evaluating Fast Speech Based Adversarial Audio Attack

Abstract: Automatic Speech Recognition (ASR) systems can mistranscribe human speech due to pronunciation or speech rate. In this research, we propose CommanderGabble, a universal attack against various ASR systems that exploits the impact of fast speech to generate adversarial audio with hidden voice commands. We implemented and evaluated a prototype of the proposed attack by launching the adversarial audio attack in real-life environments. In this talk, we discuss the challenges in designing our experiments to evaluate the proposed attack's effectiveness and robustness. We also discuss the detailed experiment procedure for three attack scenarios, i.e., the over-the-wire attack against transcription services, the over-the-air attack against commercial voice assistant devices, and the human comprehension test. The experiment results indicate that CommanderGabble achieves consistent performance in real-world environments, while none of the participants correctly recognized the adversarial audio samples that were played.


Speaker Bio:

Edwin Yang is a Ph.D. student in Computer Science at the University of Oklahoma. He received his M.S. degree from Yonsei University, Seoul, Korea, in 2017. His research interests are in the area of mobile system security and Internet-of-things (IoT) security.

A Proof of Concept for Usability and Efficacy Evaluations as a Component of IETF Standards Using MUD

Abstract: The Internet Engineering Task Force (IETF) designed the Manufacturer Usage Description (MUD) standard to protect IoT devices through an isolation-based defense enforced on the network. In practice, manufacturers define behavioral profiles in the form of an access control list for their devices. This access control list is embedded in a "MUD-File", which is transferred to the user's network during the onboarding process and may contain anywhere from one to hundreds of rules.

Few IETF standards have been evaluated for usability or acceptability, and this lack of attention is a cause of the slow adoption of protocols such as RPKI and IPv6. Given the number and complexity of MUD rules, generating the files or validating them for multiple devices on the network can be a challenge, particularly when devices interact. In response, MUD-Visualizer was designed to simplify the validation of individual and interacting MUD-Files through straightforward visualizations.

Here we summarize the development, experimental methodology, and results of evaluating the efficacy of MUD-Visualizer. We measure its accuracy, its dependency on computer knowledge, the speed of correctly identifying MUD-Files, and the usability of MUD-Visualizer as a security tool. We relied on quantifiable usability instruments such as the System Usability Scale (SUS), statistical tests, and regression testing. By illustrating a research-by-design approach for MUD-Visualizer, we argue for the importance of integrating implicit requirements, such as the security knowledge of the workforce, into discussions of emerging standards.


Speaker Bios:

Vafa Andalibi is a computer science Ph.D. candidate at the Luddy School of Informatics, Computing, and Engineering. His research focuses on applying machine learning techniques to protect the security and privacy of end users. He works on three main projects across three layers of the network. At the application layer, he works on browser fingerprinting defenses to protect users' privacy. The other two projects focus on protecting IoT devices: at the network layer, his research focuses on a recent IETF standard called Manufacturer Usage Description (MUD), and close to the physical layer, he works on developing methods and tools that facilitate the reverse engineering of stripped IoT firmware binaries for vulnerability hunting.

Jayati Dev is a doctoral student in Security Informatics, with a minor in Human-Computer Interaction design at the School of Informatics, Computing, and Engineering at Indiana University Bloomington. Her undergraduate training was in Electronics and Communication Engineering where she worked on the cryptographic implementation of authenticated encryption algorithms on modern processors. Her current research focus is human-centered design for enhanced privacy and security in conversational applications and IoT devices, especially for culturally distinct populations.

An Experimental Approach to Evaluate the Security of Mobile Autofill Frameworks on iOS and Android

Abstract: Password managers help users more effectively manage their passwords, encouraging them to adopt stronger passwords across their many accounts. In contrast to desktop systems, where password managers receive no system-level support, mobile operating systems provide autofill frameworks designed to integrate with password managers to provide secure and usable autofill for browsers and other apps installed on mobile devices. In this talk, we discuss the methodologies used in our holistic security evaluation of the autofill frameworks on iOS and Android, examining whether they achieve substantive benefits over the ad-hoc desktop environment or become a problematic single point of failure. Our methodology was built on a systematic review of prior literature that identified the key properties of secure autofill. We then evaluated these properties on both iOS and Android using a custom website to iteratively expose common autofill vulnerabilities in the browser, and custom mobile apps built to evaluate specific properties of autofill in the app context. We replicated prior work that examined how credentials are mapped to apps and domains, and enumerated the autofill frameworks to understand the available attack surface. Our results show that while the frameworks address several common issues (e.g., requiring user interaction before autofill), they also enforce insecure behavior and fail to provide the password managers implemented using the frameworks with sufficient information to override this incorrect behavior.


Speaker Bio:

Dr. Sean Oesch is a software architect turned security researcher. During his time at Oak Ridge National Laboratory, he has worked on a wide variety of projects, from cyber forensics to using machine learning (ML) to optimize fuel efficiency at stoplights. His recent research focuses on applications of ML to cybersecurity, as well as usable security. He completed his PhD at the University of Tennessee, where he did his dissertation work on authentication technologies.


