Louis Chen

Abstract: The Benjamini-Hochberg (BH) procedure is widely used to control the false discovery rate (FDR) in multiple testing. Applications of this control abound in drug discovery, forensics, anomaly detection, and, in particular, machine learning, ranging from nonparametric outlier detection to out-of-distribution detection and one-class classification methods. Because this control may be relied upon in safety- and security-critical contexts, we investigate its adversarial robustness. More precisely, we study under what conditions BH does and does not exhibit adversarial robustness, present a class of simple and easily implementable adversarial test-perturbation algorithms, and perform computational experiments. With our algorithms, we demonstrate that there are conditions under which BH’s control can be significantly broken with relatively few test-score perturbations (sometimes just one), and we provide non-asymptotic guarantees on the expected adversarial adjustment to the FDR. Our technical analysis reframes the BH procedure combinatorially as a “balls into bins” process and draws a connection to generalized ballot problems, which enables an information-theoretic approach for deriving non-asymptotic lower bounds.
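For readers unfamiliar with the procedure under attack, the standard BH procedure is short enough to sketch: sort the m p-values, find the largest k with p_(k) ≤ (k/m)·q, and reject the hypotheses with the k smallest p-values. The sketch below is a plain NumPy illustration of this textbook procedure, not the perturbation algorithms from the talk; the function name and signature are our own.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected by the BH procedure at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    # BH step-up rule: find the largest k (1-indexed) with p_(k) <= (k/m) * q
    below = sorted_p <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # 0-indexed position of that largest k
        reject[order[: k + 1]] = True     # reject the k smallest p-values
    return reject
```

Note that the step-up rule rejects every hypothesis ranked at or below the largest passing index, even those whose individual p-values exceed their own thresholds; this is exactly the cascading behavior that a single well-placed score perturbation can exploit.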

Bio: Louis Chen is an Assistant Professor of Operations Research at the Naval Postgraduate School. He earned his Ph.D. in Operations Research from MIT in 2019, where he was advised by David Simchi-Levi. His research interests lie broadly in distributionally robust optimization, adversarial machine learning, and decision-making under uncertainty in domains like resource allocation, networks, and defense. His work focuses on developing tractable models and deriving analytical/operational insights by drawing on areas including convex analysis, (stochastic and combinatorial) optimization, and learning. His research has been supported by the Air Force Office of Scientific Research (AFOSR).

https://louislchen.github.io/
