Learning Disease Progression Models That Capture Health Disparities
Erica Chiang, Divya Shanmugam, Ashley Beecy, Gabriel Sayer, Deborah Estrin, Nikhil Garg, Emma Pierson
CHIL 2025
[abstract] [pdf] [twitter]
Disease progression models are widely used to inform the diagnosis and treatment of many progressive diseases. However, a significant limitation of existing models is that they do not account for health disparities that can bias the observed data. To address this, we develop an interpretable Bayesian disease progression model that captures three key health disparities: certain patient populations may (1) start receiving care only when their disease is more severe, (2) experience faster disease progression even while receiving care, or (3) receive follow-up care less frequently conditional on disease severity. We show theoretically and empirically that failing to account for any of these disparities can result in biased estimates of severity (e.g., underestimating severity for disadvantaged groups). On a dataset of heart failure patients, we show that our model can identify groups that face each type of health disparity, and that accounting for these disparities while inferring disease severity meaningfully shifts which patients are considered high-risk.
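Disparity (1) above — some groups starting care only at higher severity — is enough to bias a naive estimator. A minimal simulation sketch (hypothetical numbers and group labels, not the paper's model or data) shows how pooling severity-at-entry across groups underestimates the disadvantaged group's severity:

```python
import random

random.seed(0)

# Hypothetical numbers, not from the paper: severity at first visit.
# Group A enters care early; group B (a disadvantaged group facing
# disparity (1)) starts receiving care only when disease is more severe.
sev_A = [random.gauss(2.0, 0.5) for _ in range(1000)]
sev_B = [random.gauss(4.0, 0.5) for _ in range(1000)]

# A model that ignores the disparity pools both groups and fits a single
# severity-at-entry, which underestimates group B's typical severity.
pooled_estimate = sum(sev_A + sev_B) / (len(sev_A) + len(sev_B))
group_B_truth = sum(sev_B) / len(sev_B)
print(pooled_estimate < group_B_truth)  # True: group B is underestimated
```

A model that instead learns group-specific entry distributions, as the paper's Bayesian approach does for each disparity, avoids this bias.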
SurgeProtector: Mitigating Temporal Algorithmic Complexity Attacks using Adversarial Scheduling
Nirav Atre, Hugo Sadok, Erica Chiang, Weina Wang, and Justine Sherry
ACM SIGCOMM 2022
[abstract] [pdf]
Denial-of-Service (DoS) attacks are the bane of public-facing network deployments. Algorithmic complexity attacks (ACAs) are a class of DoS attacks where an attacker uses a small amount of adversarial traffic to induce a large amount of work in the target system, pushing the system into overload and causing it to drop packets from innocent users. ACAs are particularly dangerous because, unlike volumetric DoS attacks, ACAs don’t require a significant network bandwidth investment from the attacker. Today, network functions (NFs) on the Internet must be designed and engineered on a case-by-case basis to mitigate the debilitating impact of ACAs. Further, the resulting designs tend to be overly conservative in their attack mitigation strategy, limiting the innocent traffic that the NF can serve under common-case operation.
In this work, we propose a more general framework for making NFs resilient to ACAs. Our framework, SurgeProtector, uses the NF’s scheduler to mitigate the impact of ACAs via a very traditional scheduling algorithm: Weighted Shortest Job First (WSJF). To evaluate SurgeProtector, we propose a new metric of vulnerability called the Displacement Factor (DF), which quantifies the ‘harm per unit effort’ that an adversary can inflict on the system. We provide a novel adversarial analysis of WSJF and show that any system using this policy has a worst-case DF of only a small constant, whereas traditional schedulers place no upper bound on the DF. To show that SurgeProtector is not only theoretically but also practically robust, we integrate it into an open-source intrusion detection system (IDS). Under simulated attack, the SurgeProtector-augmented IDS suffers 90-99% lower innocent traffic loss than the original system.
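The WSJF ordering rule can be sketched with a priority queue keyed on the job-size-to-packet-size ratio (the class name and interface below are illustrative, not SurgeProtector's actual implementation):

```python
import heapq

class WSJFScheduler:
    """Weighted Shortest Job First: serve the packet with the smallest
    ratio of job size to packet size first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tiebreak among equal ratios

    def enqueue(self, packet, job_size, packet_size):
        ratio = job_size / packet_size
        heapq.heappush(self._heap, (ratio, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

# An adversarial packet that induces a huge amount of work per byte gets
# a large ratio, so it waits behind cheap innocent traffic rather than
# displacing it.
sched = WSJFScheduler()
sched.enqueue("attack", job_size=10_000, packet_size=100)    # ratio 100
sched.enqueue("innocent", job_size=200, packet_size=1_500)   # ratio ~0.13
print(sched.dequeue())  # "innocent"
```

This is the intuition behind the bounded Displacement Factor: to displace innocent work, the attacker must pay for it in bytes of its own traffic.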
Robust Heuristics: Attacks and Defenses on Job Size Estimation for WSJF Systems
Erica Chiang, Nirav Atre, Hugo Sadok, Weina Wang, and Justine Sherry
ACM SIGCOMM 2022: Poster and Demo Session

ACM Student Research Competition 2nd Place
[abstract] [poster] [pdf]
Packet scheduling algorithms control the order in which a system serves network packets, which can have a significant impact on system performance. Recent work in adversarial scheduling has shown that Weighted Shortest Job First (WSJF) -- scheduling packets by the ratio of job size to packet size -- significantly mitigates a system’s susceptibility to algorithmic complexity attacks (ACAs), a particularly dangerous class of Denial-of-Service (DoS) attacks. However, WSJF relies on knowledge of a packet’s job size, information that is not available a priori. In this work, we explore the theoretical implications of using heuristics for job size estimation. Further, we consider preemption as another technique that may help protect systems when job sizes are unknown. We find that heuristics with certain properties (e.g., estimated job-size-to-packet-size ratios that increase monotonically with the actual ratios, or step-function-based job categorization) can protect systems against ACAs with varying guarantees, while preemption alone does not.
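The step-function categorization mentioned above can be sketched as a coarse estimator that is monotone in the true ratio, so sorting by the estimate never inverts the true ordering across categories (the thresholds below are illustrative, not from the paper):

```python
def step_estimate(true_ratio, thresholds=(1.0, 10.0, 100.0)):
    """Map a true job-size-to-packet-size ratio to a coarse category.
    The category index is non-decreasing in the true ratio, which is the
    monotonicity property the guarantees rely on."""
    for category, t in enumerate(thresholds):
        if true_ratio <= t:
            return category
    return len(thresholds)

ratios = [0.5, 5.0, 50.0, 500.0]
estimates = [step_estimate(r) for r in ratios]
print(estimates)  # [0, 1, 2, 3] -- monotone in the true ratios
```

Under such an estimator, an adversarial packet with a very large true ratio still lands in the most expensive category and cannot masquerade as cheap traffic.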