DiffPhysible: Automated Simulations of Adversarial Attacks on Arbitrary Objects in Realistic Scenes

M. Hull and D.H. Chau
Georgia Institute of Technology, Georgia, United States

Keywords: Adversarial Machine Learning, Differentiable Rendering, Simulation, DNN

Deep learning models, such as those used in autonomous vehicles, are vulnerable to adversarial attacks in which an attacker places a perturbed object in the environment. Generating such adversarial objects in the digital space has been studied extensively; successfully transferring these attacks from the digital realm to the physical realm, however, remains challenging. The difficulty stems from the effort required to create such examples: meticulous fabrication and control over physical constraints, viewpoint variations, background dynamics, and sensor-related intricacies, among other factors. In response to these limitations, we introduce DIFFPHYSIBLE, a scenario creation tool that leverages differentiable rendering to accelerate progress in producing physical adversarial artifacts. DIFFPHYSIBLE empowers researchers to rapidly explore scenarios in the digital realm, offering a wide range of configurable options for designing experiments that create adversarial 3D objects. We will demonstrate DIFFPHYSIBLE, and invite the audience to try it, by producing an adversarial texture on a chosen object while controlling various scene parameters. During this demonstration, we will customize the scene, assign a target label, and show in real time how the altered texture causes the chosen object to be misclassified, emphasizing the potential of DIFFPHYSIBLE in real-world scenarios.
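To make the underlying optimization concrete, the following is a minimal, self-contained sketch of the kind of texture-attack loop that differentiable-rendering pipelines like the one described above rely on. It is not DIFFPHYSIBLE's implementation: a real system would backpropagate through an actual differentiable renderer and a DNN classifier, whereas here a lighting-scale function stands in for the renderer and a fixed linear model stands in for the classifier. All names, weights, and hyperparameters are illustrative assumptions.

```python
import math

TEX = 2       # number of texture elements (toy stand-in for texels)
CLASSES = 3
TARGET = 1    # attacker-chosen target label (illustrative)

# Hypothetical fixed linear "classifier" over the rendered image.
W = [[2.0, 0.0],   # class 0
     [0.0, 2.0],   # class 1
     [1.0, 1.0]]   # class 2

def render(texture, light=0.8):
    """Toy differentiable 'renderer': scales texels by a lighting factor."""
    return [light * t for t in texture]

def logits(image):
    """Linear scores for each class."""
    return [sum(w_i * x_i for w_i, x_i in zip(w, image)) for w in W]

def attack(texture, steps=300, lr=0.2, light=0.8):
    """Gradient descent on cross-entropy toward TARGET, chained
    through the renderer (d image / d texel = light)."""
    tex = list(texture)
    for _ in range(steps):
        z = logits(render(tex, light))
        m = max(z)                               # stable softmax
        exps = [math.exp(v - m) for v in z]
        s = sum(exps)
        p = [e / s for e in exps]
        for i in range(TEX):
            # dL/dtex_i = light * sum_k (p_k - 1[k==TARGET]) * W[k][i]
            grad = light * sum(
                (p[k] - (1.0 if k == TARGET else 0.0)) * W[k][i]
                for k in range(CLASSES))
            tex[i] -= lr * grad
            tex[i] = min(1.0, max(0.0, tex[i]))  # keep texels printable in [0, 1]
    return tex

clean = [0.9, 0.1]
adv = attack(clean)
pred = max(range(CLASSES), key=lambda k: logits(render(adv))[k])
```

After optimization, `pred` equals the attacker's target label even though the clean texture is classified differently; the clamp models the physical constraint that printed texture values stay in a bounded range, one of the scene parameters a tool like DIFFPHYSIBLE exposes.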