Learning and decision making under observer effects
Sarah Dean (Cornell University)
https://simons.berkeley.edu/talks/sarah-dean-cornell-university-2025-04-29
Theoretical Aspects of Trustworthy AI
In many modern engineering domains, the presence of "observer effects" creates interdependence between measurements and the underlying quantities of interest. In such settings, decisions (also called actions or inputs) both impact the underlying system state and determine what information is observed. Accounting for this dual role is crucial for designing reliable learning and decision-making algorithms, with applications ranging from robotics to personalized recommendation systems. In this talk, I will discuss recent work in the setting of partially observed dynamical systems with linear state transitions and bilinear observations. Inspired by the rich line of work on learning and control for linear systems, our goal is to understand how much (and which) data is necessary for reliable decision making.
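As one concrete instance of this model class (written here only for illustration; the talk's exact formulation may differ), the state can evolve linearly while the observation is bilinear in the input and the state:

    x_{t+1} = A x_t + B u_t + w_t,        y_t = u_t^\top C x_t + v_t,

where x_t is the hidden state, u_t the chosen input, y_t the observation, and w_t, v_t are noise terms. The input then plays the dual role described above: it drives the state through B and simultaneously selects which directions of the state are reflected in the measurement through u_t^\top C.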
First, we will consider learning from observations when the dynamics are unknown. The identification procedure involves heavy-tailed and dependent covariates. Nevertheless, we provide finite-data error bounds and a sample complexity analysis for inputs chosen according to a simple random design. Second, we consider the optimal control problem with the objective of minimizing a quadratic cost. Despite the similarity to standard linear quadratic Gaussian (LQG) control, the separation principle (SP) does not hold, nor is the optimal policy affine in the estimated state. Under certain conditions, the SP-based controller locally maximizes the cost rather than minimizing it, and instability can result from a loss of observability. I will conclude with a discussion of open questions. Based on joint work with Yahya Sattar, Sunmook Choi, Yassir Jedra, and Maryam Fazel.
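To illustrate why identification in this setting involves heavy-tailed, dependent covariates, the sketch below (Python/NumPy, with made-up matrices and a toy estimator; it is not the procedure from the talk) simulates a system with linear state transitions and bilinear observations under an i.i.d. Gaussian input design, then regresses observations onto products of current and past inputs, whose coefficients are bilinear analogs of Markov parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system (matrices made up for illustration; not from the talk).
n, m = 3, 2                          # state and input dimensions
A = 0.5 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # stable state transition
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))      # bilinear observation map: y_t = u_t^T C x_t + v_t
sigma_w, sigma_v = 0.01, 0.01
T, k = 20000, 6                      # trajectory length, number of lags kept as regressors

# Simulate one trajectory under a simple random (i.i.d. Gaussian) input design.
U = rng.standard_normal((T, m))
x = np.zeros(n)
y = np.zeros(T)
for t in range(T):
    y[t] = U[t] @ C @ x + sigma_v * rng.standard_normal()
    x = A @ x + B @ U[t] + sigma_w * rng.standard_normal(n)

# Unrolling the dynamics (with x_0 = 0) gives
#   y_t ~= sum_{j>=1} u_t^T (C A^{j-1} B) u_{t-j} + noise,
# so regressing y_t on kron(u_{t-j}, u_t), j = 1..k, targets vec(C A^{j-1} B).
# The covariates are products of Gaussians (heavy-tailed) and reuse inputs across
# time steps (dependent), which is the difficulty highlighted in the abstract.
Phi = np.array([
    np.concatenate([np.kron(U[t - j], U[t]) for j in range(1, k + 1)])
    for t in range(k, T)
])
theta, *_ = np.linalg.lstsq(Phi, y[k:], rcond=None)

# Compare the first recovered block with vec(C B) (column-major vectorization).
err = np.linalg.norm(theta[: m * m] - (C @ B).flatten(order="F"))
print("estimation error for C B:", err)
```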
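For reference, the SP-based baseline mentioned above is the standard certainty-equivalent LQG design for linear observations: a filter produces the state estimate \hat{x}_{t|t}, and the input applies the LQR gain to it,

    u_t^{SP} = -K_t \hat{x}_{t|t},    with K_t computed from (A, B, Q, R) alone,

for the quadratic cost E[ \sum_t x_t^\top Q x_t + u_t^\top R u_t ]. With bilinear observations the input also determines what is measured, so estimation and control can no longer be designed independently; it is exactly this decoupling that fails in the setting of the talk.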