publications
* denotes equal contribution. You can also find my articles on Google Scholar.
2022
- On Controller Tuning with Time-Varying Bayesian Optimization
  Paul Brunzema*, Alexander von Rohr*, and Sebastian Trimpe
  In Proceedings of the IEEE Conference on Decision and Control, 2022
Changing conditions or environments can cause system dynamics to vary over time. To ensure optimal control performance, controllers should adapt to these changes. When the underlying cause and time of change is unknown, we need to rely on online data for this adaptation. In this paper, we use time-varying Bayesian optimization (TVBO) to tune controllers online in changing environments using appropriate prior knowledge on the control objective and its changes. Two properties are characteristic of many online controller tuning problems: First, they exhibit incremental and lasting changes in the objective due to changes to the system dynamics, e.g., through wear and tear. Second, the optimization problem is convex in the tuning parameters. Current TVBO methods do not explicitly account for these properties, resulting in poor tuning performance and many unstable controllers through over-exploration of the parameter space. We propose a novel TVBO forgetting strategy using Uncertainty-Injection (UI), which incorporates the assumption of incremental and lasting changes. The control objective is modeled as a spatio-temporal Gaussian process (GP) with UI through a Wiener process in the temporal domain. Further, we explicitly model the convexity assumptions in the spatial dimension through GP models with linear inequality constraints. In numerical experiments, we show that our model outperforms the state-of-the-art method in TVBO, exhibiting reduced regret and fewer unstable parameter configurations.
@inproceedings{brunzema2022controller,
  author    = {Brunzema*, Paul and {von Rohr*}, Alexander and Trimpe, Sebastian},
  title     = {On Controller Tuning with Time-Varying Bayesian Optimization},
  booktitle = {Proceedings of the IEEE Conference on Decision and Control},
  pages     = {4046--4052},
  year      = {2022},
  doi       = {10.1109/CDC51059.2022.9992649},
}
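To make the modeling idea concrete, here is a minimal numpy sketch of a spatio-temporal GP in which an RBF kernel over the tuning parameters is multiplied by a Wiener-process covariance in time, so that predictive uncertainty about old observations grows instead of old data being discarded. The kernel form and all hyperparameters are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def rbf(x, xp, ell=0.5):
    # Spatial RBF kernel over 1-D tuning parameters (illustrative).
    d = np.subtract.outer(x, xp)
    return np.exp(-0.5 * (d / ell) ** 2)

def wiener(t, tp, sigma_w=0.1, t0=1.0):
    # Wiener-process covariance min(t, t'), shifted by t0 > 0 so the
    # very first observation also has nonzero prior variance.
    return sigma_w ** 2 * np.minimum.outer(t + t0, tp + t0)

def posterior(X, T, y, Xs, Ts, noise=1e-2):
    # Standard GP regression with the product (spatio-temporal) kernel.
    K = rbf(X, X) * wiener(T, T) + noise * np.eye(len(X))
    Ks = rbf(Xs, X) * wiener(Ts, T)
    kss = rbf(Xs, Xs).diagonal() * wiener(Ts, Ts).diagonal()
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    V = np.linalg.solve(L, Ks.T)
    return Ks @ alpha, kss - np.sum(V ** 2, axis=0)

# The predictive variance at a fixed parameter grows with the query
# time Ts, so older observations lose influence gradually, matching
# the "incremental and lasting changes" assumption.
```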
- Improving the Performance of Robust Control through Event-Triggered Learning
  Alexander von Rohr, Friedrich Solowjow, and Sebastian Trimpe
  In Proceedings of the IEEE Conference on Decision and Control, 2022
Robust controllers ensure stability in feedback loops designed under uncertainty but at the cost of performance. Model uncertainty in time-invariant systems can be reduced by recently proposed learning-based methods, thus improving the performance of robust controllers using data. However, in practice, many systems also exhibit uncertainty in the form of changes over time, e.g., due to weight shifts or wear and tear, leading to decreased performance or instability of the learning-based controller. We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem with rare or slow changes. Our key idea is to switch between robust and learned controllers. For learning, we first approximate the optimal length of the learning phase via Monte-Carlo estimations using a probabilistic model. We then design a statistical test for uncertain systems based on the moment-generating function of the LQR cost. The test detects changes in the system under control and triggers re-learning when control performance deteriorates due to system changes. We demonstrate improved performance over a robust controller baseline in a numerical example.
@inproceedings{rohr2022improving,
  author    = {{von Rohr}, Alexander and Solowjow, Friedrich and Trimpe, Sebastian},
  title     = {Improving the Performance of Robust Control through Event-Triggered Learning},
  booktitle = {Proceedings of the IEEE Conference on Decision and Control},
  pages     = {3424--3430},
  year      = {2022},
  doi       = {10.1109/CDC51059.2022.9993350},
}
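As a rough illustration of the trigger described in the abstract, the following sketch derives a Chernoff-style threshold from a Monte-Carlo estimate of the cost's moment-generating function and flags a re-learning event when the observed cost exceeds it. The threshold construction and all names are simplified stand-ins for the paper's statistical test, not its exact form.

```python
import numpy as np

def chernoff_threshold(cost_samples, delta=0.05,
                       thetas=np.linspace(1e-3, 0.5, 50)):
    # For each theta > 0, a Chernoff bound gives
    #   P(C >= c) <= E[exp(theta * C)] * exp(-theta * c).
    # Setting the right-hand side to delta and solving for c yields a
    # threshold; we take the tightest one over a grid of thetas, with
    # the MGF estimated by Monte Carlo from cost_samples drawn under
    # the nominal (learned) model.
    thresholds = []
    for theta in thetas:
        mgf = np.mean(np.exp(theta * cost_samples))
        thresholds.append((np.log(mgf) - np.log(delta)) / theta)
    return min(thresholds)

def triggered(observed_cost, threshold):
    # Re-learning event: the observed cost is implausible under the
    # nominal model at confidence level 1 - delta.
    return observed_cost > threshold
```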
- Event-Triggered Time-Varying Bayesian Optimization (preprint)
  Paul Brunzema, Alexander von Rohr, Friedrich Solowjow, and Sebastian Trimpe
  arXiv, 2022
We consider the problem of sequentially optimizing a time-varying objective function using time-varying Bayesian optimization (TVBO). Here, the key challenge is to cope with old data. Current approaches to TVBO require prior knowledge of a constant rate of change. However, the rate of change is usually neither known nor constant. We propose an event-triggered algorithm, ET-GP-UCB, that detects changes in the objective function online. The event trigger is based on probabilistic uniform error bounds used in Gaussian process regression. The trigger automatically detects when a significant change in the objective function occurs. The algorithm then adapts to the temporal change by resetting the accumulated dataset. We provide regret bounds for ET-GP-UCB and show in numerical experiments that it is competitive with state-of-the-art algorithms even though it requires no knowledge about the temporal changes. Further, ET-GP-UCB outperforms these baselines if the rate of change is misspecified, and we demonstrate that it is readily applicable to various settings without tuning hyperparameters.
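A minimal sketch of the reset-on-trigger idea, assuming a scikit-learn GP surrogate: when a new observation falls outside the model's confidence band, the accumulated dataset is discarded. The bound parameter beta and the trigger form are simplifications of the probabilistic uniform error bounds used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def et_gp_ucb_step(gp, X, y, x_new, y_new, beta=3.0, noise=0.1):
    # Predictive mean and std at the new query point.
    mu, std = gp.predict(x_new.reshape(1, -1), return_std=True)
    if abs(y_new - mu[0]) > beta * (std[0] + noise):
        # Event: the observation contradicts the model's error bound,
        # so the objective has likely changed; forget old data.
        X, y = np.empty((0, x_new.size)), np.empty(0)
    X = np.vstack([X, x_new])
    y = np.append(y, y_new)
    gp.fit(X, y)
    return gp, X, y

# Usage (illustrative hyperparameters): start with empty X, y and
# gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=0.1 ** 2),
# then call et_gp_ucb_step after every function evaluation.
```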
2021
- Probabilistic robust linear quadratic regulators with Gaussian processes
  Alexander von Rohr, Matthias Neumann-Brosig, and Sebastian Trimpe
  In Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design. While learning-based control has the potential to yield superior performance in demanding applications, robustness to uncertainty remains an important challenge. Since Bayesian methods quantify uncertainty of the learning results, it is natural to incorporate these uncertainties in a robust design. In contrast to most state-of-the-art approaches that consider worst-case estimates, we leverage the learning methods’ posterior distribution in the controller synthesis. The result is a more informed and thus efficient trade-off between performance and robustness. We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin. The formulation is based on a recently proposed algorithm for linear quadratic control synthesis, which we extend by giving probabilistic robustness guarantees in the form of credibility bounds for the system’s stability. Comparisons to existing methods based on worst-case and certainty-equivalence designs reveal superior performance and robustness properties of the proposed method.
@inproceedings{vonrohr2021probabilistic,
  author    = {{von Rohr}, Alexander and Neumann-Brosig, Matthias and Trimpe, Sebastian},
  title     = {Probabilistic robust linear quadratic regulators with Gaussian processes},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages     = {324--335},
  year      = {2021},
  editor    = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume    = {144},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}
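The credibility bound on stability can be pictured as a Monte-Carlo check over the learned posterior. The sketch below assumes a hypothetical sample_model() that returns one (A, B) draw of the linearized discrete-time dynamics from the GP posterior; the controller synthesis step itself is abstracted away.

```python
import numpy as np

def stability_credibility(sample_model, K, n=1000):
    # Estimate the posterior probability ("credibility") that the
    # gain K stabilizes the true system: draw models from the learned
    # posterior and check the closed-loop spectral radius, assuming
    # discrete-time dynamics x+ = (A + B K) x.
    stable = 0
    for _ in range(n):
        A, B = sample_model()  # one posterior draw (assumed available)
        if np.max(np.abs(np.linalg.eigvals(A + B @ K))) < 1.0:
            stable += 1
    return stable / n

# A synthesis routine could then accept K only if this estimate
# exceeds the desired probabilistic stability margin, e.g. 0.95.
```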
- Local policy search with Bayesian optimization
  Sarah Müller*, Alexander von Rohr*, and Sebastian Trimpe
  In Advances in Neural Information Processing Systems, 2021
Reinforcement learning (RL) aims to find an optimal policy by interaction with an environment. Consequently, learning complex behavior requires a vast number of samples, which can be prohibitive in practice. Nevertheless, instead of systematically reasoning and actively choosing informative samples, policy gradients for local search are often obtained from random perturbations. These random samples yield high variance estimates and hence are sub-optimal in terms of sample complexity. Actively selecting informative samples is at the core of Bayesian optimization, which constructs a probabilistic surrogate of the objective from past samples to reason about informative subsequent ones. In this paper, we propose to join both worlds. We develop an algorithm utilizing a probabilistic model of the objective function and its gradient. Based on the model, the algorithm decides where to query a noisy zeroth-order oracle to improve the gradient estimates. The resulting algorithm is a novel type of policy search method, which we compare to existing black-box algorithms. The comparison reveals improved sample complexity and reduced variance in extensive empirical evaluations on synthetic objectives. Further, we highlight the benefits of active sampling on popular RL benchmarks.
@inproceedings{mueller2021local,
  author    = {M\"{u}ller*, Sarah and {von Rohr*}, Alexander and Trimpe, Sebastian},
  title     = {Local policy search with Bayesian optimization},
  booktitle = {Advances in Neural Information Processing Systems},
  pages     = {20708--20720},
  year      = {2021},
  editor    = {Ranzato, M. and Beygelzimer, A. and Dauphin, Y. and Liang, P.S. and Vaughan, J. Wortman},
  publisher = {Curran Associates, Inc.},
  volume    = {34},
}
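The central object in this method is a model-based gradient estimate. The sketch below computes the posterior mean of the gradient from a GP with an RBF kernel fitted to noisy zeroth-order evaluations; kernel choice and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rbf(X, Xp, ell=0.3):
    # RBF kernel between row-stacked parameter vectors.
    d2 = np.sum((X[:, None, :] - Xp[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_gradient_mean(X, y, x_star, ell=0.3, noise=1e-2):
    # Posterior mean of grad f at x_star: since differentiation is
    # linear, it is grad_x* k(x_star, X) @ K^{-1} y.
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    k_star = rbf(x_star[None, :], X, ell)[0]           # shape (n,)
    # d/dx* k(x*, x_i) = -(x* - x_i) / ell^2 * k(x*, x_i)
    dk = -(x_star[None, :] - X) / ell ** 2 * k_star[:, None]
    return dk.T @ alpha                                 # shape (d,)

# A gradient-ascent step on the policy parameters can then use this
# estimate in place of a random-perturbation gradient, and new
# queries can be placed where they most reduce its uncertainty.
```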
2020
- A Learnable Safety Measure
  Steve Heim*, Alexander von Rohr*, Sebastian Trimpe, and Alexander Badri-Spröwitz
  In Proceedings of the Conference on Robot Learning, 2020
Failures are challenging for learning to control physical systems since they risk damage, time-consuming resets, and often provide little gradient information. Adding safety constraints to exploration typically requires a lot of prior knowledge and domain expertise. We present a safety measure which implicitly captures how the system dynamics relate to a set of failure states. Not only can this measure be used as a safety function, but also to directly compute the set of safe state-action pairs. Further, we show a model-free approach to learn this measure by active sampling using Gaussian processes. While safety can only be guaranteed after learning the safety measure, we show that failures can already be greatly reduced by using the estimated measure during learning.
@inproceedings{heim2020learnable,
  author    = {Heim*, Steve and {von Rohr*}, Alexander and Trimpe, Sebastian and Badri-Spr\"{o}witz, Alexander},
  title     = {A Learnable Safety Measure},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {627--639},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}
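As a schematic of the active-sampling idea, the following sketch labels a state-action pair safe when a GP's lower confidence bound on the learned measure is positive, and queries where the model is most uncertain among the safe pairs. The threshold, beta, and candidate set are placeholders, not the paper's exact criteria; the gp is assumed to be fitted on observed measure values.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def safe_set(gp, candidates, beta=2.0):
    # Pairs whose lower confidence bound on the safety measure is
    # positive are treated as (estimated) safe.
    mu, std = gp.predict(candidates, return_std=True)
    return candidates[mu - beta * std > 0.0]

def next_query(gp, candidates, beta=2.0):
    # Active sampling: among estimated-safe pairs, query where the
    # model of the measure is most uncertain.
    mu, std = gp.predict(candidates, return_std=True)
    safe = mu - beta * std > 0.0
    if not safe.any():
        return None  # no provably safe candidate under the estimate
    return candidates[np.argmax(np.where(safe, std, -np.inf))]
```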
- Excursion Search for Constrained Bayesian Optimization under a Limited Budget of Failures (preprint)
  Alonso Marco, Alexander von Rohr, Dominik Baumann, José Miguel Hernández-Lobato, and Sebastian Trimpe
  arXiv, 2020
When learning to ride a bike, a child falls down a number of times before achieving the first success. As falling down usually has only mild consequences, it can be seen as a tolerable failure in exchange for a faster learning process, as it provides rich information about an undesired behavior. In the context of Bayesian optimization under unknown constraints (BOC), typical strategies for safe learning explore conservatively and avoid failures by all means. On the other side of the spectrum, non-conservative BOC algorithms that allow failing may fail an unbounded number of times before reaching the optimum. In this work, we propose a novel decision maker grounded in control theory that controls the amount of risk we allow in the search as a function of a given budget of failures. Empirical validation shows that our algorithm uses the failure budget more efficiently than state-of-the-art methods in a variety of optimization experiments and generally achieves lower regret. In addition, we propose an original algorithm for unconstrained Bayesian optimization inspired by the notion of excursion sets in stochastic processes, upon which the failures-aware algorithm is built.
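The budget-aware risk control can be caricatured with a simple allocation rule: spend the remaining failure budget evenly over the remaining iterations, and shrink the acceptable risk to zero once the budget is gone. The linear rule below is a toy stand-in for the control-theoretic decision maker proposed in the paper.

```python
def allowed_risk(failures_left, iters_left, risk_max=0.5):
    # Acceptable per-iteration failure probability, given the
    # remaining budget of failures and remaining iterations.
    if iters_left <= 0 or failures_left <= 0:
        return 0.0  # budget exhausted: only risk-free queries allowed
    # Spend the budget evenly over the remaining iterations, capped.
    return min(risk_max, failures_left / iters_left)

# An acquisition step would then restrict candidates to those whose
# predicted failure probability is below allowed_risk(...).
```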
2018
- Gait learning for soft microrobots controlled by light fields
  Alexander von Rohr, Sebastian Trimpe, Alonso Marco, Peer Fischer, and Stefano Palagi
  In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018
Soft microrobots based on photoresponsive materials and controlled by light fields can generate a variety of different gaits. This inherent flexibility can be exploited to maximize their locomotion performance in a given environment and used to adapt them to changing conditions. However, because of the lack of accurate locomotion models, and given the intrinsic variability among microrobots, analytical control design is not possible. Common data-driven approaches, on the other hand, require running prohibitive numbers of experiments and lead to very sample-specific results. Here we propose a probabilistic learning approach for light-controlled soft microrobots based on Bayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach results in a learning scheme that is data-efficient, enabling gait optimization with a limited experimental budget, and robust against differences among microrobot samples. These features are obtained by designing the learning scheme through the comparison of different GP priors and BO settings on a semi-synthetic data set. The developed learning scheme is validated in microrobot experiments, resulting in a 115% improvement in a microrobot’s locomotion performance with an experimental budget of only 20 tests. These encouraging results lead the way toward self-adaptive microrobotic systems based on light-controlled soft microrobots and probabilistic learning control.
@inproceedings{vonrohr2018gait,
  author    = {{von Rohr}, Alexander and Trimpe, Sebastian and Marco, Alonso and Fischer, Peer and Palagi, Stefano},
  title     = {Gait learning for soft microrobots controlled by light fields},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems},
  pages     = {6199--6206},
  year      = {2018},
  doi       = {10.1109/IROS.2018.8594092},
}
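The experimental setting maps onto a textbook BO loop with a small budget. The sketch below uses a scikit-learn GP with a Matérn kernel and an upper-confidence-bound rule over a candidate grid of gait parameters; evaluate_gait() is a hypothetical stand-in for one physical microrobot experiment, and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt(evaluate_gait, candidates, budget=20, beta=2.0):
    # candidates: 2-D array, one row per candidate gait parameter set.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2)
    X, y = [], []
    x = candidates[np.random.randint(len(candidates))]  # random start
    for _ in range(budget):
        X.append(x)
        y.append(evaluate_gait(x))        # one physical experiment
        gp.fit(np.array(X), np.array(y))
        mu, std = gp.predict(candidates, return_std=True)
        x = candidates[np.argmax(mu + beta * std)]  # UCB acquisition
    best = int(np.argmax(y))
    return X[best], y[best]
```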