No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions

Publications: Contribution to book/report/conference proceedings › Conference contribution

Standard

No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions. / Jin, Tiancheng; Liu, Junyan; Rouyer, Chloe et al.
37th Conference on Neural Information Processing Systems (NeurIPS 2023). 2023.

Harvard

Jin, T, Liu, J, Rouyer, C, Chang, W, Wei, C-Y & Luo, H 2023, No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions. in 37th Conference on Neural Information Processing Systems (NeurIPS 2023). <https://openreview.net/pdf?id=0WLMVDdvDF>

APA

Jin, T., Liu, J., Rouyer, C., Chang, W., Wei, C.-Y., & Luo, H. (2023). No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions. In 37th Conference on Neural Information Processing Systems (NeurIPS 2023). https://openreview.net/pdf?id=0WLMVDdvDF

Vancouver

Jin T, Liu J, Rouyer C, Chang W, Wei CY, Luo H. No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions. In: 37th Conference on Neural Information Processing Systems (NeurIPS 2023). 2023.

Author

Jin, Tiancheng ; Liu, Junyan ; Rouyer, Chloe et al. / No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions. 37th Conference on Neural Information Processing Systems (NeurIPS 2023). 2023.

BibTeX

@inproceedings{48480fd23069472199ff135ccf110105,
title = "No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions",
abstract = "Existing online learning algorithms for adversarial Markov Decision Processes achieve O(√T) regret after T rounds of interactions even if the loss functions are chosen arbitrarily by an adversary, with the caveat that the transition function has to be fixed. This is because it has been shown that adversarial transition functions make no-regret learning impossible. Despite such impossibility results, in this work we develop algorithms that can handle both adversarial losses and adversarial transitions, with regret increasing smoothly in the degree of maliciousness of the adversary. More concretely, we first propose an algorithm that enjoys O(√T + C^P) regret, up to logarithmic factors, where C^P measures how adversarial the transition functions are and can be at most O(T). While this algorithm itself requires knowledge of C^P, we further develop a black-box reduction approach that removes this requirement. Moreover, we show that further refinements of the algorithm not only maintain the same regret bound, but also simultaneously adapt to easier environments (where losses are generated in a certain stochastically constrained manner, as in [Jin et al. 2021]) and achieve O(U + √(UC^L) + C^P) regret, up to logarithmic factors, where U is some standard gap-dependent coefficient and C^L is the amount of corruption on losses.",
keywords = "Machine Learning, Reinforcement Learning, Adversarial Reinforcement Learning, Learning Theory, No-Regret, Reinforcement Learning with Corruptions",
author = "Tiancheng Jin and Junyan Liu and Chloe Rouyer and William Chang and Chen-Yu Wei and Haipeng Luo",
year = "2023",
language = "English",
booktitle = "37th Conference on Neural Information Processing Systems (NeurIPS 2023)",

}

RIS (suitable for import to EndNote)

TY - GEN

T1 - No-Regret Online Reinforcement Learning with Adversarial Losses and Transitions

AU - Jin, Tiancheng

AU - Liu, Junyan

AU - Rouyer, Chloe

AU - Chang, William

AU - Wei, Chen-Yu

AU - Luo, Haipeng

PY - 2023

Y1 - 2023

N2 - Existing online learning algorithms for adversarial Markov Decision Processes achieve O(√T) regret after T rounds of interactions even if the loss functions are chosen arbitrarily by an adversary, with the caveat that the transition function has to be fixed. This is because it has been shown that adversarial transition functions make no-regret learning impossible. Despite such impossibility results, in this work we develop algorithms that can handle both adversarial losses and adversarial transitions, with regret increasing smoothly in the degree of maliciousness of the adversary. More concretely, we first propose an algorithm that enjoys O(√T + C^P) regret, up to logarithmic factors, where C^P measures how adversarial the transition functions are and can be at most O(T). While this algorithm itself requires knowledge of C^P, we further develop a black-box reduction approach that removes this requirement. Moreover, we show that further refinements of the algorithm not only maintain the same regret bound, but also simultaneously adapt to easier environments (where losses are generated in a certain stochastically constrained manner, as in [Jin et al. 2021]) and achieve O(U + √(UC^L) + C^P) regret, up to logarithmic factors, where U is some standard gap-dependent coefficient and C^L is the amount of corruption on losses.

AB - Existing online learning algorithms for adversarial Markov Decision Processes achieve O(√T) regret after T rounds of interactions even if the loss functions are chosen arbitrarily by an adversary, with the caveat that the transition function has to be fixed. This is because it has been shown that adversarial transition functions make no-regret learning impossible. Despite such impossibility results, in this work we develop algorithms that can handle both adversarial losses and adversarial transitions, with regret increasing smoothly in the degree of maliciousness of the adversary. More concretely, we first propose an algorithm that enjoys O(√T + C^P) regret, up to logarithmic factors, where C^P measures how adversarial the transition functions are and can be at most O(T). While this algorithm itself requires knowledge of C^P, we further develop a black-box reduction approach that removes this requirement. Moreover, we show that further refinements of the algorithm not only maintain the same regret bound, but also simultaneously adapt to easier environments (where losses are generated in a certain stochastically constrained manner, as in [Jin et al. 2021]) and achieve O(U + √(UC^L) + C^P) regret, up to logarithmic factors, where U is some standard gap-dependent coefficient and C^L is the amount of corruption on losses.

KW - Machine Learning

KW - Reinforcement Learning

KW - Adversarial Reinforcement Learning

KW - Learning Theory

KW - No-Regret

KW - Reinforcement Learning with Corruptions

M3 - Conference contribution

BT - 37th Conference on Neural Information Processing Systems (NeurIPS 2023)

ER -
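
Note

The bounds quoted in the abstract refer to the standard notion of regret in episodic adversarial MDPs. As a quick orientation, here is a minimal LaTeX sketch of that notion, assuming T episodes with adversarially chosen losses ℓ_t and transitions P_t; the corruption measure C^P written here is an illustrative assumption and may differ from the paper's exact definition.

% Let \ell_t(\pi) denote the expected cumulative loss of policy \pi in
% episode t, under the adversary's loss \ell_t and transition P_t.
% Regret compares the learner's policies \pi_1, ..., \pi_T against the
% best fixed policy in hindsight:
\[
  \mathrm{Reg}_T \;=\; \sum_{t=1}^{T} \ell_t(\pi_t) \;-\; \min_{\pi} \sum_{t=1}^{T} \ell_t(\pi).
\]
% An illustrative corruption measure (an assumption, not necessarily the
% paper's exact definition): the cumulative deviation of the adversary's
% transitions P_t from some fixed nominal transition P,
\[
  C^{P} \;=\; \sum_{t=1}^{T} \max_{s,a} \bigl\lVert P_t(\cdot \mid s,a) - P(\cdot \mid s,a) \bigr\rVert_1 .
\]
% The paper's first guarantee then reads, up to logarithmic factors,
% \mathrm{Reg}_T = O(\sqrt{T} + C^P): it recovers the uncorrupted
% O(\sqrt{T}) rate when C^P = 0 and degrades smoothly toward the trivial
% O(T) rate as C^P grows to \Theta(T).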