Adaptive Algorithms for Meta-Induction
Research output: Contribution to journal › Article › Research › peer-review
In: Journal for general philosophy of science = Zeitschrift für allgemeine Wissenschaftstheorie, Vol. 54.2023, No. 3, 07.10.2023, p. 433–450.
RIS (suitable for import to EndNote)
TY - JOUR
T1 - Adaptive Algorithms for Meta-Induction
AU - Ortner, Ronald
N1 - Publisher Copyright: © 2022, The Author(s).
PY - 2023/10/7
Y1 - 2023/10/7
N2 - Work in online learning has traditionally considered induction-friendly (e.g., stochastic with a fixed distribution) and induction-hostile (adversarial) settings separately. While algorithms like Exp3 that were developed for the adversarial setting are applicable to the stochastic setting as well, the guarantees that can be obtained are usually worse than those available for algorithms specifically designed for stochastic settings. Only recently has there been increasing interest in algorithms that give (near-)optimal guarantees with respect to the underlying setting, even when its nature is unknown to the learner. In this paper, we review various online learning algorithms that are able to adapt to the hardness of the underlying problem setting. While our focus lies on the application of adaptive algorithms as meta-inductive methods that combine given base methods, with respect to theoretical properties we are also interested in guarantees that go beyond a comparison to the best fixed base learner.
AB - Work in online learning has traditionally considered induction-friendly (e.g., stochastic with a fixed distribution) and induction-hostile (adversarial) settings separately. While algorithms like Exp3 that were developed for the adversarial setting are applicable to the stochastic setting as well, the guarantees that can be obtained are usually worse than those available for algorithms specifically designed for stochastic settings. Only recently has there been increasing interest in algorithms that give (near-)optimal guarantees with respect to the underlying setting, even when its nature is unknown to the learner. In this paper, we review various online learning algorithms that are able to adapt to the hardness of the underlying problem setting. While our focus lies on the application of adaptive algorithms as meta-inductive methods that combine given base methods, with respect to theoretical properties we are also interested in guarantees that go beyond a comparison to the best fixed base learner.
KW - Multi-armed bandit problem
KW - Online learning
KW - Prediction with expert advice
KW - Regret
UR - http://www.scopus.com/inward/record.url?scp=85139520747&partnerID=8YFLogxK
U2 - 10.1007/s10838-021-09590-2
DO - 10.1007/s10838-021-09590-2
M3 - Article
VL - 54.2023
SP - 433
EP - 450
JO - Journal for general philosophy of science = Zeitschrift für allgemeine Wissenschaftstheorie
JF - Journal for general philosophy of science = Zeitschrift für allgemeine Wissenschaftstheorie
IS - 3
ER -
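The abstract mentions Exp3, the classical exponential-weights algorithm for adversarial bandits. As an illustration only (not taken from the paper itself), the following is a minimal sketch of standard Exp3 run on a toy stochastic two-armed bandit; the reward means, exploration rate `gamma`, and horizon are arbitrary choices for the example.

```python
import math
import random

def exp3(n_arms, rewards, gamma=0.1, horizon=5000):
    """Standard Exp3 for bandits with rewards in [0, 1].

    rewards(t, arm) -> observed reward for pulling `arm` at round t.
    Returns the total reward accumulated over the horizon.
    """
    weights = [1.0] * n_arms
    total_reward = 0.0
    for t in range(horizon):
        total_w = sum(weights)
        # Mix exponential weights with uniform exploration (rate gamma).
        probs = [(1 - gamma) * w / total_w + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        r = rewards(t, arm)
        total_reward += r
        # Importance-weighted reward estimate keeps the update unbiased.
        estimate = r / probs[arm]
        weights[arm] *= math.exp(gamma * estimate / n_arms)
    return total_reward

# Toy stochastic bandit: arm 1 has the higher mean reward (0.7 vs 0.3).
random.seed(0)
means = [0.3, 0.7]
horizon = 5000
total = exp3(2, lambda t, a: 1.0 if random.random() < means[a] else 0.0,
             gamma=0.1, horizon=horizon)
avg = total / horizon
print(avg)  # average reward approaches the better arm's mean of 0.7
```

This illustrates the point made in the abstract: Exp3 makes no stochastic assumptions, so it works here, but it keeps exploring at rate `gamma` forever, whereas an algorithm designed for the stochastic setting (e.g. UCB) would achieve tighter guarantees on this fixed-distribution instance.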