Ronald Ortner

Research output

2016

Published

    Optimal Behavior is Easier to Learn than the Truth

    Ortner, R., 2016, In: Minds and Machines. 26, 3, p. 243-252

Research output: Contribution to journal › Article › Research › peer-review

Published

    Pareto Front Identification from Stochastic Bandit Feedback

    Auer, P., Chiang, C.-K., Ortner, R. & Drugan, M., 2016, Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2016. p. 939-947 (JMLR Workshop and Conference Proceedings).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2015

Published

    Improved Regret Bounds for Undiscounted Continuous Reinforcement Learning

    Kailasam, L., Ortner, R. & Ryabko, D., 7 Jul 2015.

Research output: Contribution to conference › Poster › Research › peer-review

Published

    Forcing Subarrangements in Complete Arrangements of Pseudocircles

    Ortner, R., 2015, In: Journal of Computational Geometry. 6, 1, p. 235-248

Research output: Contribution to journal › Article › Research › peer-review

Published

    Improved Regret Bounds for Undiscounted Continuous Reinforcement Learning

Kailasam, L., Ortner, R. & Ryabko, D., 2015, Proceedings of the 32nd International Conference on Machine Learning.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2014

Published

    Regret Bounds for Restless Markov Bandits

    Ortner, R., Ryabko, D., Auer, P. & Munos, R., 2014, In: Theoretical Computer Science. 558, p. 62-76

Research output: Contribution to journal › Article › Research › peer-review

Published

    Selecting Near-Optimal Approximate State Representations in Reinforcement Learning

    Ortner, R., Maillard, O.-A. & Ryabko, D., 2014, Algorithmic Learning Theory - 25th International Conference, ALT 2014, Bled, October 8-10, 2014. p. 140-154

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2013

Published

    Adaptive Aggregation for Reinforcement Learning in Average Reward Markov Decision Processes

Ortner, R., 2013, In: Annals of Operations Research. 208, p. 321-336

Research output: Contribution to journal › Article › Research › peer-review

Published

    Competing with an Infinite Set of Models in Reinforcement Learning

Nguyen, P., Maillard, O.-A., Ryabko, D. & Ortner, R., 2013, JMLR Workshop and Conference Proceedings Volume 31: Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics. p. 463-471

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Published

Optimal Regret Bounds for Selecting the State Representation in Reinforcement Learning

Maillard, O.-A., Nguyen, P., Ortner, R. & Ryabko, D., 2013, JMLR Workshop and Conference Proceedings Volume 28: Proceedings of the 30th International Conference on Machine Learning. p. 543-551

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution