Online Anticipatory Algorithms for Scheduling Problems

Publications: Thesis / Theses and habilitation treatises › Master's thesis

Standard

Online Anticipatory Algorithms for Scheduling Problems. / Erler, Simon.
2021.


Harvard

Erler, S 2021, 'Online Anticipatory Algorithms for Scheduling Problems', Dipl.-Ing., Montanuniversität Leoben (000).

APA

Erler, S. (2021). Online Anticipatory Algorithms for Scheduling Problems. [Masterarbeit, Montanuniversität Leoben (000)].

BibTeX - Download

@mastersthesis{7d09bd5015b14094bbc7fca466c86e93,
title = "Online Anticipatory Algorithms for Scheduling Problems",
abstract = "This work considers an online packet scheduling problem where packets arrive independently over a discrete time horizon and the goal is to minimize the cumulative weighted packet loss. The key challenge of this problem is that the arrival model is not known in advance and may be subject to dynamic changes. An important practical application of this setting is the scheduling of arriving IP packets in computer networks. The focus lies on the definition of online anticipatory algorithms that improve on the oblivious approach of the greedy algorithm when scheduling requests in an uncertain, dynamic environment. The concept of anticipation is developed in this context by incorporating information from the environment's history to predict certain aspects of the future. Two distinct approaches are presented within the scope of this work: reinforcement learning and online stochastic combinatorial optimization. The theoretical background of both concepts is discussed in detail and the performance of the developed algorithms is analysed on the online packet scheduling problem. The experimental analysis shows that online stochastic combinatorial optimization yields the smallest cumulative weighted loss in every setting in which the input distribution is modelled by Markov chains; however, it also requires by far the largest runtime per decision. To cope with a non-Markovian environment, a conservative variant of the Q-learning algorithm is first proposed that achieves a significant improvement over the greedy algorithm for the 2-class and 3-class problem. When more packet classes are present, the classical Q-learning algorithm was found to be the best approach; however, it was not able to outperform greedy for the n-packet problem with n >= 4 within the simulated time horizon.",
keywords = "Online, Packet, Scheduling, Anticipatory, Algorithms, Anticipation, Greedy, Reinforcement, Learning, Stochastic, Optimization, Markov, Q-Learning, Sampling, Dynamic, Decision, Oblivious, Expectation, Consensus, Regret, Baum, Welch, Online, Paket, Scheduling, Allokation, Netzwerkverkehr, IP-Pakete, Lernen, Stochastische, Optimierung, Sampling, Markov, Modell",
author = "Simon Erler",
note = "embargoed until null",
year = "2021",
language = "English",
school = "Montanuniversitaet Leoben (000)",

}

RIS (suitable for import to EndNote) - Download

TY - THES

T1 - Online Anticipatory Algorithms for Scheduling Problems

AU - Erler, Simon

N1 - embargoed until null

PY - 2021

Y1 - 2021

N2 - This work considers an online packet scheduling problem where packets arrive independently over a discrete time horizon and the goal is to minimize the cumulative weighted packet loss. The key challenge of this problem is that the arrival model is not known in advance and may be subject to dynamic changes. An important practical application of this setting is the scheduling of arriving IP packets in computer networks. The focus lies on the definition of online anticipatory algorithms that improve on the oblivious approach of the greedy algorithm when scheduling requests in an uncertain, dynamic environment. The concept of anticipation is developed in this context by incorporating information from the environment's history to predict certain aspects of the future. Two distinct approaches are presented within the scope of this work: reinforcement learning and online stochastic combinatorial optimization. The theoretical background of both concepts is discussed in detail and the performance of the developed algorithms is analysed on the online packet scheduling problem. The experimental analysis shows that online stochastic combinatorial optimization yields the smallest cumulative weighted loss in every setting in which the input distribution is modelled by Markov chains; however, it also requires by far the largest runtime per decision. To cope with a non-Markovian environment, a conservative variant of the Q-learning algorithm is first proposed that achieves a significant improvement over the greedy algorithm for the 2-class and 3-class problem. When more packet classes are present, the classical Q-learning algorithm was found to be the best approach; however, it was not able to outperform greedy for the n-packet problem with n >= 4 within the simulated time horizon.

AB - This work considers an online packet scheduling problem where packets arrive independently over a discrete time horizon and the goal is to minimize the cumulative weighted packet loss. The key challenge of this problem is that the arrival model is not known in advance and may be subject to dynamic changes. An important practical application of this setting is the scheduling of arriving IP packets in computer networks. The focus lies on the definition of online anticipatory algorithms that improve on the oblivious approach of the greedy algorithm when scheduling requests in an uncertain, dynamic environment. The concept of anticipation is developed in this context by incorporating information from the environment's history to predict certain aspects of the future. Two distinct approaches are presented within the scope of this work: reinforcement learning and online stochastic combinatorial optimization. The theoretical background of both concepts is discussed in detail and the performance of the developed algorithms is analysed on the online packet scheduling problem. The experimental analysis shows that online stochastic combinatorial optimization yields the smallest cumulative weighted loss in every setting in which the input distribution is modelled by Markov chains; however, it also requires by far the largest runtime per decision. To cope with a non-Markovian environment, a conservative variant of the Q-learning algorithm is first proposed that achieves a significant improvement over the greedy algorithm for the 2-class and 3-class problem. When more packet classes are present, the classical Q-learning algorithm was found to be the best approach; however, it was not able to outperform greedy for the n-packet problem with n >= 4 within the simulated time horizon.

KW - Online

KW - Packet

KW - Scheduling

KW - Anticipatory

KW - Algorithms

KW - Anticipation

KW - Greedy

KW - Reinforcement

KW - Learning

KW - Stochastic

KW - Optimization

KW - Markov

KW - Q-Learning

KW - Sampling

KW - Dynamic

KW - Decision

KW - Oblivious

KW - Expectation

KW - Consensus

KW - Regret

KW - Baum

KW - Welch

KW - Online

KW - Paket

KW - Scheduling

KW - Allokation

KW - Netzwerkverkehr

KW - IP-Pakete

KW - Lernen

KW - Stochastische

KW - Optimierung

KW - Sampling

KW - Markov

KW - Modell

M3 - Master's Thesis

ER -
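
The abstract describes an oblivious greedy baseline for weighted packet loss. As a minimal toy sketch of that setting (not the thesis's implementation; the batch-arrival model, the `lifetime` parameter, and all names here are illustrative assumptions):

```python
# Toy sketch of the problem described in the abstract: packets of different
# weights arrive over discrete time steps, one packet can be transmitted per
# step, and a packet still queued after `lifetime` steps is lost at its weight.
# This is an assumed simplified model, not the thesis's setup.

def greedy_weighted_loss(arrivals, lifetime=2):
    """Cumulative weighted loss of the oblivious greedy policy.

    arrivals: list of per-step batches, each a list of packet weights.
    """
    queue = []  # pending packets as (weight, expiry_step)
    loss = 0
    for t, batch in enumerate(arrivals):
        for w in batch:
            queue.append((w, t + lifetime))
        # packets that waited too long are dropped and counted as loss
        loss += sum(w for w, e in queue if e <= t)
        queue = [(w, e) for (w, e) in queue if e > t]
        if queue:
            # greedy rule: transmit the heaviest pending packet,
            # with no anticipation of future arrivals
            queue.remove(max(queue, key=lambda p: p[0]))
    return loss

# Three packets arrive at once but only two transmission slots exist before
# they expire, so greedy keeps weights 3 and 2 and loses the weight-1 packet.
print(greedy_weighted_loss([[1, 2, 3], [], []], lifetime=2))  # -> 1
```

An anticipatory algorithm in the sense of the abstract would replace the greedy rule with a decision informed by predicted future arrivals, e.g. samples from a Markov-chain model of the input distribution.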