Title
Evaluating reinforcement learning methods for bundle routing control
Date Issued
01 June 2019
Access level
Metadata-only access
Resource Type
Conference paper
Author(s)
Velusamy G.
University of Houston
Publisher(s)
Institute of Electrical and Electronics Engineers Inc.
Abstract
Cognitive networking applications continuously adapt their actions according to observations of the environment and assigned performance goals. In this paper, one such cognitive networking application is evaluated, where the aim is to route bundles over parallel links with different characteristics. Several machine learning algorithms may be suitable for the task. This research tested different reinforcement learning methods as potential enablers for this application: Q-Routing, Double Q-Learning, an actor-critic Learning Automata implementing the S-model, and the Cognitive Network Controller (CNC), which uses a spiking neural network for Q-value prediction. All cases are evaluated under the same experimental conditions. In both a stable and a time-varying environment with respect to link quality, each routing method was evaluated with an identical number of bundle transmissions generated at a common rate. The measurements indicate that, in general, the CNC performs better than the other methods, followed by the Learning Automata. In the presented tests, Q-Routing and Double Q-Learning performed similarly to a non-learning round-robin approach. It is expected that these results will help guide and improve the design of this and future cognitive networking applications.
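To make the abstract's framing concrete, the sketch below shows a minimal, stateless Q-value link selector in the spirit of Q-Routing applied to parallel links. It is not the implementation evaluated in the paper; the link names, reward signal (negative delivery delay), and hyperparameters are hypothetical and chosen only for illustration.

```python
# Illustrative sketch only: an epsilon-greedy Q-value link selector in the spirit
# of Q-Routing over parallel links. NOT the paper's implementation; names, reward
# model, and hyperparameters are hypothetical.
import random


class QLinkSelector:
    """Keeps one Q-value per parallel link and updates it from observed rewards."""

    def __init__(self, links, alpha=0.1, epsilon=0.1):
        self.links = list(links)   # e.g. ["link_a", "link_b"] (hypothetical)
        self.alpha = alpha         # learning rate
        self.epsilon = epsilon     # exploration probability
        self.q = {link: 0.0 for link in self.links}

    def choose_link(self):
        # Explore occasionally; otherwise pick the link with the highest Q-value.
        if random.random() < self.epsilon:
            return random.choice(self.links)
        return max(self.links, key=lambda link: self.q[link])

    def update(self, link, reward):
        # Incremental update toward the latest observed reward
        # (here, negative bundle delivery delay) in a single-step,
        # stateless Q-learning formulation.
        self.q[link] += self.alpha * (reward - self.q[link])


if __name__ == "__main__":
    selector = QLinkSelector(["link_a", "link_b"])
    for _ in range(1000):
        link = selector.choose_link()
        # Hypothetical environment: link_a has lower average delay than link_b.
        delay = random.gauss(1.0, 0.2) if link == "link_a" else random.gauss(2.0, 0.4)
        selector.update(link, reward=-delay)
    print(selector.q)  # Q-values should come to favour the lower-delay link
```

The methods compared in the paper (Double Q-Learning, the S-model Learning Automata, and the CNC's spiking-neural-network Q-value predictor) replace or extend this basic value-update and link-selection loop in different ways.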
Language
English
OCDE Knowledge area
Computer science
Systems and communications engineering
Subjects
Scopus EID
2-s2.0-85075934108
ISBN of the container
9781728100487
Conference
2019 IEEE Cognitive Communications for Aerospace Applications Workshop, CCAAW 2019
Sources of information:
Directorio de Producción Científica
Scopus