
Dialnet


Abstract of Machine Learning Assisted QoT Estimation for Optical Networks Optimization

Ankush Mahajan

  • The tremendous increase in data traffic has spurred a rapid evolution of optical networks toward a reliable, cost-effective, and scalable infrastructure. To meet these requirements, network operators are pushing toward disaggregation. Network disaggregation decouples the traditional monolithic optical transport hardware into independent, interoperable functional blocks. This enables a relatively open market in which network operators and owners can choose best-in-class equipment from different vendors at better prices, overcoming vendor lock-in. In this multi-vendor disaggregated context, the deployed equipment affects the physical layer and the overall network behavior, increasing performance uncertainty compared to a traditional single-vendor aggregated approach.

    For effective optical network planning, operation, and optimization, it is necessary to estimate the Quality of Transmission (QoT) of the connections. Network designers require accurate and fast QoT estimation for services to be established in a future or existing network. Typically, QoT estimation is performed with a Physical Layer Model (PLM) embedded in the QoT estimation tool, or Qtool. A design margin is generally included in the Qtool to account for modeling and parameter inaccuracies and to assure acceptable performance. PLM accuracy is therefore critical: modeling errors translate into a higher design margin, which in turn translates into wasted capacity or unneeded regeneration. Recently, monitoring and machine learning (ML) techniques have been proposed to account for actual network conditions and improve PLM accuracy in single-vendor networks, which in turn yields more accurate QoT estimation.

    The first part of the thesis focuses on ML-assisted accurate QoT estimation techniques. In this regard, we developed a model that combines monitoring information from an operating network with supervised ML regression techniques to learn the network conditions. In particular, we model the penalties generated by (i) the EDFA (erbium-doped fiber amplifier) gain ripple effect and (ii) filter spectral-shape uncertainties at ROADM (reconfigurable optical add-drop multiplexer) nodes.
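
    A minimal sketch of the kind of supervised regression the abstract describes, using synthetic data (the features, coefficients, and noise level are illustrative assumptions, not the thesis's actual model or measurements):

    ```python
    import numpy as np

    # Hypothetical training set from an operating network: each sample pairs
    # path features (number of ROADM filters traversed, channel spectral
    # offset in GHz) with a monitored penalty in dB. All values are synthetic.
    rng = np.random.default_rng(0)
    n_filters = rng.integers(1, 10, size=200)
    offset_ghz = rng.uniform(-10, 10, size=200)
    true_penalty = 0.15 * n_filters + 0.02 * offset_ghz**2   # assumed ground truth
    penalty_db = true_penalty + rng.normal(0, 0.05, size=200)  # monitoring noise

    # Supervised regression: least-squares fit of a polynomial feature model
    # (quadratic in the spectral offset, as filtering penalties grow off-center).
    X = np.column_stack([np.ones_like(offset_ghz), n_filters, offset_ghz**2])
    coef, *_ = np.linalg.lstsq(X, penalty_db, rcond=None)

    def predict_penalty(filters, offset):
        """Estimated filtering/ripple penalty (dB) for a candidate lightpath."""
        return coef[0] + coef[1] * filters + coef[2] * offset**2

    print(round(predict_penalty(5, 0.0), 2))  # close to 0.15 * 5 = 0.75
    ```

    In practice the feature set would come from telemetry (amplifier gains per channel, filter cascades per path) rather than a synthetic generator, but the fit-then-predict structure is the same.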

    Furthermore, to improve Qtool estimation accuracy in multi-vendor networks, we propose PLM extensions. In particular, we introduce four transponder (TP) vendor-dependent performance factors that capture the performance variations of multi-vendor TPs. To verify the potential improvement, we studied two use cases with the proposed PLM: (i) optimizing the TPs' launch power, and (ii) reducing the design margin in incremental planning.
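
    One way such vendor-dependent factors could enter a PLM is as per-vendor SNR offsets applied to the model's estimate; the vendor names, offsets, and the single-offset form below are illustrative assumptions, not the four factors defined in the thesis:

    ```python
    # Hypothetical per-vendor transponder (TP) performance factors: each vendor
    # shifts the PLM's estimated GSNR by a calibrated back-to-back offset (dB).
    VENDOR_SNR_OFFSET_DB = {"vendorA": 0.0, "vendorB": -0.4, "vendorC": 0.7}

    def effective_gsnr_db(plm_gsnr_db: float, tx_vendor: str, rx_vendor: str) -> float:
        """PLM GSNR corrected by the TP vendor-dependent factors at both ends."""
        return (plm_gsnr_db
                + VENDOR_SNR_OFFSET_DB[tx_vendor]
                + VENDOR_SNR_OFFSET_DB[rx_vendor])

    def design_margin_db(effective_gsnr: float, required_gsnr: float) -> float:
        """Residual margin; a more accurate PLM allows this to be set lower."""
        return effective_gsnr - required_gsnr
    ```

    The point of such factors is that, once the vendor-dependent bias is captured in the model, the blanket design margin no longer has to absorb it.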

    Finally, the last part of this thesis investigates and addresses the accuracy limitations of the Qtool in dynamic optimization tasks. To keep models aligned with real conditions, the digital twin (DT) concept is gaining significant attention in the research community. A DT is more than a model of the system; it includes an evolving set of data and a means to dynamically adjust the model. Based on DT fundamentals, we devised and implemented an iterative closed control loop that, after several intermediate iterations of the optimization algorithm, configures the network, monitors it, and retrains the Qtool. For the retraining we adopt an ML-based nonlinear regression fitting technique. The key advantage of this novel scheme is that, while the network operates, the Qtool parameters are retrained on the monitored information with the adopted ML model, so the Qtool tracks the projected states intermediately calculated by the algorithm. This reduces the optimization time compared to directly probing and monitoring the network at every step.
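
    A toy sketch of such a closed control loop, under assumptions: the one-parameter SNR model, the simulated "real" network, and the grid-search optimization step are invented placeholders standing in for the thesis's Qtool, monitors, and optimization algorithm:

    ```python
    import numpy as np

    def qtool_snr(power_dbm, a):
        # Assumed concave QoT model: linear gain minus a nonlinear penalty term.
        return a + power_dbm - 0.05 * power_dbm**2

    def network_snr(power_dbm):
        # Simulated "real" network the twin must track (true a = 12.5,
        # unknown to the Qtool).
        return qtool_snr(power_dbm, 12.5)

    a_est = 10.0                     # initial, inaccurate Qtool parameter
    for _ in range(3):               # closed control loop iterations
        # 1) Optimize: pick the launch power maximizing the *modeled* SNR.
        powers = np.linspace(-2, 8, 101)
        p_opt = powers[np.argmax(qtool_snr(powers, a_est))]
        # 2) Configure + monitor: observe the real SNR at that operating point.
        observed = network_snr(p_opt)
        # 3) Retrain: refit the Qtool parameter to the monitored data
        #    (closed form here; nonlinear regression in the general case).
        a_est = observed - (p_opt - 0.05 * p_opt**2)

    print(round(a_est, 2))  # converges to the true parameter 12.5
    ```

    Each loop iteration touches the network only once, at the candidate operating point, which is why this scheme is faster than probing the network for every state the optimization algorithm explores.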

