MSc and BSc Theses

Overview of Master's and Bachelor's theses within the bwNET2.0 project

18. Bewertung der Messleistung und des Ressourcenverbrauchs von Flow-Exportern und Capture Technologien für Netzwerke über 100 Gbit/s

Pascal Kuppler | 2025 | B.Sc. Thesis | Supervisors: Gabriel Paradzik, Benjamin Steinert | Eberhard Karls Universität Tübingen

Networks with data rates in the tens to hundreds of gigabits per second pose substantial challenges for classical software flow exporters: packet capture, flow caching, low-loss export paths, and reliable timing compete for CPU, memory, and I/O. Vendor claims about performance limits (10-400 Gbit/s) often remain vague because transparent, reproducible measurement frameworks are missing. This is where the thesis comes in: the existing FlowTest framework was extended to make several widely used exporters (ipfixprobe, nProbe/Cento, YAF) comparable under different capture technologies (libpcap, PF_RING, PF_RING ZC, DPDK, raw) using synthetic stress profiles and realistic traffic traces. An event-based metric system (FlowStart, FlowEnd, Export, OnePacketFlow) and scalable evaluation pipelines (chunking, parallel aggregation) enable fine-grained analyses of export rate, data rate, packet loss, and resource consumption. The results show that the dominant scaling lever is the capture technology, not the internal flow logic. Cento-ZC and ipfixprobe-DPDK reach mean data rates of 50–67 Gb/s (peaks near 90 Gb/s) in realistic scenarios, while ipfixprobe delivers the highest flow export rate (1.7M flows/s) under synthetic one-packet stress. YAF confirms stable capture up to about 10 Gb/s but exhibits batch-like export characteristics.
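
The event-based metric system mentioned above can be pictured as a counter per event type that exporters feed during a benchmark run. The following is a minimal sketch under that assumption; the class and method names are illustrative and not part of the FlowTest framework's actual API.

```python
from dataclasses import dataclass, field

# Event types named in the abstract; anything an exporter does during a
# benchmark run is recorded as one of these events.
EVENT_TYPES = ("FlowStart", "FlowEnd", "Export", "OnePacketFlow")

@dataclass
class MetricCollector:
    counts: dict = field(default_factory=lambda: {e: 0 for e in EVENT_TYPES})

    def record(self, event: str) -> None:
        # count one occurrence of a known event type
        if event not in self.counts:
            raise ValueError(f"unknown event type: {event}")
        self.counts[event] += 1

    def export_rate(self, window_seconds: float) -> float:
        # flows exported per second over the observation window
        return self.counts["Export"] / window_seconds
```

Derived metrics such as the flow export rate then fall out of the counters and the length of the observation window.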

17. Implementierung und Evaluierung einer Erweiterung für den YAF Flowmeter zur hochperformanten Erfassung von Paketmetadaten

Christian Hageloch | 2025 | B.Sc. Thesis | Supervisors: Gabriel Paradzik, Benjamin Steinert | Eberhard Karls Universität Tübingen

Monitoring high-speed networks is becoming increasingly relevant. With sufficiently large traffic volumes, meaningful classification of data via conventional packet-based methods is no longer viable, as these methods lack the necessary performance. Moreover, network traffic is increasingly encrypted, which diminishes the value of legacy approaches such as deep packet inspection (DPI). Monitoring is therefore shifting toward flow-based systems, in which packets are grouped into their corresponding flows. A drawback of this approach is that certain packet metadata are lost, metadata that are particularly relevant for network security analysis. This thesis therefore proposes, in line with current research, an extension to the YAF flow meter for high-performance capture of packet metadata. Packet timestamps and packet sizes of the first N packets are recorded and appended to the flow meter's IPFIX export. With a performance loss of between 30 and 50 percent, depending on the number of exported flows, the extension remains highly performant. To further support deployment in high-speed networks, the extension also includes the option to export hardware timestamps.
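
The core idea, keeping timestamps and sizes of only the first N packets per flow, can be sketched as follows. This is a simplified stand-in for the per-flow metadata the extension appends to YAF's IPFIX export; the function and field names are illustrative.

```python
def collect_first_n(packets, n=10):
    """Record timestamps and sizes of the first n packets of each flow.

    `packets` is an iterable of (flow_key, timestamp, size) tuples, a
    simplified abstraction of captured traffic.
    """
    flows = {}
    for key, ts, size in packets:
        meta = flows.setdefault(key, {"timestamps": [], "sizes": []})
        if len(meta["sizes"]) < n:  # only the first n packets are kept
            meta["timestamps"].append(ts)
            meta["sizes"].append(size)
    return flows
```

Bounding the per-flow state to N entries is what keeps the memory and export overhead predictable at high packet rates.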

16. Entwicklung und Evaluation skalierbarer Methoden zur zeitreihenbasierten Modellierung, Vorhersage und Anomalieerkennung des Netzverhaltens von Campus-Netzen

Josef Müller | 2025 | M.Sc. Thesis | Supervisors: Benjamin Steinert, Gabriel Paradzik | Eberhard Karls Universität Tübingen

In 2024, six German universities were affected by cyberattacks. Universities are difficult to secure because of their creative and dynamic research environments, which underscores the need for efficient and scalable methods for anomaly detection and prediction of network behavior. This thesis investigates how such methods can be scaled with a growing number of monitored systems and a growing volume of data. To this end, flow-based data are collected from the network of the University of Tübingen and treated as time series. The goal of the thesis is not a comprehensive evaluation of time-series-based anomaly detection and forecasting models, since numerous studies already exist in this area. Instead, it examines how existing models can be made scalable with increasing data volume and number of systems. Scalable methods are characterized by resource efficiency on the one hand and by sublinear growth with respect to the data volume on the other. In this thesis, time series are grouped to reduce the number of forecasting models required. Instead of training a separate model for each time series, forecasting models are trained on the averages of grouped time series and then used to predict the original series. In addition, forecasting models are trained for anomaly detection, and ensemble methods are employed to improve the reliability of the resulting statements.
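
The grouping idea, one model per group trained on the group's average series rather than one model per series, can be sketched like this. The forecasting "model" below is a deliberately trivial stand-in (predicting the historical mean); the thesis's actual models and grouping method are not specified here.

```python
def mean_series(group):
    # element-wise average of equally long time series in one group
    n = len(group)
    return [sum(vals) / n for vals in zip(*group)]

def naive_forecast(series, horizon):
    # illustrative stand-in for a real forecasting model:
    # predict the historical mean for every future step
    level = sum(series) / len(series)
    return [level] * horizon

def grouped_forecast(groups, horizon):
    # one model per group, trained on the group's average series,
    # instead of one model per individual time series
    return {name: naive_forecast(mean_series(members), horizon)
            for name, members in groups.items()}
```

The number of models then grows with the number of groups, not with the number of monitored systems, which is the source of the sublinear scaling behavior.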

15. Detektion volumetrischer DDoS-Angriffe durch aggregiertes Monitoring verteilter Eintrittsknoten

Tillmann Mörch | 2025 | M.Sc. Thesis | Supervisors: Samuel Kopmann, Timon Krack | Institute of Telematics, Karlsruhe Institute of Technology

Volumetric distributed denial-of-service (DDoS) attacks pose an inherent threat by deliberately overloading resources to restrict the availability of their targets. HollywooDDoS is a detection approach that reduces monitoring to a fixed number of source and destination subnets and performs no per-flow traffic analysis. Previous work on HollywooDDoS considered only a single ingress node receiving the entire network traffic. In larger networks, however, multiple ingress nodes must be assumed, with traffic split across them. As a consequence, a single ingress node has only an incomplete view of the network and receives only part of the attack traffic. In this master's thesis, an extended detection approach for HollywooDDoS was developed. It merges the traffic data of distributed ingress nodes into an overall picture and is evaluated against several approaches that classify the individual nodes in isolation.
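
Since monitoring is reduced to a fixed number of source and destination subnets, each ingress node's partial view can be represented as a matrix of traffic volume per subnet pair, and the global view recovered by summing the matrices. A minimal sketch under that assumption (the thesis's actual data representation may differ):

```python
def aggregate_views(node_matrices):
    """Element-wise sum of per-ingress-node traffic matrices.

    Each ingress node reports an S x D matrix of observed traffic from
    source subnet s to destination subnet d; summing the partial views
    restores the overall picture that a single ingress node receiving
    all traffic would have seen.
    """
    rows, cols = len(node_matrices[0]), len(node_matrices[0][0])
    total = [[0] * cols for _ in range(rows)]
    for matrix in node_matrices:
        for i in range(rows):
            for j in range(cols):
                total[i][j] += matrix[i][j]
    return total
```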

14. Evaluating the Impact of Subsecond Traffic Engineering on Congestion Control

Antonius Idvorean | 2025 | M.Sc. Thesis | Supervisors: Benjamin Schichtholz, Michael König | Institute of Telematics, Karlsruhe Institute of Technology

Modern data center networks are often connected over long distances via wide area networks (WANs). Such inter-data center WANs must simultaneously handle different traffic patterns—from constant background traffic, such as for backups, to short-lived, burst-like data traffic. Two key mechanisms here are traffic engineering (TE), which distributes data traffic globally to avoid bottlenecks, and congestion control (CC), which prevents overloads at the transport level. While TE usually operates in minute or hour intervals, CC reacts within a few milliseconds. Current research shows that TE can react faster to traffic fluctuations in the sub-second range. However, this rapid adjustment can interfere with the response of CC and lead to instability. As part of this master's thesis, a simulation-based framework was developed to investigate the interactions between TE and CC at the packet level. Experiments were conducted using various topologies to analyze the effects of TE in the sub-second range on the performance of modern congestion control methods. The investigation revealed that although TE in the sub-second range offers advantages for latency-critical traffic, it can also impair the stability of delay-based congestion control methods and cause oscillations.

13. Performance Analysis of Alternative QUIC Implementations

Umut Sezen | 2025 | B.Sc. Thesis | Supervisor: Michael König | Institute of Telematics, Karlsruhe Institute of Technology

This thesis evaluates the performance of alternative QUIC implementations, specifically the linux-quic kernel module and s2n-quic with eXpress Data Path (XDP) support, to determine if moving the protocol out of user space can mitigate its inherent throughput limitations. Experimental results from 10 Gbit/s and 100 Gbit/s testbeds indicate that while kernel-integrated and bypass methods offer efficiency gains, QUIC's heavy packet processing and mandatory cryptography remain more significant bottlenecks than context switching alone. Specifically, the XDP version of s2n-quic demonstrated superior CPU efficiency, achieving a higher theoretical throughput per CPU cycle than the standard user-space version, yet both fell short of TCP's performance at ultra-high speeds due to a lack of protocol-specific hardware offloading. Furthermore, the study found that hardware architecture and clock speeds heavily influence results, as older processors struggle to handle the cryptographic burden even when the networking stack is optimized. Ultimately, the research concludes that while alternative implementations reduce overhead, reaching line-rate speeds requires the development of dedicated, QUIC-specific kernel offloading techniques.

12. Intent-Based Configuration of Campus Firewalls with LLMs

Jonas Weßner | 2025 | M.Sc. Thesis | Supervisors: Prof. Dr. Björn Scheuermann, Prof. Dr. Frank Kargl | Technische Universität Darmstadt

Campus firewalls are essential for securing internal network segments and controlling access to sensitive resources. Traditionally, firewall policy management in such environments relies on manual processes, in which user requests are interpreted and translated into technical rules by specialized IT staff. This approach is time-consuming and difficult to scale. Intent-Based Networking (IBN) offers a promising alternative, where high-level, goal-oriented instructions are used to directly configure the network. In this thesis, we design an intent-based campus firewall system that leverages generative Large Language Models (LLMs) to translate natural-language user requests into firewall configuration updates. In a survey with network administrators, we identify several requirements for an intent-based campus firewall system, one of which is the ability to use institution-specific knowledge to interpret vague end-user requests. Based on these findings, we propose a modular framework that includes an LLM-based intent translation module for converting vague end-user requests into structured representations, as well as formal algorithms for updating firewall configurations accordingly. To evaluate our design, we construct a novel dataset for intent translation in a university network, developed in consultation with domain experts. The dataset incorporates institutional knowledge through a dedicated knowledge base, enabling the system to resolve complex, context-sensitive requests. A comprehensive evaluation shows that, with appropriate system design and model tuning, user requests of varying complexity and abstraction can be interpreted with over 95% accuracy. Our results demonstrate the potential of LLMs to bridge the gap between human-friendly communication and precise network policy specification, laying the foundation for more autonomous and user-centric firewall management.
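
The structured representations that the intent translation module produces could, for example, look like typed firewall-rule objects that formal update algorithms then apply to the configuration. The sketch below is purely illustrative; the field names and update logic are assumptions, not the thesis's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FirewallRule:
    # hypothetical structured representation of a translated intent
    action: str   # "allow" or "deny"
    src: str      # source network, e.g. "10.12.0.0/16"
    dst: str      # destination network or host
    port: int
    proto: str    # "tcp" or "udp"

def apply_intent(ruleset: list, rule: FirewallRule) -> list:
    """Apply one translated intent to a ruleset, skipping exact duplicates."""
    if rule in ruleset:
        return ruleset
    return ruleset + [rule]
```

Separating the LLM's output (a structured rule) from the deterministic update step keeps the configuration change auditable even when the translation itself is probabilistic.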

11. Anonymization of NetFlow-based Monitoring

Paul Prechtel | 2025 | M.Sc. Thesis | Supervisors: Prof. Dr. Frank Kargl, Prof. Dr. Franz J. Hauck | Universität Ulm

Network monitoring with NetFlow and IP Flow Information Export (IPFIX) flow records is ubiquitous among ISPs for gaining insight into the network, for example to determine link utilization, popular remote ASNs, or the frequency of specific failure situations. The records also provide a lightweight approximation of real network usage. However, few deployments use privacy-preserving measures beyond IP address pseudonymization, and prior research approached IPFIX flow record anonymization by applying hand-written anonymization rules to each data field. Unfortunately, this approach does not reliably protect the privacy of end users, as deanonymization attacks have repeatedly shown. Although recent research uses differential privacy for its mathematically guaranteed worst-case bound on information disclosure, it applies complicated variants with questionable usability for the ways ISPs typically use this data. To address this problem, this thesis applies the relatively easy-to-understand bounded-sum differential privacy method to aggregate byte statistics already in use at a medium-sized ISP. The application of differential privacy is integrated into a modified Prometheus exporter, making it effectively a drop-in replacement for the existing software stack. It adds Laplace or Gaussian noise to each byte value on each Prometheus export call, and the amount of noise can be controlled by varying the epsilon, delta, upper byte threshold, and scraping interval parameters. The results are unfortunately underwhelming: too much noise leads to unsatisfactory usability. Nonetheless, the approach has the advantages of being explainable, easily integrable into existing IPFIX statistics pipelines, and allowing these statistics to be stored and published without fear of privacy leakage or legal trouble.
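
The bounded-sum mechanism can be sketched in a few lines: clip each contribution to an upper byte threshold (which bounds the sensitivity of the sum), then add Laplace noise scaled by threshold/epsilon. Parameter names are illustrative, not the thesis's exact API, and the Gaussian variant with a delta parameter is analogous.

```python
import math
import random

def dp_byte_sum(byte_values, clip, epsilon, rng=random):
    """Bounded-sum differential privacy for aggregated byte counters.

    Clipping each value to `clip` bytes bounds the influence any single
    contribution can have on the sum; Laplace noise with scale
    clip/epsilon then masks that influence.
    """
    bounded = sum(min(v, clip) for v in byte_values)
    scale = clip / epsilon
    # sample Laplace(0, scale) via the inverse CDF
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return bounded + noise
```

The trade-off described in the abstract is visible here: a large clip threshold or small epsilon inflates the noise scale, which is what made the exported statistics hard to use in practice.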

10. Design of a Technical and Operational Concept for Holistic Information Security in a Biotech Laboratory

Tobias Ziefle | 2024 | M.Sc. Thesis | Supervisors: Dr. Georg Wolff, Benjamin Steinert | Eberhard Karls Universität Tübingen

The biotechnology industry increasingly relies on interconnected, data-driven systems to accelerate research, drug development, and clinical trials. This dependence exposes biotech organizations to significant cyber security threats, particularly given the high value of sensitive patient data and intellectual property. This thesis presents a comprehensive information security framework tailored to the unique operational and regulatory requirements of biotech laboratories, with a focus on protecting legacy laboratory devices. The proposed security framework builds on the Zero Trust approach and is structured in five sections: a continuous Information Security Life Cycle, Identity and Access Management, Endpoint Protection, Network Security, and Backup and Disaster Recovery. Each component is specifically designed to safeguard valuable data assets and ensure operational resilience in an environment with limited IT resources. A SIEM system based on the Elastic Stack is implemented as part of the Endpoint Protection strategy to address vulnerabilities in legacy laboratory equipment. This system enables real-time threat detection and response, enhanced by Cyber Threat Intelligence (CTI) integration for enriched data analysis. A demonstration of the SIEM's capabilities in detecting a Meterpreter-based malware attack showcases the practical effectiveness of the security framework.

9. Design and Implementation of a Zero Trust User-Agent Policy Enforcement Point

Janek Schoffit | 2024 | M.Sc. Thesis | Supervisors: Prof. Dr. Michael Menth, Prof. Dr. Frank Kargl | Eberhard Karls Universität Tübingen

This thesis focuses on enhancing client security within a zero trust architecture by designing a user-agent policy enforcement point capable of managing client-side processes and regulating their network requests, thereby mitigating the risk of compromise to both the client and the network. This requires a strategy that prevents interference between processes and controls each network request independently, enhancing the overall security of the architecture. Previous research primarily addresses the network architecture through the development of zero trust service function chaining without explicitly addressing client security, a gap this thesis aims to fill. Compartmentalization through isolation technologies is used to prevent process and storage interference while enabling network segmentation, making it possible to authenticate and authorize network requests individually. A design concept is devised and evaluated through a proof of concept that implements a generalized framework for communicating with isolation technologies and enforces network policies via a proxy based on compartment and user identification. This approach minimizes the attack surface for malicious processes targeting the client or propagating within the network, as the user-agent policy enforcement point can manage compartment lifecycles based on request behaviour, enhancing the overall security of the zero trust architecture.

8. Design and Implementation of a Modular High-Performance Threat Detection Pipeline using IPFIX Data

Janik Steegmüller | 2024 | M.Sc. Thesis | Supervisors: Gabriel Paradzik, Benjamin Steinert | Eberhard Karls Universität Tübingen

This thesis introduces a way of scaling the open-source intrusion detection system 'Maltrail' using the Internet Protocol Flow Information Export (IPFIX) protocol and embeds Maltrail into a performant, extensible threat detection pipeline. With the increasing intensity and complexity of cyberattacks, detecting malicious activity in private networks is becoming more important. Network intrusion detection systems serve this purpose by monitoring network traffic and flagging malicious or unusual behavior. Most open-source Intrusion Detection System (IDS) software is sufficient for small-scale networks but lacks the scaling capabilities required for enterprise or university networks. In the course of this thesis, Malfix is therefore developed, enabling Maltrail to analyze IPFIX flows and leverage their scaling capabilities. For this, the IPFIX protocol is extended to support the transfer of Maltrail threat detection information. Additionally, an intrusion detection pipeline based on an event streaming architecture is designed and implemented with Malfix as its core IDS. A profiler is used to detect performance bottlenecks in Malfix, and improvements are implemented to overcome them. Furthermore, the performance of Malfix is evaluated in terms of processing speed and efficiency through various benchmarks, demonstrating its suitability for potential production use at the University of Tübingen.

7. Centralized Detection of Shared Bottlenecks Between Competing Network Flows

Wilhelm Steffen | 2024 | B.Sc. Thesis | Supervisor: Michael König | Institute of Telematics, Karlsruhe Institute of Technology

This thesis addresses the challenge of proactively detecting shared bottlenecks in data centers to improve Coordinated Congestion Control (C3). Traditional end-to-end algorithms often react slowly to network changes because they rely on measuring performance characteristics only after congestion occurs, which is problematic for the short-lived flows typical in modern data centers. To mitigate this, this thesis presents a centralized algorithm that utilizes full knowledge of network topology and flow paths to identify competition for link capacity before packet loss occurs. The design follows a three-step process: first, it enumerates competing flows by identifying those sharing physical links; second, it calculates expected flow rate allocations based on either Proportional or Max-Min fairness schemes; and finally, it determines shared bottlenecks by identifying specific limiting links for each flow and the other flows they are shared with. Evaluation results on random, k-pod fat tree, and fabric topologies demonstrate that the algorithm implementation achieves a linear runtime trend for Proportional fairness, making it highly scalable as it depends on the number of flows rather than the total network size. While Max-Min fairness has a theoretical quadratic worst-case complexity, experiments on structured data center topologies revealed much more efficient sub-linear or linear performance trends. The findings suggest that centralized bottleneck detection is a viable approach for real-time congestion steering, though performance could be further optimized by transitioning from Python to a compiled language and implementing native multithreading.
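
The Max-Min fairness step of the algorithm can be illustrated with the standard progressive-filling procedure: raise all flow rates equally until some link saturates, freeze the flows crossing that link (it is their limiting link), and repeat. This is a simplified sketch of the fairness computation described above, not the thesis's implementation.

```python
def max_min_allocation(links, flows):
    """Progressive filling for max-min fair rate allocation.

    `links` maps link id -> capacity, `flows` maps flow id -> list of
    link ids on its path; returns flow id -> allocated rate.
    """
    rates = {f: 0.0 for f in flows}
    remaining = dict(links)
    active = dict(flows)
    while active:
        load = {}
        for path in active.values():
            for link in path:
                load[link] = load.get(link, 0) + 1
        # largest equal increment every active flow can still receive
        inc = min(remaining[link] / load[link] for link in load)
        for f, path in active.items():
            rates[f] += inc
            for link in path:
                remaining[link] -= inc
        # freeze flows that now cross a saturated (limiting) link
        active = {f: path for f, path in active.items()
                  if all(remaining[link] > 1e-9 for link in path)}
    return rates
```

Each iteration saturates at least one link, so the loop terminates, and the saturated link found for each frozen flow is exactly the "specific limiting link" used for shared-bottleneck detection.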

6. Performance Evaluations of TCP in High Bandwidth Environments

Valentin Gretchenliev | 2024 | B.Sc. Thesis | Supervisor: Michael König | Institute of Telematics, Karlsruhe Institute of Technology

This thesis evaluates the performance of the Transmission Control Protocol (TCP) within high-bandwidth 100 Gbit/s network environments. The research investigates the efficacy of various congestion control algorithms, specifically CUBIC, BBRv1, and BBRv3, to determine if modern Linux kernel optimizations can effectively utilize such high speeds 'out of the box' or if extensive tuning is required. By comparing current results with proven research from 2019, the study identifies that while newer kernels offer better default optimizations, achieving maximum throughput still necessitates adjusting parameters like the Maximum Transmission Unit (MTU) to jumbo frames and scaling kernel memory buffers. The experimental analysis explores host-local bottlenecks, revealing that sender-side CPU utilization often becomes a limiting factor at these speeds. A significant performance drop of approximately 20% was observed in fully tuned scenarios when traffic was processed by a CPU socket not directly connected to the Network Interface Card (NIC), highlighting the impact of inter-socket latency and internal overhead. While BBR generally demonstrates higher resilience to packet loss and better median performance in non-tuned settings, CUBIC achieved superior results in a fully tuned testbed. Ultimately, the findings suggest that while protocol and kernel evolutions have improved high-speed data transfers, the host's hardware architecture and specific parameter configurations remain critical for full bandwidth utilization.

5. C3 Security: Protecting C3 Communication and Detecting Suspicious Sending Behavior

Daniel Hamann | 2024 | M.Sc. Thesis | Supervisor: Michael König | Institute of Telematics, Karlsruhe Institute of Technology

This thesis analyzes the transition from traditional end-to-end congestion control to a centralized coordinator model using 'Coordinated Congestion Control (C3)' and the inherent risks of this architecture. Because the C3 coordinator relies on a global view of the network to produce better performance decisions, it becomes a high-value target for attackers. The research identifies that in an insecure network, a Dolev-Yao attacker can read, modify, or suppress control messages to arbitrarily manipulate the sending behavior of hosts. One major vulnerability is a Denial of Service attack via INIT_FLOW messages, where an adversary reports many nonexistent flows to the coordinator, tricking it into unfairly reducing the bandwidth allocated to legitimate users. To address these threats, the thesis proposes a design that protects control messages using TLS or message authentication codes to ensure integrity and confidentiality. Additionally, the work implemented a more robust network model for the coordinator that verifies host reports against flow information collected directly from trusted intermediate systems. Evaluation in a Mininet environment confirmed that while unencrypted C3 communication is easily compromised by man-in-the-middle attacks, the integration of transport encryption and router-based flow verification effectively prevents both message tampering and fake flow attacks.
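
Protecting control messages with message authentication codes, one of the two mechanisms proposed, can be sketched with HMAC-SHA256: the sender appends a tag over the message, and the receiver rejects anything whose tag does not verify. The message format below is illustrative; only the INIT_FLOW message type is taken from the abstract.

```python
import hashlib
import hmac

TAG_LEN = hashlib.sha256().digest_size  # 32 bytes

def tag_message(key: bytes, message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so receivers can detect tampering."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, tagged: bytes) -> bytes:
    """Verify and strip the tag; raise ValueError if the message was modified."""
    message, tag = tagged[:-TAG_LEN], tagged[-TAG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return message
```

This guarantees integrity and authenticity but not confidentiality, which is why the design also considers TLS; neither mechanism alone stops a fake-flow attack from a legitimately keyed but malicious host, hence the additional router-based flow verification.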

4. Examining QUIC Implementation Performance Through Unified Traffic Generation

Mihai Tanase | 2024 | B.Sc. Thesis | Supervisor: Michael König | Institute of Telematics, Karlsruhe Institute of Technology

This thesis evaluates the sustained throughput of various IETF QUIC implementations on high-bandwidth 10 Gbit/s links to determine if the protocol can effectively compete with TCP under diverse network conditions. Using an extended version of the universal 'meta' traffic generator 'quicperf' alongside standalone generators, the study analyzes implementations including lsquic, ngtcp2, quiche, and picoquic across scenarios involving artificial delay, packet loss, and bandwidth restrictions. The findings indicate that while QUIC performance has significantly improved over time, it remains primarily constrained by CPU overhead related to packet processing and decryption in user-space. Under standard conditions, TCP continues to outperform QUIC in high-bandwidth environments. However, the research demonstrates that utilizing jumbo frames can mitigate CPU limitations, allowing certain QUIC implementations to saturate the full 10 Gbit/s bandwidth and achieve performance levels competitive with default transport protocols.

3. Evaluation and Comparison of Block Lists Based on Public Threat Intelligence Feeds Using Network Traffic of the University of Tübingen in 2024

Emily List | 2024 | B.Sc. Thesis | Supervisor: Benjamin Steinert | Eberhard Karls Universität Tübingen

This thesis evaluates and compares twenty-one free text-based IP address threat intelligence feeds within a university context. It explores the topic of Cyber Threat Intelligence (CTI) and the use of TI feeds as block lists for proactive defence against cyber threats. The methodology involves selecting feeds based on timeliness, accuracy, reliability, usability, and effectiveness, combining qualitative web research with statistical analyses to evaluate and compare performance. The findings reveal that the selected feeds are heterogeneous in volume and accuracy, and some lack transparency in data acquisition. Usability analysis confirms that all feeds are manageable in size and format by modern hardware firewalls. The effectiveness analysis shows that over half of the feeds have a significant number of IP address hits within the university network. While most feeds are updated frequently, with high download reliability and good hit rates, many are not timely enough for proactive defence. The best-performing feeds can be integrated into a combined block list to deploy in the university network. Acknowledging limitations, the thesis suggests future research to expand the scope and validate findings across additional feeds and contexts.
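
The effectiveness analysis boils down to counting how many addresses observed in the network appear on a given feed. A minimal sketch of that comparison, with illustrative names and documentation-range example addresses:

```python
def feed_hit_stats(feed_ips, observed_ips):
    """Effectiveness of one feed: how many observed addresses it lists.

    `feed_ips` is the feed's block list, `observed_ips` the addresses
    seen in network traffic (with repeats, so repeated contacts with a
    listed address count as multiple hits).
    """
    feed = set(feed_ips)
    hits = sum(1 for ip in observed_ips if ip in feed)
    return {"hits": hits, "hit_rate": hits / len(observed_ips)}
```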

2. TCP-C3: Accelerating TCP Congestion Control with C3

Jan Koppenhagen | 2024 | B.Sc. Thesis | Supervisor: Michael König | Institute of Telematics, Karlsruhe Institute of Technology

Reliable internet connectivity relies on effective congestion control, yet traditional algorithms often underutilize bottleneck links due to incomplete network state information. To address this, the Coordinated Congestion Control (C3) approach delegates decisions to a logically centralized coordinator with a superior view of the network. This thesis presents the design and implementation of a C3 Sender Component for end-to-end TCP connections on Linux systems: TCP-C3. Utilizing eBPF, the system implements a congestion control algorithm wrapper that enables precise, remote window adjustments. The architecture allows for the dynamic configuration of minimum, maximum, initial, and target congestion windows for specific flows or process groups. By leveraging eBPF struct-ops, the component can wrap established algorithms like TCP Reno or TCP Cubic without requiring their full re-implementation. Experimental evaluation in a dumbbell topology confirms that the component accurately adheres to coordinator signals with fast response times, a significant improvement over previous socket-attribute methods. Furthermore, the implementation maintains a low resource footprint, with a measured CPU overhead of approximately 2% and a negligible added end-to-end delay of less than 2ms, demonstrating its scalability for high-performance network environments.
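
The window configuration described above (minimum, maximum, initial, and target congestion windows) amounts to clamping whatever window the wrapped algorithm proposes. The following is purely illustrative pseudologic in Python; the thesis implements it in-kernel via eBPF struct-ops, and the names are assumptions.

```python
def clamp_cwnd(algo_cwnd, min_cwnd, max_cwnd, target_cwnd=None):
    """Window clamping as a coordinator-driven wrapper could apply it.

    The wrapped algorithm (e.g. Reno or CUBIC) proposes `algo_cwnd`;
    coordinator-supplied bounds override it, and a target window, if
    set, takes precedence within those bounds.
    """
    cwnd = target_cwnd if target_cwnd is not None else algo_cwnd
    return max(min_cwnd, min(cwnd, max_cwnd))
```

Because the underlying algorithm still computes its own window, established loss and RTT handling is preserved; the wrapper only constrains the result according to coordinator signals.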

1. Test and Deployment Considerations of Distributed Active Performance Measurement Techniques in an ISP Backbone Network with Segment Routing Capabilities

Yannick Huber | 2024 | M.Sc. Thesis | Supervisors: Marco Häberle, Benjamin Steinert | Eberhard Karls Universität Tübingen

Active measurements are an essential tool for collecting end-to-end performance metrics inside data networks. However, the execution of active measurement tests can be costly and time-consuming. This thesis evaluates multiple ways of improving the use of active network performance measurements by exploring three different aspects. Firstly, the thesis examines the use of new SR-MPLS capabilities to measure the bandwidth of specific routes inside a network using only one host. Secondly, the perfSONAR network measurement toolkit is evaluated for automating distributed measurements. Finally, the bandwidth limitations of existing browser-based speed tests are inspected. The use of new SR-MPLS capabilities for performance measurements is explored by developing a new speed test concept based on SR-MPLS, which allows a host to test the network by running bandwidth tests on a circular path back to itself. An implementation of this concept reaches average bandwidth speeds of up to 30.54 Gbps. The capabilities and challenges of the perfSONAR toolkit are explored in a lab environment to validate its usability for automating distributed measurements. The examinations conducted in this lab setup are then used to show how perfSONAR can be used in a centrally managed deployment. For the browser-based speed tests, the bandwidth limitations of existing tests are explored in a 100 Gbps test setup. Measuring the maximum achievable bandwidth shows that the current limiting factor for browser-based speed tests is the browsers themselves. Using the librespeed-cli tool, the capabilities of the LibreSpeed test are probed without the limitations introduced by a browser. It is shown that LibreSpeed can reach an average total upload speed of 68.47 Gbps and an average download speed of 56.89 Gbps to a single host. These speeds are achieved by running an optimal number of parallel client instances of the librespeed-cli tool. It is also shown that the LibreSpeed server can reach an aggregate download speed of 90 Gbps when serving multiple clients running in parallel and distributed over different hosts.