
An Exploration of Software-defined networks in video streaming, Part Three: Performance of a streaming system over an SDN

Hello! Welcome to the third and final part of this series, where I've been discussing the research and experiments I conducted on the use of software-defined networks for streaming services.

In the previous entry, I detailed what software-defined networks are, how they work, how they compare to traditional networks, and the benefits they bring to the modern data center. Now that we have covered the essentials of both topics relevant to my work, this entry will be focused entirely on detailing my research and experiments. I'll describe the main objective of this project, detail the tests conducted and how they were set up, present the obtained data, and draw conclusions from it.

While I'll still try to keep the information as accessible as possible for everyone, for this entry I'll assume that:

  • the reader has gone through and understood the information presented in the previous two entries
  • the reader has basic notions of computer networks, Python, and Linux concepts and commands.

So, without further ado, let's get to it!

General project data

The main purpose of this project was to evaluate the performance of an adaptive multimedia streaming system over a software-defined network. In this case, performance is measured as the time a video can be streamed in the highest quality through a multimedia client without presenting any interruptions or noticeable degradation when the network is under stress. The system is adaptive as both the media player and server work together to adjust the video quality on-the-fly, allowing playback to continue even under suboptimal network conditions.

The basis for the main experiment and test design was the paper Emulation of HTTP Adaptive Video Streaming over SDN by Liotou et al. This paper presents a solution to implement a multimedia streaming system over an SDN, consisting of a streaming system established over a virtualized software-defined network. The authors use this system to evaluate the observable Quality of Experience (QoE) when the network is stressed by UDP traffic. QoE encompasses factors such as smooth playback, image clarity, and minimal buffering interruptions, contributing to the overall satisfaction and enjoyment of users while streaming videos.

To complement the data obtained by the authors and provide a comprehensive understanding of the potential benefits such a system could bring to large-scale streaming services, I designed an additional test to stress the network with TCP traffic.

With this, I wanted to observe how both types of traffic, streaming traffic and external TCP/UDP traffic, interact inside a software-defined network, and whether TCP or UDP traffic has any kind of inherent effect over the HTTP transmission. Tests like this could prove useful when optimizing flows for a streaming service.

In the following sections, I'll show the system built to run the tests, what each test consisted of, the results obtained and the conclusions I drew from them.

Test environment setup

To execute our tests, it is necessary to have a network that allows using not only the devices that constitute our streaming system but also external devices that will generate additional traffic to stress the network. Evidently, its design should follow the structure proposed by the SDN paradigm; that is, it should include a controller and SDN-ready network devices. Building a network like this with physical computers would be costly; thus, Liotou et al. propose a software-only solution with two main components:

  • Mininet: Mininet is an open source platform that can be used to create virtual networks simply and efficiently. With this tool, users can emulate complex networks entirely in software, easing development, testing and experimentation in network environments. Mininet is compatible with controllers and devices that follow the software-defined network paradigm, specifically the OpenFlow protocol. Additionally, Mininet makes setting up a network topology a simple task through the use of Python scripts, which allows extensive customization and programmatic control of the components and behavior of any topology.
  • OpenDaylight: OpenDaylight is an open source project focused on developing a centralized network control platform. This platform offers tools and APIs that allow scalable and flexible network programming, management and behavior automation. OpenDaylight is used by organizations and developers to create and manage software-defined networks (SDN), facilitating the implementation of more agile and efficient network solutions. Configuring OpenDaylight's behavior involves interacting with its various components, modules, and APIs to customize its functionality according to specific network requirements. OpenDaylight offers a modular architecture that allows it to adapt to different network environments.

Mininet provides the infrastructure necessary to emulate an SDN-ready network, allowing us to add or remove as many virtual computers as we want, as well as to define the behavior of the devices inside the network. All of this is done while ensuring compatibility with SDN protocols and devices. OpenDaylight acts as the controller for the entire system.
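
To make this setup more concrete, here is a minimal sketch of a Mininet script that builds a small two-switch topology and hands control to an external OpenDaylight instance. The controller address, OpenFlow port, protocol version and bandwidth value are placeholder assumptions for illustration; the actual topology scripts used in the tests are the ones provided in the project repository referenced in the setup steps.

    from mininet.net import Mininet
    from mininet.node import RemoteController, OVSSwitch
    from mininet.link import TCLink
    from mininet.cli import CLI
    from mininet.log import setLogLevel

    def build():
        # TCLink enables per-link bandwidth limits; OVSSwitch speaks OpenFlow
        net = Mininet(controller=None, switch=OVSSwitch, link=TCLink)
        # Assumed controller location: OpenDaylight on this machine, port 6633
        net.addController('c0', controller=RemoteController,
                          ip='127.0.0.1', port=6633)
        h1 = net.addHost('h1', ip='10.0.0.1')   # media server
        h2 = net.addHost('h2', ip='10.0.0.2')   # media player
        s1 = net.addSwitch('s1', protocols='OpenFlow13')
        s2 = net.addSwitch('s2', protocols='OpenFlow13')
        net.addLink(h1, s1, bw=10)   # illustrative bandwidth cap in Mbit/s
        net.addLink(h2, s2)
        net.addLink(s1, s2)
        net.start()
        CLI(net)      # drop into the Mininet prompt
        net.stop()

    if __name__ == '__main__':
        setLogLevel('info')
        build()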

The system's adaptive capability is provided by MPEG-DASH, a set of media formats used in conjunction with the HTTP protocol to serve adaptive multimedia streams. Let's briefly examine how it works:

  1. Server-side, each video file is transcoded into lower quality versions of itself, known as representations. Then, each representation is split into smaller segments for both audio and video according to a specified period, which is a discrete time interval used to delimit the length of each segment.
  2. A manifest file known as MPD (Media Presentation Description) is generated. It's an XML file that, for our original video, lists the available representations, the segments that make up each representation, the period for each segment and their format. Additionally, it lists unique URLs for each segment.
  3. MPEG-DASH requires a multimedia player enabled to work with adaptive HTTP streams. This player reads the MPD file from the server and begins requesting segments through the URLs provided by the manifest. If the player detects anomalies in the connection used to receive the stream, it can preemptively request segments from a lower quality representation and store them in the buffer without needing to pause the playback.
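
To illustrate the client side of this process, here is a conceptual sketch (not VLC's actual implementation) of how a DASH client can parse an MPD and pick the best representation that fits under the throughput it is currently measuring. The URL and the throughput value are placeholders.

    import urllib.request
    import xml.etree.ElementTree as ET

    # DASH MPDs use this XML namespace
    REP_TAG = '{urn:mpeg:dash:schema:mpd:2011}Representation'

    def pick_representation(mpd_url, measured_bps):
        """Choose the highest-bandwidth representation that fits under the
        throughput the client is currently measuring."""
        with urllib.request.urlopen(mpd_url) as resp:
            root = ET.fromstring(resp.read())

        reps = [{'id': r.get('id'),
                 'bandwidth': int(r.get('bandwidth', 0)),
                 'height': r.get('height')}
                for r in root.iter(REP_TAG)]

        affordable = [r for r in reps if r['bandwidth'] <= measured_bps]
        if affordable:
            return max(affordable, key=lambda r: r['bandwidth'])
        # Nothing fits: fall back to the lowest-bandwidth representation
        return min(reps, key=lambda r: r['bandwidth'])

    # Example (placeholder values): pick a representation for ~2 Mbit/s
    # print(pick_representation('http://localhost:8000/video4k.mpd', 2_000_000))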

Test system specs

The proposed virtual network was set up on two computers running Manjaro 23.1.4, an Arch Linux-based distro, with Linux kernel 6.6.25-1. Both systems are full installations with the Plasma desktop, performed from a Live USB using image 240416.

System A:

  • CPU: Intel Core i5-7300HQ
  • GPU: Nvidia GTX 1050 4GB
  • RAM: 16GB DDR4 @ 2800MHz
  • Storage: SATA SSD drive

System B:

  • CPU: AMD Ryzen 7 7800X3D
  • GPU: Nvidia RTX 4070 Ti SUPER
  • RAM: 32GB DDR5 @ 6000MHz
  • Storage: PCIe SSD drive for operating system, SATA SSD drive for file system

NOTE: For users of any other distro or operating system, I recommend using virtual machines to complete the setup. These blog posts by Brian Linkletter are fantastic and should get you up and running quickly.

Required software

Following is a list of the required software for the setup and the versions used for running the tests. The provided links redirect to either the official Arch package list or the AUR (Arch User Repository). If you're setting up the network following Brian Linkletter's guides, you don't need to worry about this section: just follow the instructions he provides.

Additionally, a 4K video in MP4 format is also required. It should have a minimum runtime of 5 minutes. This video should be named "video_4k.mp4" to ensure compatibility with the files used for this project. This video will be downscaled to 1080, 720, 480, 360, and 240p resolutions.
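
The segmentation itself is handled by the repository's video_prepare.sh script (see the setup steps below). Purely as an illustration of what that kind of preparation involves, here is a rough Python sketch that drives ffmpeg and GPAC's MP4Box; the resolutions, bitrates and segment duration are illustrative assumptions, not the exact values the script uses.

    import subprocess

    # Illustrative quality ladder; video_prepare.sh may use different values
    LADDER = [(1080, '5M'), (720, '3M'), (480, '1.5M'), (360, '800k'), (240, '400k')]

    def transcode(src='video_4k.mp4'):
        """Create one lower-quality rendition per ladder entry."""
        outputs = []
        for height, bitrate in LADDER:
            out = f'video_{height}p.mp4'
            subprocess.run(['ffmpeg', '-y', '-i', src,
                            '-vf', f'scale=-2:{height}',      # keep aspect ratio
                            '-c:v', 'libx264', '-b:v', bitrate,
                            '-c:a', 'aac', out], check=True)
            outputs.append(out)
        return outputs

    def dash_package(files, mpd='video4k.mpd', segment_ms=4000):
        """Split the renditions into .m4s segments and write the MPD manifest."""
        subprocess.run(['MP4Box', '-dash', str(segment_ms), '-rap',
                        '-out', mpd] + files, check=True)

    if __name__ == '__main__':
        dash_package(transcode())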

Setup steps

The following steps assume all packages will be installed through a graphical package manager like the one included with Manjaro. If your Arch distro does not include one, I recommend consulting the following page to learn how to install packages using a terminal. The links I provided in the previous section should help you with this.

  1. Install the latest version of Python and Java 8
    • Type java or python in a command terminal to check if both programs were installed correctly.
    • It is very important that you set Java version 8 as default in case you also have other newer releases installed. OpenDaylight won't work with any other Java version.
  2. Install RangeHTTPServer and the Mininet Python library with command pip install RangeHTTPServer mininet
    • If there's an error during installation, you might want to install an environment manager like Miniconda.
  3. Install xterm, Wireshark, VLC Media Player, GPAC, ffmpeg, CMake, gcc, ninja and git.
  4. Install byacc, flex, libcgroup, patch, autoconf, automake and pkgconf
  5. Install Mininet
  6. Ensure Mininet has been installed correctly by running command sudo mn. Mininet should automatically deploy a default network. When done, type exit in the Mininet command prompt, then run sudo mn -c.
    • sudo mn -c cleans up any network created by the software, and should be run any time after Mininet is closed.
  7. Run the following commands in superuser mode

    # systemctl enable ovs-vswitchd
    # systemctl start ovs-vswitchd
    
  8. Create a new directory with command

    mkdir streaming-sdn
    cd streaming-sdn
    
  9. Clone this project's repository with command

    git clone https://github.com/cardcathouse/SDN-Streaming-Performance.git
    
  10. Download OpenDaylight with command

     wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/integration/karaf/0.8.4/karaf-0.8.4.zip
    
  11. Unzip the contents of the downloaded file. You should now have a folder labeled karaf-0.8.4

  12. Run ./od.sh to start OpenDaylight.

  13. When OpenDaylight has finished booting up, copy and paste the following command in the terminal to install the required modules:

     feature:install odl-openflowplugin-drop-test odl-openflowplugin-nxm-extensions odl-openflowplugin-app-bulk-o-matic odl-openflowplugin-app-lldp-speaker odl-openflowplugin-app-southbound-cli odl-openflowplugin-flow-services-rest odl-openflowplugin-app-table-miss-enforcer odl-openflowplugin-app-topology-lldp-discovery odl-dluxapps-nodes odl-dluxapps-yangui odl-dluxapps-yangman odl-dluxapps-topology odl-dluxapps-yangutils odl-dluxapps-applications odl-dluxapps-yangvisualizer odl-restconf-all odl-openflowplugin-app-topology-manager odl-openflowplugin-app-notifications odl-openflowplugin-onf-extensions odl-openflowplugin-app-forwardingrules-sync odl-l2switch-all odl-mdsal-all odl-yanglib odl-dlux-core
    
  14. When module installation has finished, close OpenDaylight by pressing Ctrl+D.

  15. Put your video in the streaming-sdn folder. Run ./video_prepare.sh. The folder should now contain a considerable number of new files with the .m4s file extension. Do not rename or modify these files!

Test 1: Stressing the network with UDP traffic

The first test performed was designed to measure the system's performance under stress from UDP traffic. To this end, a virtual topology like the one shown in the following illustration was built in Mininet:

UDP test topology diagram as implemented in Mininet

The test involved running four different versions of the topology, where the bandwidth between the video server and the directly connected switch is limited. The bandwidth limits used were 1, 3, 5, and 10 Mbps, as shown in the paper by Liotou et al. UDP traffic is introduced between two computers in the topology 30 seconds after playback begins. One of the computers acts as a UDP server and sends packets of a specific size, which are received by the UDP client computer. Each computer is connected to a different switch to simulate traffic on all links used for the transmission.

Originally, the paper suggests introducing UDP traffic at 490 Mb/s. However, initial tests revealed that this didn't allow me to obtain any usable results, as the traffic exceeded the network's capabilities almost instantly due to the imposed bandwidth limits. Therefore, the test design was modified so that the initial traffic is 10% of the total bandwidth assigned to the server-switch link, increasing it by 20% every 30 seconds with a 10-second break between increments until reaching 120% of the bandwidth. The aim is to observe the relationship between the amount of traffic and changes in video resolution, as well as the amount of time each resolution is displayed under network stress.

Component breakdown in the used topology is as follows:

  • Host 1 runs RangeHTTPServer to set up the HTTP server from which the video will be streamed
  • Host 2 runs VLC Media Player. The player needs additional setup which will be explained further below. This host also runs Wireshark, which monitors traffic activity inside the network.
  • Hosts 3 and 4 act as UDP server and client, respectively, using the iperf program (a sketch of how the traffic ramp can be scripted follows after this list).
  • S1 and S2 are OpenFlow network switches. They are connected to each other and to the controller. S1 is connected to hosts 1 and 3, while S2 is connected to hosts 2 and 4.
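
The udp_test_Xm.py scripts in the repository implement this topology and traffic schedule. As a hedged sketch of how the ramp could be driven from inside a Mininet script with iperf, assuming sender and receiver are the Mininet host objects for the two UDP hosts:

    import time

    def udp_stress(sender, receiver, link_mbps,
                   start=0.10, step=0.20, cap=1.20,
                   burst_s=30, pause_s=10):
        """Ramp UDP traffic against the streaming flow: start at 10% of the
        link budget, add 20% per burst, pause 10 s between bursts, and
        clamp the final burst to the 120% cap."""
        receiver.cmd('iperf -s -u &')          # UDP sink, backgrounded
        load = start
        while True:
            rate = min(load, cap) * link_mbps
            # -u: UDP, -b: offered bitrate, -t: burst duration in seconds
            sender.cmd(f'iperf -c {receiver.IP()} -u -b {rate:.2f}M -t {burst_s}')
            if load >= cap:
                break
            time.sleep(pause_s)
            load += step
        receiver.cmd('kill %iperf')            # stop the sink when done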

Reproduction steps

  1. Open a terminal in the streaming-sdn folder. All commands should run on this folder.
  2. Run OpenDaylight with ./od.sh and wait for boot process to finish
  3. Run the topology with command sudo python udp_test_Xm.py, where X is the bandwidth limit (1, 3, 5 or 10 Mbps).
  4. The network will start running and execute a script that automates some tasks in the setup process. VLC and Wireshark will open. A couple of manual configurations are needed, and the script will give you 60 seconds to make them.
    1. On VLC, press Ctrl+P to open the 'Preferences' menu.
    2. Look for the 'Show Settings' option in the bottom left corner. Select 'All'. A new menu will appear.
    3. On the menu at the left side of the window, navigate to the "Input/Codecs" section. Click on 'Demuxers' twice to show the dropdown options. Click on 'Adaptive' and change the video streaming algorithm to 'Adaptive bandwidth'.
    4. On the same "Input/Codecs" section, click on 'Stream Filters' once. Select "HTTP Dynamic Streaming" from the list
    5. Click on 'Save'.
      1. It's likely VLC will tell you the settings can't be written to the config file. You can ignore this message, although these steps will have to be done each time VLC is opened under Mininet.
    6. Move to the Wireshark window. Select h2-eth0 from the menu displayed on-screen.
    7. On the new screen, select the bar on the upper part of the interface that says "Apply a display filter" and type http, then click on the arrow button.
    8. Select 'Statistics' from the top menu, then the 'I/O Graphs' option.
    9. On the new screen, deselect every option except for the one that has 'http' in the "Display Filter" column.
    10. Click on the "+" button to add a new row. Select the blank space at the "Display Filter" column of this new row and type udp, then hit Enter.
    11. Return to the VLC window, and from the "Media" menu located at the top of the screen, choose "Open Network Stream".
    12. Type in http://localhost:8000/video4k.mpd as the input HTTP URL, but don't click play yet!
  5. When the 60 seconds for initial config have passed, a new 30 second timer will start. You can now click the 'Play' button on the network stream window on VLC. Make sure to click play during this 30 second window.
  6. After this timer, the test will start. Monitor the behavior of the traffic on Wireshark and observe the changes to playback on VLC.

Results

I will now present the results obtained from running the test with the four suggested topologies. These results were obtained on System A and corroborated on System B. Two types of graphs are presented:

  • A time in seconds vs. UDP and HTTP traffic graph generated by Wireshark's "I/O Graph" function
  • A time in seconds vs. resolution graph generated by Python's matplotlib and pandas based on a dataset exported by Wireshark that contains all filtered HTTP requests made by the player on Host 2
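
For reference, a stripped-down version of the script that turns the exported requests into the resolution graph might look like the sketch below. The CSV column names ('Time', 'Info') follow Wireshark's default packet-list export, and the regex that pulls the representation height out of the requested segment URL is an assumption about how the segments are named.

    import re
    import pandas as pd
    import matplotlib.pyplot as plt

    def plot_resolution_timeline(csv_path):
        """Plot playback resolution over time from a Wireshark CSV export of
        the filtered HTTP segment requests."""
        df = pd.read_csv(csv_path)

        def height_from_request(info):
            # Assumes segment names embed the height, e.g. "video_720p_12.m4s"
            match = re.search(r'(\d{3,4})p', str(info))
            return int(match.group(1)) if match else None

        df['resolution'] = df['Info'].apply(height_from_request)
        df = df.dropna(subset=['resolution'])

        plt.step(df['Time'], df['resolution'], where='post')
        plt.xlabel('Time (s)')
        plt.ylabel('Requested resolution (p)')
        plt.show()

    # plot_resolution_timeline('http_requests.csv')   # hypothetical export name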

1Mbps bandwidth limit

Wireshark graph showing HTTP traffic (in blue) vs. UDP traffic (in red) for the UDP 1Mbps test

Graph showing time vs. playback resolution for the UDP 1Mbps test

In this test, the initial resolution was set to 360p, indicating that the network was already constrained by the bandwidth limit. No changes to this initial condition or interruptions to playback were observed until reaching 80% of the total bandwidth in UDP traffic. At this point, playback presented long interruptions with sporadic 1-second plays. This behavior persisted with the remaining traffic percentages until reaching a 120% traffic load, which made playback unresponsive.

3Mbps bandwidth limit

Wireshark graph showing HTTP traffic (in blue) vs. UDP traffic (in red) for the UDP 3Mbps test

Graph showing time vs. playback resolution for the UDP 3Mbps test

Initial resolution for this test was 360p. Unlike the previous test, this bandwidth limit allowed this resolution to play uninterrupted until the network came under stress when UDP traffic reached 110% of the total bandwidth limit. Under these conditions, playback presented sporadic pauses but could still be considered watchable. When UDP traffic reached 120% of the bandwidth limit, playback completely stopped.

5Mbps bandwidth limit

Wireshark graph showing HTTP traffic (in blue) vs. UDP traffic (in red) for the UDP 5Mbps test

Graph showing time vs. playback resolution for the UDP 5Mbps test

For this test, the video started playing at a resolution of 720p. The general behavior observed was a resolution drop to 360p for all traffic loads, with switches back to 720p during the 10-second interval between UDP traffic generation. Despite the resolution changes, playback presented no interruptions until traffic reached 120% of the total bandwidth limit, where prolonged interruptions of 5 seconds or more were observed.

10Mbps bandwidth limit

Wireshark graph showing HTTP traffic (in blue) vs. UDP traffic (in red) for the UDP 10Mbps test

Graph showing time vs. playback resolution for the UDP 10Mbps test

Initially, playback started at 1080p. Traffic stress of 40% and 60% of the bandwidth limit in UDP packets occasionally made the resolution drop to 720p. However, these drops were brief, and 1080p could be sustained for a good portion of each test. As in the previous test, resolution returned to 1080p during the 10-second interval between UDP traffic generation. When the traffic load hit 80% of the total bandwidth limit, resolution dropped to 360p with longer interruptions to playback. At 100%, the resolution stayed at 360p and the interruptions grew even longer. Playback completely stopped at 120% of the traffic load.

Result interpretation

After analyzing the results, I can confirm that adaptive quality is employed to maintain uninterrupted playback to the greatest extent possible, thereby ensuring a certain level of Quality of Experience (QoE). This implies that the system can adapt to network conditions to continue functioning as intended, albeit at the expense of performance or perceived quality. However, the degree to which quality can be guaranteed varied between tests, diminishing as the amount of external traffic increased. Quality switching was frequently observed during testing, with the quality dropping as external traffic increased. This technique is commonly used in modern video playback to adapt to different network conditions and is generally considered less detrimental to the viewing experience than buffering pauses. In some cases, however, I observed pauses of more than 10 seconds or even a complete stop to playback. Of course, this is a situation where external factors are working against the system, yet it still results in a significantly degraded viewing experience.

Test 2: Stressing the network with TCP traffic

The second test for our system follows the same principles as the first one, in that we're evaluating system performance and QoE when the network is stressed by external traffic. However, this test swaps out UDP traffic for TCP traffic, and thus changes are needed to account for the inherent properties of TCP. To justify these changes, let's briefly review what TCP is and discuss a special feature of the protocol that forms the basis of the results we expect to observe in this test: TCP fairness.

TCP and TCP Fairness

The Transmission Control Protocol operates at the transport layer of the TCP/IP protocol suite for network communication. TCP differentiates itself from UDP by being connection-oriented, which means that a logical connection between two endpoints needs to be established before communication begins and maintained throughout the exchange until the process is ended by both ends. Thanks to this, TCP allows for data delivery to be orderly, reliable, and error-free. It achieves this through several features and mechanisms, such as flow and congestion control, error recovery, and data retransmission mechanisms. In contrast, UDP operates under the best-effort principle, where data is not guaranteed to be delivered to its destination.

As I just mentioned, TCP has the capability to control data flow and network congestion. These two features form the foundation of what is known as the 'TCP Fairness' principle. It ensures that TCP connections competing for network resources have an equal opportunity to transmit data, thereby preventing any single connection from monopolizing all the available bandwidth. To achieve this principle on an actual network, both TCP characteristics outlined earlier in this section work together as follows:

  • The congestion control algorithm dynamically adjusts the data transmission rate of a connection when signs of congestion are detected, such as packet loss or increased delays
  • The flow control mechanism manages data transmission rates between end devices in the network. The receiving devices announce their available buffer size so that sending devices adjust their transmission rates accordingly, therefore preventing the sender from overwhelming the receiver with data.

With this, network flows are able to respond dynamically to circumstances in the network to ensure optimal working conditions for all devices. TCP fairness is achieved when the following conditions exist in the network:

  • All TCP connections have an equal chance to transmit data.
  • Data transmission rates are adjusted on the fly to distribute bandwidth fairly.
  • There's a proportional adjustment of data transmission rates for all connections competing for resources in the network, ensuring no single connection dominates over the others.

Finally, it is extremely important to keep in mind that TCP fairness exclusively applies to TCP connections and does not extend to other protocols such as UDP.

Resuming our discussion of this test, if you recall, I used HTTP as the transport medium for the video segments. HTTP messages are encapsulated in TCP segments, making our streaming system perfect for observing TCP fairness. Consequently, I conducted this test to observe not only the behavior of the adaptive streaming system in this new environment, but also the limits, if any, of this phenomenon.

With that goal in mind, a test redesign was necessary and several elements of it were modified. To begin with, the network topology was altered as shown by the following illustration:

TCP test topology diagram as implemented in Mininet

The main change to the topology was the addition of several hosts connected to S2. These new hosts represent TCP connections that will compete for network resources alongside the media server and the multimedia player. They essentially perform the same task as the UDP hosts in the previous test: generating external traffic to stress the network.

These new hosts will generate stress traffic by downloading a 5.3GB MP4 file each using wget, a command-line utility for Linux that utilizes HTTP (and consequently, TCP) for downloading files. The file will be hosted on the server running on Host 1, which, if you recall, is also hosting the video segments. The introduction of these new hosts establishes the necessary conditions to observe the equitable distribution of bandwidth that characterizes TCP fairness.

The number of hosts is initially set to 5 and increments by 5 until 50 hosts are present in the network. Each increment is represented in an individual Mininet topology Python script. Bandwidth limit for the link between the server and S1 is set to 10Mbps. These conditions are meant to test the limits of TCP fairness by gradually increasing stress in an already constrained network. Each test runs for 2 and a half minutes.
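
As a quick back-of-the-envelope aid for interpreting the results, under perfect TCP fairness the idealized per-flow share is simply the bottleneck capacity divided by the number of competing flows. The helper below ignores protocol overhead, retransmissions and RTT differences, so real shares will always come in lower:

    def ideal_fair_share(bottleneck_mbps, n_flows):
        """Idealized per-flow throughput under perfect TCP fairness."""
        return bottleneck_mbps / n_flows

    # 10 Mbps server link, N downloaders plus the video stream itself
    for downloaders in (5, 10, 20, 50):
        share = ideal_fair_share(10, downloaders + 1)
        print(f'{downloaders:>2} downloaders -> ~{share:.2f} Mbps per flow')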

When designing the test, I initially took an automated approach to setup, as in the first trial. Unfortunately, this resulted in strange behavior of the network that did not allow me to obtain usable data for results. For this reason, the experiment was redone without automation, though this led to a cumbersome setup and execution that does not guarantee the production of the same results as those I'll present further below. For completeness' sake, I will present all data prepared for and collected from both versions and offer my hypothesis as to why the first test did not work as expected.

Version 1

This test, and any subsequent versions of it, requires a large .mp4 file for the hosts to download, referred to from here onward as the download test file. I recommend a file larger than 5GB. In theory, any file type should work, as long as the size requirement is met and the file is named as specified in the following steps.

Reproduction steps

  1. Place your download test file in the streaming-sdn folder. Rename it dl_test.mp4.
  2. Open a terminal in the streaming-sdn folder. All commands should run on this folder.
  3. Run OpenDaylight with ./od.sh and wait for boot process to finish
  4. Run the topology with command sudo python tcp_Xh.py, where X is the number of hosts (5 to 50).
  5. Refer to the instructions for the previous test and repeat steps 4 to 6

For this test, each host runs wget, with a 5-second pause before the next host starts its download. This keeps all hosts downloading simultaneously without risking an overload due to resource depletion in the network emulator.
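
The automated version of the scripts drives the downloads roughly as in the sketch below (host names and numbering are illustrative; the actual scripts are in the repository). Note that it backgrounds wget with the & symbol, which is one of the suspects I discuss later for the anomalous behavior.

    import time

    def start_downloads(net, n_hosts, server_ip='10.0.0.1', stagger_s=5):
        """Start the staggered wget downloads on the downloader hosts.
        Host naming (h5, h6, ...) is illustrative only."""
        downloaders = [net.get(f'h{i}') for i in range(5, 5 + n_hosts)]
        for idx, host in enumerate(downloaders, start=1):
            # Unique output names keep the downloads from clobbering each other
            # and make per-host bandwidth easy to monitor.
            host.cmd(f'wget http://{server_ip}:8000/dl_test.mp4 '
                     f'-O test{idx}.mp4 &')      # '&' backgrounds the download
            time.sleep(stagger_s)                 # 5-second stagger between hosts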

Observed behavior

After the initial setup, the video starts playing during the 20-second wait period before hosts start downloading. Then, each host initiates a wget download, with a 5-second delay before the next host begins downloading. However, a peculiar behavior emerges over time or as more hosts are added. The older hosts, those that initiated downloads earlier, experience a gradual reduction in download speed until their downloads are completely paused, despite wget continuing to run. In the end, only the last host to start downloading remains, with all the bandwidth allocated to it. Furthermore, the video stream had stopped at this point as well.

This phenomenon suggests a potential congestion-related issue or resource allocation imbalance within the network, where the early initiators face throttling or prioritization changes, leading to diminished download performance and eventual halting of data transfer. Naturally, it does not correspond to a scenario where TCP fairness is being enacted. Following multiple iterations of the test, I posited that the origins of this unforeseen event could possibly be traced back to one or more of the following elements:

  • OpenDaylight's default network flow balancing and routing policies for TCP traffic might be favoring newer connections for bandwidth allocation
    • Mininet's OpenFlow switches could also be causing this behavior, either on their own or by instruction of the controller
  • Mininet links between hosts and switches could be saturated in the presence of numerous hosts trying to download
  • Limits to the number of processes that can be run inside a Mininet topology
  • Mininet switches could have limited packet buffer space
  • Running wget in the background with the & symbol to achieve automated and simultaneous downloads

Unfortunately, a definite conclusion on the cause of this behavior could not be reached due to the scope of the project and the amount of time allotted for its completion. Additionally, resources on the technologies used for the network, particularly OpenDaylight, are scarce and outdated. This greatly hampered my research, and I ended up having to look into other avenues to hopefully correct this issue.

As for the results that can be obtained with this version of the test, there's a chance they could be acceptable, since observing the desired outcome is never guaranteed in an experiment. However, in an effort to "debug" the entire test and identify the cause of this problem, I tried running each network script without any automation. The following section presents the specifics of this second run and the results I obtained.

Version 2

As stated in the previous section, this test requires manual intervention and monitoring over the entire process. The following instructions assume they will be reproduced in that fashion.

Reproduction steps

  1. Place your download test file in the streaming-sdn folder. Rename it dl_test.mp4.
  2. Open a terminal in the streaming-sdn folder. All commands should run on this folder.
  3. Run OpenDaylight with ./od.sh and wait for boot process to finish
  4. Run the topology with command sudo python tcp_50h_noAuto
  5. When the Mininet console appears, execute the following commands:
    1. Enter command xterm h1. In the newly opened xterm window, run python3 -m RangeHTTPServer. The server will start running.
    2. Return to the Mininet console and run xterm h2 h2. Two more xterm windows will appear. In either window, run command wireshark, then vlc-wrapper in the remaining one.
    3. Follow setup instructions 4.1 to 4.12 as shown in the reproduction instructions for Test 1
  6. Run command xterm h1 h2 h3 ... h(N-3), where N is the total number of hosts to use
    1. For each host, I recommend preemptively copying and pasting the following command: wget http://10.0.0.1:8000/dl_test.mp4 -O "testN.mp4", where N is the number assigned to the host downloading the file.
      1. It is very important that each host downloads a uniquely named file to avoid overwriting and to allow proper monitoring of bandwidth use
  7. Return to the VLC window and start playing the video.
  8. After 30 seconds of playback, try your best to start running every downloader as fast as possible, waiting 5 seconds between each host. Keep an eye on the download speeds indicated in every xterm window.
  9. When 2:30 minutes of playback time have passed, manually stop every downloader host and pause playback on VLC
  10. Return to Wireshark and export the corresponding data.

If you try to run the test this way, you'll soon discover that it's an incredibly overwhelming task. But, it works! In the next section, I'll go over what I observed with this new method.

Observed behavior

Before starting this section, I need to emphasize that, unlike the first version where automation is involved, manually controlling each aspect of the test introduces human error (or rather, delay) into the equation. This not only affects the reproducibility of the test but could also skew results and render them unusable for serious scientific analysis. Considering the scope of this project, however, I feel there's still value in going over what I observed and the results I managed to obtain with this setup.

A general observation that applies to all test combinations executed is that video quality immediately drops to 360p when the additional hosts start downloading. This doesn't come as a surprise considering the limit to server bandwidth, but what is definitely surprising (or, at least, it was to me) is seeing bandwidth being distributed equally between downloading hosts, even under extreme conditions. Here are some examples using 5, 10 and 20 hosts.

Downloading a file with 5 simultaneous hosts

Downloading a file with 10 simultaneous hosts

Downloading a file with 20 simultaneous hosts

As you can see, each host is being given an equal amount of bandwidth. The amount of available bandwidth obviously decreases as the number of downloads increases. However, in each of the configurations, every host was still allocated an amount of bandwidth, regardless of how small it was. With 50 hosts, I managed to see each one of them doing their best at a whopping fast...19kbps! All in all, a miracle, if nothing else.

And that's not all: while quality switching and buffering times on the media player also increase with the number of hosts, it keeps requesting video segments to store in its buffer. If resolutions even lower than 240p had been included in the tests, playback would likely have been able to continue even with 50 simultaneous downloads. Unfortunately, that would be quite unacceptable on a modern streaming service and absolutely not the point of this test.

With all this, I can affirm that the system was able to use TCP fairness to its advantage in order to equitably distribute resources in the network. As a result, all systems managed to fulfill their tasks, even when facing less than ideal conditions, albeit sacrificing performance.

The concluding segment of this section will display the obtained results from running this version of the test.

Results

Results for this test were obtained using System A, but due to the nature of the reproduction process, they were not corroborated on System B. Two types of graphs are presented:

  • A time in seconds vs. TCP and HTTP traffic graph generated by Wireshark's "I/O Graph" function
  • A time in seconds vs. resolution graph generated by Python's matplotlib and pandas based on a dataset exported by Wireshark that contains all filtered HTTP requests made by the player on Host 2

Using 5 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 5 hosts test

Graph showing time vs. playback resolution for TCP with 5 hosts test

Using 10 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 10 hosts test

Graph showing time vs. playback resolution for TCP with 10 hosts test

Using 15 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 15 hosts test

Graph showing time vs. playback resolution for TCP with 15 hosts test

Using 20 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 20 hosts test

Graph showing time vs. playback resolution for TCP with 20 hosts test

Using 25 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 25 hosts test

Graph showing time vs. playback resolution for TCP with 25 hosts test

Using 30 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 30 hosts test

Graph showing time vs. playback resolution for TCP with 30 hosts test

Using 35 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 35 hosts test

Graph showing time vs. playback resolution for TCP with 35 hosts test

Using 40 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 40 hosts test

Graph showing time vs. playback resolution for TCP with 40 hosts test

Using 45 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 45 hosts test

Graph showing time vs. playback resolution for TCP with 45 hosts test

Using 50 downloader hosts

Wireshark graph showing HTTP traffic from streaming (in blue) vs. TCP download traffic (in red) for TCP with 50 hosts test

Graph showing time vs. playback resolution for TCP with 50 hosts test

Result interpretation

As previously discussed, the use of HTTP traffic, and therefore TCP, as the transport medium for both video playback and file downloads allowed the system to leverage the advantages of this transport layer protocol. Congestion regulation and network flow management worked together to regulate resource competition between hosts, thereby establishing TCP fairness in the system.

The benefits of this principle were evident in how bandwidth was evenly distributed among downloader hosts: each one of them continued their task, regardless of its speed. However, the most significant improvement was observed in video playback. Whereas in the UDP trial, playback would sometimes completely stop under extreme network stress, I now observed that playback continued, even under duress, and the player kept requesting video segments despite only a small amount of bandwidth allocated to that process.

Conclusions

The aim of this project was to explore the feasibility of using software-defined networks to implement modern Internet services, such as an adaptive streaming service. I sought to gather data to determine the validity of this hypothesis by constructing a video streaming system atop a software-emulated software-defined network. This system underwent two distinct types of tests aimed at measuring performance and observable Quality of Experience (QoE) in network traffic scenarios akin to real-life conditions, utilizing two of the most widely used network communication protocols: TCP and UDP.

The experiments underscore the importance of fine-tuning network management and policy configurations to accommodate the inherently variable nature of UDP traffic. The lack of QoE guarantees inherent in UDP's best-effort design necessitates additional measures to maintain satisfactory user experiences. In contrast, TCP's robustness shines through, showcasing its ability to seamlessly integrate with SDNs and uphold QoE standards even in demanding network conditions. While TCP demonstrates clear advantages over UDP in these controlled experiments, it's essential to acknowledge the potential complexities of real-world network scenarios. In such environments, a multitude of protocols and traffic types interact, posing unique challenges that may not be fully addressed by TCP alone.

Therefore, while these findings are promising, they also highlight the need for further exploration and refinement, particularly in understanding how different traffic types interact and affect overall network performance.

Software-defined networks offer a revolutionary approach to modernizing internet infrastructure, providing unprecedented control and adaptability. Despite time constraints limiting a comprehensive exploration, it's clear that SDNs, especially with controllers like OpenDaylight, offer granular control over network behavior and comprehensive monitoring capabilities through tools like Mininet. This flexibility enables tailored network configurations to meet the evolving demands of cloud-based applications, where processing requirements continue to grow.

However, the journey toward fully realizing the potential of SDNs is still ongoing. A notable challenge is the lack of comprehensive support material, hindering the adoption and troubleshooting processes. This deficiency, particularly pronounced with tools like OpenDaylight, undermines confidence in delving deeper into the technology. Despite these hurdles, initial interactions with SDNs suggest a promising future, with ample room for development and exploration.

To sum up, I can conclude that it is feasible to build a video transmission system over a software-defined network. Quality assurance mechanisms, such as buffering and adaptive quality, are extremely useful for achieving this. Software-defined networks offer internet services a robust way to modernize themselves and pave the way for the construction of complex services that handle large amounts of data.

There is still much to explore in this field, such as performance comparisons with a traditional network topology, thorough exploration of the customization, control, and management capabilities of OpenDaylight, or the use of these tools in different types of applications.

References

E. Liotou, K. Chatzieleftheriou, G. Christodoulou, N. Passas and L. Merakos, "Emulation of HTTP Adaptive Video Streaming over SDN," 2021 IEEE International Mediterranean Conference on Communications and Networking (MeditCom), Athens, Greece, 2021, pp. 144-149, doi: 10.1109/MeditCom49071.2021.9647445.

Goransson, P., Culver, T., & Black, C. (2016). Software defined networks: A Comprehensive Approach. Morgan Kaufmann Publishers.

Kurose, J. F., & Ross, K. W. (2021). Computer Networking: A Top-Down Approach. Pearson.

Azodolmolky, S. (2013). Software Defined Networking with OpenFlow. Packt Pub Limited.

Dostálek, L., & Kabelová, A. (2006). Understanding TCP/IP: A Clear and Comprehensive Guide to TCP/IP Protocols. Packt Pub Limited.

Hu, F. (2014). Network Innovation through OpenFlow and SDN: Principles and Design. CRC Press.

Nayyar, A., Nagrath, P., & Singla, B. (2022). Software defined networks: Architecture and Applications. Wiley-Scrivener.

Townsley, D. (2001, October 9). TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581. University of Maryland. Retrieved March 13, 2024, from https://www.cs.umd.edu/~shankar/417-F01/Slides/chapter3b/index.htm

Thanks

As this project comes to a close, so does my time as an undergraduate student. Many times during this journey, I found myself lacking the strength to move forward. It is thanks to those that lent me some of theirs that I am able to stand where I am today. Hence, I'd like to express my gratitude to the following:

To my loving family, Elizabeth, Mario, Daniel, Frida and Mila, who supported me in every step of the way, in every sense of the word, even during my toughest moments.

To my dearest friends Jerome, Harry, Sebastian, Erin and Alex, with whom I've found a place of acceptance, understanding and a lot of fun, and with whom I hope to cross paths in real life soon.

To Haruko Uesaka (hàl), whose wonderful music significantly altered the course of my life and provided much needed emotional sustenance.

I'd also like to thank Dr. Adán Medrano for the guidance, kindness and encouragement.
