Coordination layer integration
The purpose of this guide is to give platform Administrators a general idea of how to define test cases specifically tailored to their particular platform: which tools are available by default in the coordination layer, and which gaps need to be filled by the platforms themselves. This guide assumes a certain familiarity with the components of Release A. In particular, it is expected that:
- The Administrator has a Release A deployment available, or is able to create a new one.
- The Administrator knows the procedure for linking a test case definition in the ELCM with the corresponding entry in the Portal.
- The Administrator knows how to create experiments in the Portal, run them in the ELCM and review the generated logs.
In general, if the Administrator was able to execute the common test cases reported in D6.2, they should be able to follow the instructions in this guide. Here are some extra details about this guide in no particular order:
- It is not necessary to use a production deployment in order to complete these exercises. All the components can be deployed on a single (virtual) machine running Windows or Linux.
- Since all platforms performed the common test cases in D6.2 using OpenTAP test plans, OpenTAP is purposely left out of this guide in order to showcase alternative methods for implementing test cases.
- The exercises make use of common utilities (ping, iPerf) along with the iPerf remote agent. These are used without the full integration provided by the OpenTAP plugins, so that the Administrator can see the areas where extra integration work needs to be performed by the platforms.
- This guide does not detail how to perform this extra integration work. More information about this is available in Deliverable D5.3 and will be discussed during the next WP5 meeting.
- New tasks are available as part of Release B, along with some extra functionality. However, since the procedure for using them and for defining test cases remains unchanged, the experience obtained by following this guide is directly applicable once the new release is available.
The exercises below require:
- A running deployment of the Portal and ELCM (Release A)
- A running instance of the iPerf remote agent
The following exercise shows how it is possible to perform a simple ping test using only the functionality provided by the ELCM and the system applications.
- Create a new yml file in the TestCases folder of the ELCM. Populate it with the following contents:
```yaml
PingTest:
Dashboard: {}
```
- The first line defines the name of the test (must be equal to the name included in the Portal configuration).
- The second line defines the Grafana dashboard, which is purposefully left empty.
- Inside the `PingTest` key, define the first of the actions. We will use the CliExecute task to run ping and save the output to a file:
```yaml
PingTest:
    - Order: 6
      Task: Run.CliExecute
      Config:
        Parameters: powershell "ping -n 4 8.8.8.8 > test.txt"
        CWD: C:\5GENESIS
Dashboard: {}
```
- Pay attention to the indentation, and modify the value of `CWD` to a folder where you have write permission.
- The example above shows the ping command for Windows, using PowerShell as the CLI. On Linux the equivalent command may be `bash -c "ping -c 4 8.8.8.8 > test.txt"` (a complete Linux variant of the action is sketched below).
- Ensure that you limit the execution of ping with -n/-c, otherwise the command (and the experiment) will never end.
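For reference, a complete Linux variant of the action above might look as follows. This is only a sketch: the `CWD` value is an example path, so use any folder where the ELCM has write permission:

```yaml
PingTest:
    - Order: 6
      Task: Run.CliExecute
      Config:
        Parameters: bash -c "ping -c 4 8.8.8.8 > test.txt"
        CWD: /home/elcm/tests  # Example path, customize as needed
Dashboard: {}
```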
- Since the output from the previous command is redirected to a file, it will not be shown in the logs. We can add a second action that displays the contents of the file afterwards:
```yaml
PingTest:
    - Order: 6
      Task: Run.CliExecute
      Config:
        Parameters: powershell "ping -n 4 8.8.8.8 > test.txt"
        CWD: C:\5GENESIS
    - Order: 7
      Task: Run.CliExecute
      Config:
        Parameters: powershell "cat test.txt"
        CWD: C:\5GENESIS
Dashboard: {}
```
- In this case the exact value of `Order` is not important; just ensure that the second action has a higher value than the first.
- The command for Linux may be `bash -c "cat test.txt"`.
- In this case we make use of an application (`ping`) without any kind of integration with the 5Genesis platforms, but it may be any configuration or measurement tool for any equipment.
- For this example we have limited the retrieval of results to simply adding the command line output to the logs. In order to further integrate with the platform (in particular with the Analytics module), the platform could develop a script that parses the generated file and sends the values to the InfluxDb database, as sketched below. This script must be customized to the format and kind of data generated by the equipment.
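A minimal sketch of such a parsing script is shown below, assuming the Linux ping output format and an InfluxDB v1 instance reachable on localhost; the database name (`experiments`) and measurement name (`ping`) are placeholders that each platform would replace with its own conventions:

```bash
#!/bin/bash
# Hypothetical parse_ping.sh (sketch only): extract the average RTT from the
# Linux ping output saved in test.txt and push it to InfluxDB via the v1 HTTP API.
AVG_RTT=$(grep 'rtt min/avg/max' test.txt | cut -d '/' -f 5)
curl -XPOST "http://localhost:8086/write?db=experiments" \
     --data-binary "ping,host=$(hostname) avg_rtt=${AVG_RTT}"
```

Such a script could then be invoked from an additional `Run.CliExecute` action, in the same way as the ping and cat commands above.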
This exercise shows how to use more complex functionality provided by the command line and the ELCM tasks. In this example we make use of the iPerf agent for PC, as well as the iPerf executable directly. For simplicity all the components are installed on the same machine, but the procedure should be similar otherwise.
- Create a new yml file in the TestCases folder of the ELCM. Populate it with the following contents:
```yaml
iPerfTest:
Dashboard: {}
```
- We are aware that we may need to change the port where the iPerf agent is listening in the future. For this reason, we will make use of the Publish task to define it, and variable expansion to replace the value in the rest of the tasks. Add this task below the `iPerfTest` key:
```yaml
    - Order: 5
      Task: Run.Publish
      Config:
        AgentPort: 5555 # Customize as needed
```
- Now we need to activate the iPerf server using the provided REST API. For this we can use `curl` on Linux or `Invoke-RestMethod` on Windows. Since in both cases we need to handle string escaping, we decide to use a script with a configurable `port` parameter in order to ease the process. Create a file named `start_iPerf.ps1` (or `.sh`) and populate it with the following content:
```powershell
$port=$args[0]
Invoke-RestMethod -Uri "http://localhost:$port/Iperf" -Method POST -Body '["-s","-p","8888"]' -ContentType "application/json"
```
For Linux:

```bash
port=$1
curl -X POST -H "Content-Type: application/json" -d '["-s","-p","8888"]' "http://localhost:$port/Iperf"
```
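If you want to check the script on its own before adding it to the test case, it can be invoked manually (assuming the agent from the requirements above is running and listening on port 5555):

```bash
# Linux: ask the agent to start an iPerf server instance
bash start_iPerf.sh 5555
# Windows (PowerShell): the equivalent call
# ./start_iPerf.ps1 5555
```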
- If you experience issues while starting the iPerf server, try changing the format of the parameters to `'["-s","-p 8888"]'`.
- This will create an iPerf instance that listens on port 8888. This port is different from the one used by the agent itself, which is 5555 in our example.
- Now we add a call to this script. Add the following task below the `Publish` task:
```yaml
    - Order: 6
      Task: Run.CliExecute
      Config:
        Parameters: |
          powershell ./start_iperf.ps1 @[AgentPort]
        CWD: C:\5GENESIS
```
- Customize the paths as needed. `@[AgentPort]` is the variable expansion for the value we defined in the `Publish` task. You can also define a default value if needed, which will be used when the value has not been defined beforehand, for example `@[AgentPort:5000]` (see the snippet below).
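As an illustration (not part of the exercise), the previous task could fall back to a default port as follows:

```yaml
    # Hypothetical variant: if no Publish task has defined AgentPort,
    # the default value after the colon (5000) is used instead
    - Order: 6
      Task: Run.CliExecute
      Config:
        Parameters: powershell ./start_iperf.ps1 @[AgentPort:5000]
        CWD: C:\5GENESIS
```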
- We now use the iPerf executable directly to start the client. Add the following task:
```yaml
    - Order: 8
      Task: Run.CliExecute
      Config:
        Parameters: powershell "./iperf.exe -c 127.0.0.1 -p 8888 -i 1 -t 4"
        CWD: C:\5GENESIS # Customize as needed
```
The client will run for 4 seconds. The results from the client will appear directly on the console and in the logs.
- Before we can retrieve the results from the server, we need to disable the instance running on the iPerf agent. For these two actions we use the REST API again, but since the calls are much simpler we decide to write the commands directly. Add the following two tasks to the test case:
```yaml
    - Order: 9
      Task: Run.CliExecute
      Config:
        Parameters: powershell "Invoke-RestMethod -Uri http://localhost:@[AgentPort]/Close -Method GET"
        # Or bash -c "curl -X GET http://localhost:@[AgentPort]/Close"
        CWD: C:\5GENESIS
    - Order: 10
      Task: Run.CliExecute
      Config:
        Parameters: powershell "Invoke-RestMethod -Uri http://localhost:@[AgentPort]/LastJsonResult -Method GET"
        # Or bash -c "curl -X GET http://localhost:@[AgentPort]/LastJsonResult"
        CWD: C:\5GENESIS
```
The complete test case definition is as follows:
```yaml
iPerfTest:
    - Order: 5
      Task: Run.Publish
      Config:
        AgentPort: 5555 # Customize as needed
    - Order: 6
      Task: Run.CliExecute
      Config:
        Parameters: |
          powershell ./start_iperf.ps1 @[AgentPort]
        CWD: C:\5GENESIS
    - Order: 8
      Task: Run.CliExecute
      Config:
        Parameters: powershell "./iperf.exe -c 127.0.0.1 -p 8888 -i 1 -t 4"
        CWD: C:\5GENESIS # Customize as needed
    - Order: 9
      Task: Run.CliExecute
      Config:
        Parameters: powershell "Invoke-RestMethod -Uri http://localhost:@[AgentPort]/Close -Method GET"
        # Or bash -c "curl -X GET http://localhost:@[AgentPort]/Close"
        CWD: C:\5GENESIS
    - Order: 10
      Task: Run.CliExecute
      Config:
        Parameters: powershell "Invoke-RestMethod -Uri http://localhost:@[AgentPort]/LastJsonResult -Method GET"
        # Or bash -c "curl -X GET http://localhost:@[AgentPort]/LastJsonResult"
        CWD: C:\5GENESIS
Dashboard: {}
```
- The Publish task can also be used for selecting different equipment: for example, consider that we define two UEs that correspond to two different iPerf agents. By selecting one or the other while defining the experiment in the Portal, we can change the address we use without modifying the test case or creating a new one (see the first sketch below). Just ensure that the tasks that configure the UEs have a lower `Order` than those that run the test case actions.
- In this case we control some equipment using a particular interface (REST) and create a custom script (start_iPerf) for controlling it. This script is very simple since it makes use of utilities readily available in a normal Linux or Windows deployment; however, it shows the necessity of creating custom tools for handling the control of heterogeneous equipment.
- We see again the necessity of converting heterogeneous results to a format compatible with the Analytics module: the agent returns a mildly preprocessed JSON, while the iPerf executable output would need to be parsed completely.
- It is important to use blocking and non-blocking actions where necessary: in this case we use the agent to start one of the iPerf instances (non-blocking), and then use iPerf directly for 4 seconds (blocking).
- If you need to run several processes in parallel, consider wrapping them in a similar way to the remote agents, or consider using parallel steps in a TAP testplan.
- We could have used two different iPerf agents for the client and the server. In this case there is no blocking action that would create the 4-second delay, so an alternative way to force the wait should be in place. On Release B a `Delay` task is available, while on Release A this may be achieved using shell scripting (see the second sketch below).
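The following is a minimal sketch of the UE-selection idea mentioned above. It assumes that UE definitions in the ELCM use the same task-list format as test cases; the folder, file and variable names shown here are illustrative, so check your Release A deployment for the exact conventions:

```yaml
# Hypothetical UEs/UE_Alpha.yml (illustrative only): publishes the address and
# port of the iPerf agent associated with this UE. A second file (UE_Beta.yml)
# would publish a different AgentAddress.
UE_Alpha:
    - Order: 1
      Task: Run.Publish
      Config:
        AgentAddress: 192.168.10.11  # Assumed address of this UE's iPerf agent
        AgentPort: 5555
```

The test case tasks would then use `@[AgentAddress]` and `@[AgentPort]` instead of hard-coded values, so the same definition works with whichever UE is selected in the Portal.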
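And this is one possible way of forcing the wait on Release A using only shell commands, by inserting an extra blocking task between the (non-blocking) client and the retrieval of results; renumber the `Order` values of the later tasks as needed:

```yaml
    # Extra task that simply blocks for 4 seconds, giving a non-blocking
    # iPerf client time to finish before the results are retrieved
    - Order: 9
      Task: Run.CliExecute
      Config:
        Parameters: powershell "Start-Sleep -Seconds 4"
        # Or on Linux: bash -c "sleep 4"
        CWD: C:\5GENESIS
```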