*** Settings ***
Documentation     ONOS Switch Scale Test
Suite Setup       ONOS Suite Setup    ${CONTROLLER_IP}    ${CONTROLLER_USER}
Suite Teardown    ONOS Suite Teardown
Library           SSHLibrary
Library           Collections
Library           OperatingSystem
Library           String
Library           RequestsLibrary
Library           HttpLibrary.HTTP

*** Variables ***
##Grab the environment variables sourced from your "cell"
${CONTROLLER_IP}    %{OC1}
${MININET_IP}    %{OCN}
${CONTROLLER_USER}    %{ONOS_USER}
${MININET_USER}    %{ONOS_USER}
##USER_HOME is used for the public key
${USER_HOME}    %{HOME}
##ONOS_HOME is where the onos dist will be deployed on the controller vm
${ONOS_HOME}    /opt/onos
${RESTCONFPORT}    8181
${LINUX_PROMPT}    $
##SWITCHES_RESULT_FILE and JENKINS_WORKSPACE are configurable; see the overriding variables section in the README
##SWITCHES_RESULT_FILE is used to plot data. You can use a Jenkins post-build job for this or do it manually
##NOTE: This file must exist, otherwise the test will fail when trying to write to it
${SWITCHES_RESULT_FILE}    ${USER_HOME}/workspace/tools/switches.csv
${JENKINS_WORKSPACE}    ${USER_HOME}/workspace/ONOS-Stable/
${prompt_timeout}    30s
${start}    10
${end}    100
${increments}    10
##Number of nodes in the cluster. To add more nodes, create CONTROLLER_IP2/3/4 etc. variables above and change this cluster variable
${cluster}    1

*** Test Cases ***
Find Max Switches By Scaling
    [Documentation]    Find the max number of switches, scaling from ${start} until reaching ${end} in steps of ${increments}. The following checks are made through REST APIs:
    ...    -\ Verify device count is correct
    ...    -\ Verify device status is available
    ...    -\ Verify device roles are MASTER (the default role in case of a standalone controller)
    ...    -\ Verify topology recognizes the correct number of devices (through the "/topology" API)
    ...    -\ Observe each device individually
    ...    -\ Observe links, hosts, and flows through the controller
    ...    -\ Observe device info at a lower level on mininet (written for PoC). Shows flows, links, and ports. Checks can easily be implemented at that level as well
    ...    -\ Stop the Mininet topo
    ...    -\ Verify device count is zero
    ...    -\ Verify topology sees no devices (through the "/topology" API)
    [Tags]    done
    Append To File    ${SWITCHES_RESULT_FILE}    Max Switches Linear Topo\n
    ${max}=    Find Max Switches    ${start}    ${end}    ${increments}
    Log    ${max}
    Append To File    ${SWITCHES_RESULT_FILE}    ${max}\n

*** Keywords ***
ONOS Suite Setup
    [Arguments]    ${controller}    ${user}
    [Documentation]    Transfers the ONOS dist over to the test vm and starts the controller. We leverage the bash script "onos-install" to do this.
    Create Controller IP List
    ${rc}=    Run and Return RC    onos-package
    Should Be Equal As Integers    ${rc}    0
    : FOR    ${ip}    IN    @{controller_list}
    \    ${rc}=    Run and Return RC    onos-install -f ${ip}
    \    Should Be Equal As Integers    ${rc}    0
    Create HTTP Sessions
    Wait Until Keyword Succeeds    60s    2s    Verify All Controllers Are Up
    #If creating a cluster, create a keyword and call it here
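    #A cluster-forming keyword is not implemented in this suite; a hypothetical call might look like:
    #Form ONOS Cluster    @{controller_list}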

ONOS Suite Teardown
    [Documentation]    Stops ONOS on the controller VMs and grabs the karaf logs from each controller (they are put in /tmp)
    ${rc}=    Run and Return RC    onos-kill
    #Should Be Equal As Integers    ${rc}    0
    ${rc}=    Run and Return RC    cp ${SWITCHES_RESULT_FILE} ${JENKINS_WORKSPACE}
    Should Be Equal As Integers    ${rc}    0
    ${rc}=    Run and Return RC    rm ${SWITCHES_RESULT_FILE}
    Should Be Equal As Integers    ${rc}    0
    Clean Mininet System
    : FOR    ${ip}    IN    @{controller_list}
    \    Get Karaf Logs    ${ip}

Create Controller IP List
    [Documentation]    Creates a list of controller IPs for the cluster. When creating a cluster, be sure to set each variable to the corresponding %{OC} env var in the variables section
    @{controller_list}=    Create List    ${CONTROLLER_IP}
    Set Suite Variable    @{controller_list}
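    #Hypothetical example for a 3-node cluster (assumes ${CONTROLLER_IP2} and ${CONTROLLER_IP3} are defined in the variables section):
    #@{controller_list}=    Create List    ${CONTROLLER_IP}    ${CONTROLLER_IP2}    ${CONTROLLER_IP3}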

Create HTTP Sessions
    [Documentation]    Creates an HTTP session with each controller in the cluster. Session names are set to the respective IPs.
    ${HEADERS}=    Create Dictionary    Content-Type    application/json
    : FOR    ${ip}    IN    @{controller_list}
    \    Create Session    ${ip}    http://${ip}:${RESTCONFPORT}    headers=${HEADERS}

Find Max Switches
    [Arguments]    ${start}    ${stop}    ${step}
    [Documentation]    Finds the max number of switches, starting from ${start} and going up to ${stop} in steps of ${step}
    ${max-switches}    Set Variable    ${0}
    ${start}    Convert to Integer    ${start}
    ${stop}    Convert to Integer    ${stop}
    ${step}    Convert to Integer    ${step}
    : FOR    ${switches}    IN RANGE    ${start}    ${stop+1}    ${step}
    \    Start Mininet Linear    ${switches}
    \    ${status}    ${result}    Run Keyword And Ignore Error    Verify Controllers are Not Dead
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure Switch Count    ${switches}
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure Switches are Available
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure Switch Role    MASTER
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure Topology    ${switches}    ${cluster}
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Experiment Links, Hosts, and Flows
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Check Each Switch Individually
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Check Mininet at Lower Level
    \    Stop Mininet
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure No Switches are Available
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure No Switches in Topology    ${cluster}
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${max-switches}    Convert To String    ${switches}
    [Return]    ${max-switches}

Run Command On Remote System
    [Arguments]    ${remote_system}    ${cmd}    ${user}=${CONTROLLER_USER}    ${prompt}=${LINUX_PROMPT}    ${prompt_timeout}=30s
    [Documentation]    Reduces the common work of running a command on a remote system to a single higher-level robot keyword. It logs in with a public key,
    ...    writes the given command, and returns the output. No test conditions are checked.
    Log    Attempting to execute ${cmd} on ${remote_system}
    ${conn_id}=    SSHLibrary.Open Connection    ${remote_system}    prompt=${prompt}    timeout=${prompt_timeout}
    Login With Public Key    ${user}    ${USER_HOME}/.ssh/id_rsa    any
    SSHLibrary.Write    ${cmd}
    ${output}=    SSHLibrary.Read Until    ${LINUX_PROMPT}
    SSHLibrary.Close Connection
    Log    ${output}
    [Return]    ${output}
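    #Example usage (illustrative command only):
    #${output}=    Run Command On Remote System    ${MININET_IP}    uname -a    ${MININET_USER}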

Start Mininet Linear
    [Arguments]    ${switches}
    [Documentation]    Starts a mininet linear topology with ${switches} switches
    Log To Console    \n
    Log To Console    Starting mininet linear ${switches}
    ${mininet_conn_id}=    Open Connection    ${MININET_IP}    prompt=${LINUX_PROMPT}    timeout=${switches*3}
    Set Suite Variable    ${mininet_conn_id}
    Login With Public Key    ${MININET_USER}    ${USER_HOME}/.ssh/id_rsa    any
    Write    sudo mn --controller=remote,ip=${CONTROLLER_IP} --topo linear,${switches} --switch ovsk,protocols=OpenFlow13
    Read Until    mininet>
    Sleep    6

Stop Mininet
    [Documentation]    Stops the mininet topology
    Log To Console    Stopping Mininet
    Switch Connection    ${mininet_conn_id}
    Read
    Write    exit
    Read Until    ${LINUX_PROMPT}
    Close Connection

Check Mininet at Lower Level
    [Documentation]    PoC for executing mininet commands at the lower level
    Switch Connection    ${mininet_conn_id}
    Read
    Write    dpctl dump-flows -O OpenFlow13
    ${output}=    Read Until    mininet>
    Log    ${output}
    Write    dump
    ${output}=    Read Until    mininet>
    Log    ${output}
    Write    links
    ${output}=    Read Until    mininet>
    Log    ${output}
    Write    ports
    ${output}=    Read Until    mininet>
    Log    ${output}

Clean Mininet System
    [Arguments]    ${mininet_system}=${MININET_IP}
    [Documentation]    Cleans the mininet environment (sudo mn -c)
    Run Command On Remote System    ${mininet_system}    sudo mn -c    ${CONTROLLER_USER}    ${LINUX_PROMPT}    600s

Verify All Controllers Are Up
    [Documentation]    Verifies each controller is up by issuing a REST call and expecting a 200
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}=    ONOS Get    ${ip}    devices
    \    Should Be Equal As Strings    ${resp.status_code}    200

Verify Controllers are Not Dead
    [Documentation]    Verifies each controller is not dead by making sure the karaf log does not contain "OutOfMemoryError" and that a REST call still returns 200
    : FOR    ${ip}    IN    @{controller_list}
    \    Verify Controller Is Not Dead    ${ip}

Verify Controller Is Not Dead
    [Arguments]    ${controller}
    ${response}=    Run Command On Remote System    ${controller}    grep java.lang.OutOfMemoryError ${ONOS_HOME}/log/karaf.log
    Should Not Contain    ${response}    OutOfMemoryError
    ${resp}    RequestsLibrary.Get    ${controller}    /onos/v1/devices
    Log    ${resp.content}
    Should Be Equal As Strings    ${resp.status_code}    200

Experiment Links, Hosts, and Flows
    [Documentation]    Currently this only logs the information the controller has on links, hosts, and flows. Checks can easily be implemented
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}=    ONOS Get    ${ip}    links
    \    ${jsondata}    To JSON    ${resp.content}
    \    ${length}=    Get Length    ${jsondata['links']}
    \    Log    ${resp.content}
    \    ${resp}=    ONOS Get    ${ip}    flows
    \    ${jsondata}    To JSON    ${resp.content}
    \    ${length}=    Get Length    ${jsondata['flows']}
    \    Log    ${resp.content}
    \    ${resp}=    ONOS Get    ${ip}    hosts
    \    ${jsondata}    To JSON    ${resp.content}
    \    ${length}=    Get Length    ${jsondata['hosts']}
    \    Log    ${resp.content}

Ensure Topology
    [Arguments]    ${device_count}    ${cluster_count}
    [Documentation]    Verifies the device count through the /topology API. Currently, the cluster count is inconsistent (possible bug, will look into it), so that check is ignored, but the value is logged
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}=    ONOS Get    ${ip}    topology
    \    Log    ${resp.content}
    \    ${devices}=    Get Json Value    ${resp.content}    /devices
    \    ${clusters}=    Get Json Value    ${resp.content}    /clusters
    \    Should Be Equal As Strings    ${devices}    ${device_count}
    \    #Should Be Equal As Strings    ${clusters}    ${cluster_count}

Ensure No Switches in Topology
    [Arguments]    ${cluster_count}
    [Documentation]    Verifies the topology sees no devices
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}=    ONOS Get    ${ip}    topology
    \    Log    ${resp.content}
    \    ${devices}=    Get Json Value    ${resp.content}    /devices
    \    ${clusters}=    Get Json Value    ${resp.content}    /clusters
    \    Should Be Equal As Strings    ${devices}    0
    \    #Should Be Equal As Strings    ${clusters}    ${cluster_count}

ONOS Get
    [Arguments]    ${session}    ${noun}
    [Documentation]    Common keyword to issue GET requests to the controller. Arguments are the session (controller IP) and the API path
    ${resp}    RequestsLibrary.Get    ${session}    /onos/v1/${noun}
    Log    ${resp.content}
    [Return]    ${resp}
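    #Example usage (illustrative):
    #${resp}=    ONOS Get    ${CONTROLLER_IP}    devices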

Ensure Switch Count
    [Arguments]    ${switch_count}
    [Documentation]    Verifies that the device count (passed in as an arg) on each controller is accurate.
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Log    ${resp.content}
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    Should Not Be Empty    ${resp.content}
    \    ${jsondata}    To JSON    ${resp.content}
    \    ${length}=    Get Length    ${jsondata['devices']}
    \    Should Be Equal As Integers    ${length}    ${switch_count}

Ensure Switches are Available
    [Documentation]    Verifies that the availability state is true for every switch on all controllers
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Log    ${resp.content}
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    Should Not Be Empty    ${resp.content}
    \    ${jsondata}    To JSON    ${resp.content}
    \    #Robot tweak to do a nested for loop
    \    Check Each Switch Status    ${jsondata}    True

Check Each Switch Status
    [Arguments]    ${jdata}    ${bool}
    ${length}=    Get Length    ${jdata['devices']}
    : FOR    ${INDEX}    IN RANGE    0    ${length}
    \    ${data}=    Get From List    ${jdata['devices']}    ${INDEX}
    \    ${status}=    Get From Dictionary    ${data}    available
    \    Should Be Equal As Strings    ${status}    ${bool}

Ensure No Switches are Available
    [Documentation]    Verifies that the availability state is false for every switch on all controllers
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Log    ${resp.content}
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    Should Not Be Empty    ${resp.content}
    \    ${jsondata}    To JSON    ${resp.content}
    \    Check Each Switch Status    ${jsondata}    False

Check Each Switch Individually
    [Documentation]    Currently just observes the information each device has. Checks can easily be implemented
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    ${jsondata}    To JSON    ${resp.content}
    \    #Robot tweak to do a nested for loop
    \    Check Each Switch    ${jsondata}    ${ip}

Check Each Switch
    [Arguments]    ${jdata}    ${ip}
    ${length}=    Get Length    ${jdata['devices']}
    @{dpid_list}=    Create List
    : FOR    ${INDEX}    IN RANGE    0    ${length}
    \    ${devicedata}=    Get From List    ${jdata['devices']}    ${INDEX}
    \    ${id}=    Get From Dictionary    ${devicedata}    id
    \    Append To List    ${dpid_list}    ${id}
    \    Log    ${dpid_list}
    ${length}=    Get Length    ${dpid_list}
    : FOR    ${i}    IN    @{dpid_list}
    \    ${resp}    ONOS Get    ${ip}    devices/${i}
    \    Log    ${resp.content}

Ensure Switch Role
    [Arguments]    ${role}
    [Documentation]    Verifies that each controller reports the given role (${role}) for every switch. MASTER is the default in standalone mode
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Log    ${resp.content}
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    Should Not Be Empty    ${resp.content}
    \    ${jsondata}    To JSON    ${resp.content}
    \    Ensure Role    ${jsondata}    ${role}

Ensure Role
    [Arguments]    ${jdata}    ${role}
    ${length}=    Get Length    ${jdata['devices']}
    : FOR    ${INDEX}    IN RANGE    0    ${length}
    \    ${data}=    Get From List    ${jdata['devices']}    ${INDEX}
    \    ${status}=    Get From Dictionary    ${data}    role
    \    Should Be Equal As Strings    ${status}    ${role}

Get Karaf Logs
    [Arguments]    ${controller}
    [Documentation]    Compresses all the karaf log files on each controller and puts them on your pybot execution machine (in /tmp)
    Run Command On Remote System    ${controller}    tar -zcvf /tmp/${SUITE NAME}.${controller}.tar.gz ${ONOS_HOME}/log
    SSHLibrary.Open Connection    ${controller}    prompt=${LINUX_PROMPT}    timeout=${prompt_timeout}
    Login With Public Key    ${CONTROLLER_USER}    ${USER_HOME}/.ssh/id_rsa    any
    SSHLibrary.Get File    /tmp/${SUITE NAME}.${controller}.tar.gz    /tmp/
    SSHLibrary.Close Connection