*** Settings ***
Documentation     ONOS Switch Scale Test
Suite Setup       ONOS Suite Setup    ${CONTROLLER_IP}    ${CONTROLLER_USER}
Suite Teardown    ONOS Suite Teardown
Library           SSHLibrary
Library           Collections
Library           OperatingSystem
Library           String
Library           RequestsLibrary
Library           HttpLibrary.HTTP

*** Variables ***
##Grab the environment variables sourced from your "cell"
${CONTROLLER_IP}    %{OC1}
${MININET_IP}    %{OCN}
${CONTROLLER_USER}    %{ONOS_USER}
${MININET_USER}    %{ONOS_USER}
##USER_HOME is used for the public key
${USER_HOME}    /home/fedora
##ONOS_HOME is where the onos dist will be deployed on the controller vm
${ONOS_HOME}    /opt/onos
${RESTCONFPORT}    8181
${LINUX_PROMPT}    $
##SWITCHES_RESULT_FILE and JENKINS_WORKSPACE are configurable...see overriding variables in the README
##SWITCHES_RESULT_FILE is used to plot data. You can use a Jenkins post-build job for this or do it manually
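##Each run appends a header line ("Max Switches Linear Topo") followed by the measured max, so the csv grows by one data point per run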
${SWITCHES_RESULT_FILE}    ${USER_HOME}/workspace/tools/switches.csv
${JENKINS_WORKSPACE}    ${USER_HOME}/workspace/ONOS-Stable/
${prompt_timeout}    30s
${start}    10
${end}    100
${increments}    10
##Number of nodes in the cluster. To add more nodes, create CONTROLLER_IP2/3/4 etc. variables above and change this cluster variable
${cluster}    1
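##Example (hypothetical) three-node configuration:
##${CONTROLLER_IP2}    %{OC2}
##${CONTROLLER_IP3}    %{OC3}
##${cluster}    3
##...then add the new IPs to the list built in "Create Controller IP List"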

*** Test Cases ***
Find Max Switches By Scaling
    [Documentation]    Find the max number of switches from ${start} until reaching ${end}, in steps of ${increments}. The following checks are made through REST APIs:
    ...    -\ Verify device count is correct
    ...    -\ Verify device status is available
    ...    -\ Verify device roles are MASTER (the default role in case of a standalone controller)
    ...    -\ Verify the topology recognizes the correct number of devices (through the "/topology" api)
    ...    -\ Observe each device individually
    ...    -\ Observe links, hosts, and flows through the controller
    ...    -\ Observe device info at a lower level on mininet (written as a PoC). Shows flows, links, and ports. Checks can easily be implemented at that level as well
    ...    -\ Stop the mininet topo
    ...    -\ Verify the device count is zero
    ...    -\ Verify the topology sees no devices (through the "/topology" api)
    [Tags]    done
    Append To File    ${SWITCHES_RESULT_FILE}    Max Switches Linear Topo\n
    ${max}=    Find Max Switches    ${start}    ${end}    ${increments}
    Log    ${max}
    Append To File    ${SWITCHES_RESULT_FILE}    ${max}\n

*** Keywords ***
ONOS Suite Setup
    [Arguments]    ${controller}    ${user}
    [Documentation]    Transfers the ONOS dist over to the test vm and starts the controller. We leverage the bash script "onos-install" to do this.
    Create Controller IP List
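    #onos-package builds the installable ONOS archive; onos-install -f then force-reinstalls it on each node in the list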
    ${rc}=    Run and Return RC    onos-package
    Should Be Equal As Integers    ${rc}    0
    : FOR    ${ip}    IN    @{controller_list}
    \    ${rc}=    Run and Return RC    onos-install -f ${ip}
    \    Should Be Equal As Integers    ${rc}    0
    Create HTTP Sessions
    Wait Until Keyword Succeeds    60s    2s    Verify All Controllers Are Up
    #If creating a cluster, create a keyword for it and call it here

ONOS Suite Teardown
    [Documentation]    Stops ONOS on the controller VMs and grabs the karaf logs from each controller (putting them in /tmp)
    ${rc}=    Run and Return RC    onos-kill
    #Should Be Equal As Integers    ${rc}    0
    ${rc}=    Run and Return RC    cp ${SWITCHES_RESULT_FILE} ${JENKINS_WORKSPACE}
    Should Be Equal As Integers    ${rc}    0
    ${rc}=    Run and Return RC    rm ${SWITCHES_RESULT_FILE}
    Should Be Equal As Integers    ${rc}    0
    Clean Mininet System
    : FOR    ${ip}    IN    @{controller_list}
    \    Get Karaf Logs    ${ip}

Create Controller IP List
    [Documentation]    Creates a list of controller ips for a cluster. When creating a cluster, be sure to set each variable to the %{OC} env vars in the variables section
    @{controller_list}=    Create List    ${CONTROLLER_IP}
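    #For a cluster, extend this list, e.g.: @{controller_list}=    Create List    ${CONTROLLER_IP}    ${CONTROLLER_IP2}    ${CONTROLLER_IP3}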
    Set Suite Variable    @{controller_list}

Create HTTP Sessions
    [Documentation]    Creates an http session with all controllers in the cluster. Session names are set to the respective ips.
    ${HEADERS}=    Create Dictionary    Content-Type    application/json
    : FOR    ${ip}    IN    @{controller_list}
    \    Create Session    ${ip}    http://${ip}:${RESTCONFPORT}    headers=${HEADERS}

Find Max Switches
    [Arguments]    ${start}    ${stop}    ${step}
    [Documentation]    Finds the max number of switches, starting from ${start} until reaching ${stop}, in steps defined by ${step}
    ${max-switches}    Set Variable    ${0}
    ${start}    Convert to Integer    ${start}
    ${stop}    Convert to Integer    ${stop}
    ${step}    Convert to Integer    ${step}
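    #Each verification below retries for up to ${switches*2} seconds (polling every 2s), since discovery takes longer on bigger topologies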
    : FOR    ${switches}    IN RANGE    ${start}    ${stop+1}    ${step}
    \    Start Mininet Linear    ${switches}
    \    ${status}    ${result}    Run Keyword And Ignore Error    Verify Controllers are Not Dead
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure Switch Count    ${switches}
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure Switches are Available
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure Switch Role    MASTER
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure Topology    ${switches}    ${cluster}
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Experiment Links, Hosts, and Flows
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Check Each Switch Individually
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Check Mininet at Lower Level
    \    Stop Mininet
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure No Switches are Available
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${status}    ${result}    Run Keyword And Ignore Error    Wait Until Keyword Succeeds    ${switches*2}    2s
    \    ...    Ensure No Switches in Topology    ${cluster}
    \    Exit For Loop If    '${status}' == 'FAIL'
    \    ${max-switches}    Convert To String    ${switches}
    [Return]    ${max-switches}

Run Command On Remote System
    [Arguments]    ${remote_system}    ${cmd}    ${user}=${CONTROLLER_USER}    ${prompt}=${LINUX_PROMPT}    ${prompt_timeout}=30s
    [Documentation]    Reduces the common work of running a command on a remote system to a single higher-level robot keyword.
    ...    It logs in with a public key, writes the given command, and returns the output. No test conditions
    ...    are checked.
    Log    Attempting to execute ${cmd} on ${remote_system}
    ${conn_id}=    SSHLibrary.Open Connection    ${remote_system}    prompt=${prompt}    timeout=${prompt_timeout}
    Login With Public Key    ${user}    ${USER_HOME}/.ssh/id_rsa    any
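    #Write the command, then read everything up to the next shell prompt and return it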
    SSHLibrary.Write    ${cmd}
    ${output}=    SSHLibrary.Read Until    ${LINUX_PROMPT}
    SSHLibrary.Close Connection
    Log    ${output}
    [Return]    ${output}

Start Mininet Linear
    [Arguments]    ${switches}
    [Documentation]    Starts a mininet linear topology with ${switches} switches
    Log To Console    \n
    Log To Console    Starting mininet linear ${switches}
    ${mininet_conn_id}=    Open Connection    ${MININET_IP}    prompt=${LINUX_PROMPT}    timeout=${switches*3}
    Set Suite Variable    ${mininet_conn_id}
    Login With Public Key    ${MININET_USER}    ${USER_HOME}/.ssh/id_rsa    any
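    #--topo linear,N creates N switches in a chain with one host each; the ovsk switches run OpenFlow 1.3 to match the controller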
    Write    sudo mn --controller=remote,ip=${CONTROLLER_IP} --topo linear,${switches} --switch ovsk,protocols=OpenFlow13
    Read Until    mininet>
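    #Fixed settle time, presumably to let switch handshakes finish before the first REST check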
    Sleep    6

Stop Mininet
    [Documentation]    Stops the mininet topology
    Log To Console    Stopping Mininet
    Switch Connection    ${mininet_conn_id}
    Read
    Write    exit
    Read Until    ${LINUX_PROMPT}
    Close Connection

Check Mininet at Lower Level
    [Documentation]    PoC for executing mininet commands at the lower level
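    #"dpctl" runs the given command on every switch; "dump", "links", and "ports" are standard mininet CLI introspection commands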
    Switch Connection    ${mininet_conn_id}
    Read
    Write    dpctl dump-flows -O OpenFlow13
    ${output}=    Read Until    mininet>
    Log    ${output}
    Write    dump
    ${output}=    Read Until    mininet>
    Log    ${output}
    Write    links
    ${output}=    Read Until    mininet>
    Log    ${output}
    Write    ports
    ${output}=    Read Until    mininet>
    Log    ${output}

Clean Mininet System
    [Arguments]    ${mininet_system}=${MININET_IP}
    [Documentation]    Cleans the mininet environment (sudo mn -c)
    Run Command On Remote System    ${mininet_system}    sudo mn -c    ${CONTROLLER_USER}    ${LINUX_PROMPT}    600s

Verify All Controllers Are Up
    [Documentation]    Verifies each controller is up by issuing a rest call and expecting a 200
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}=    ONOS Get    ${ip}    devices
    \    Should Be Equal As Strings    ${resp.status_code}    200

Verify Controllers are Not Dead
    [Documentation]    Verifies each controller is not dead by making sure its karaf log does not contain "OutOfMemoryError" and that a rest call still returns 200
    : FOR    ${ip}    IN    @{controller_list}
    \    Verify Controller Is Not Dead    ${ip}

Verify Controller Is Not Dead
    [Arguments]    ${controller}
    ${response}=    Run Command On Remote System    ${controller}    grep java.lang.OutOfMemoryError ${ONOS_HOME}/log/karaf.log
    Should Not Contain    ${response}    OutOfMemoryError
    ${resp}    RequestsLibrary.Get    ${controller}    /onos/v1/devices
    Log    ${resp.content}
    Should Be Equal As Strings    ${resp.status_code}    200

Experiment Links, Hosts, and Flows
    [Documentation]    Currently this only returns the information the controller has on links, hosts, and flows. Checks can easily be implemented
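    #To turn an observation into a check, assert on ${length}, e.g.: Should Be Equal As Integers    ${length}    ${expected_links} (hypothetical expected-value variable)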
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}=    ONOS Get    ${ip}    links
    \    ${jsondata}    To JSON    ${resp.content}
    \    ${length}=    Get Length    ${jsondata['links']}
    \    Log    ${resp.content}
    \    ${resp}=    ONOS Get    ${ip}    flows
    \    ${jsondata}    To JSON    ${resp.content}
    \    ${length}=    Get Length    ${jsondata['flows']}
    \    Log    ${resp.content}
    \    ${resp}=    ONOS Get    ${ip}    hosts
    \    ${jsondata}    To JSON    ${resp.content}
    \    ${length}=    Get Length    ${jsondata['hosts']}
    \    Log    ${resp.content}

Ensure Topology
    [Arguments]    ${device_count}    ${cluster_count}
    [Documentation]    Verifies the device count through the /topology api. Currently, the cluster count is inconsistent (possible bug, will look into it), so that check is ignored but logged
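    #/topology returns aggregate counts, e.g. {"time": ..., "devices": 10, "links": 18, "clusters": 1}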
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}=    ONOS Get    ${ip}    topology
    \    Log    ${resp.content}
    \    ${devices}=    Get Json Value    ${resp.content}    /devices
    \    ${clusters}=    Get Json Value    ${resp.content}    /clusters
    \    Should Be Equal As Strings    ${devices}    ${device_count}
    \    #Should Be Equal As Strings    ${clusters}    ${cluster_count}

Ensure No Switches in Topology
    [Arguments]    ${cluster_count}
    [Documentation]    Verifies the topology sees no devices
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}=    ONOS Get    ${ip}    topology
    \    Log    ${resp.content}
    \    ${devices}=    Get Json Value    ${resp.content}    /devices
    \    ${clusters}=    Get Json Value    ${resp.content}    /clusters
    \    Should Be Equal As Strings    ${devices}    0
    \    #Should Be Equal As Strings    ${clusters}    ${cluster_count}

ONOS Get
    [Arguments]    ${session}    ${noun}
    [Documentation]    Common keyword to issue GET requests to the controller. Arguments are the session (controller ip) and the api path
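    #Example: "ONOS Get    ${ip}    devices" issues GET http://${ip}:${RESTCONFPORT}/onos/v1/devices on that controller's session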
    ${resp}    RequestsLibrary.Get    ${session}    /onos/v1/${noun}
    Log    ${resp.content}
    [Return]    ${resp}

Ensure Switch Count
    [Arguments]    ${switch_count}
    [Documentation]    Verifies that the device count (passed in as an arg) on each controller is accurate.
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Log    ${resp.content}
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    Should Not Be Empty    ${resp.content}
    \    ${jsondata}    To JSON    ${resp.content}
    \    ${length}=    Get Length    ${jsondata['devices']}
    \    Should Be Equal As Integers    ${length}    ${switch_count}

Ensure Switches are Available
    [Documentation]    Verifies that the availability state is true for every switch, on every controller
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Log    ${resp.content}
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    Should Not Be Empty    ${resp.content}
    \    ${jsondata}    To JSON    ${resp.content}
    \    #Robot tweak to do a nested for loop
    \    Check Each Switch Status    ${jsondata}    True

Check Each Switch Status
    [Arguments]    ${jdata}    ${bool}
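    #Each entry in 'devices' resembles {"id": "of:0000000000000001", "available": true, "role": "MASTER", ...}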
    ${length}=    Get Length    ${jdata['devices']}
    : FOR    ${INDEX}    IN RANGE    0    ${length}
    \    ${data}=    Get From List    ${jdata['devices']}    ${INDEX}
    \    ${status}=    Get From Dictionary    ${data}    available
    \    Should Be Equal As Strings    ${status}    ${bool}

Ensure No Switches are Available
    [Documentation]    Verifies that the availability state is false for every switch, on every controller
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Log    ${resp.content}
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    Should Not Be Empty    ${resp.content}
    \    ${jsondata}    To JSON    ${resp.content}
    \    Check Each Switch Status    ${jsondata}    False

Check Each Switch Individually
    [Documentation]    Currently just observes the information each device has. Checks can easily be implemented
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    ${jsondata}    To JSON    ${resp.content}
    \    #Robot tweak to do a nested for loop; pass ${ip} along since variables are local to each keyword
    \    Check Each Switch    ${jsondata}    ${ip}

Check Each Switch
    [Arguments]    ${jdata}    ${ip}
    ${length}=    Get Length    ${jdata['devices']}
    @{dpid_list}=    Create List
    : FOR    ${INDEX}    IN RANGE    0    ${length}
    \    ${devicedata}=    Get From List    ${jdata['devices']}    ${INDEX}
    \    ${id}=    Get From Dictionary    ${devicedata}    id
    \    Append To List    ${dpid_list}    ${id}
    \    Log    ${dpid_list}
    ${length}=    Get Length    ${dpid_list}
    : FOR    ${i}    IN    @{dpid_list}
    \    ${resp}    ONOS Get    ${ip}    devices/${i}
    \    Log    ${resp.content}

Ensure Switch Role
    [Arguments]    ${role}
    [Documentation]    Verifies that the controller's role for each switch is ${role} (MASTER is the default in standalone mode)
    : FOR    ${ip}    IN    @{controller_list}
    \    ${resp}    ONOS Get    ${ip}    devices
    \    Log    ${resp.content}
    \    Should Be Equal As Strings    ${resp.status_code}    200
    \    Should Not Be Empty    ${resp.content}
    \    ${jsondata}    To JSON    ${resp.content}
    \    Ensure Role    ${jsondata}    ${role}

Ensure Role
    [Arguments]    ${jdata}    ${role}
    ${length}=    Get Length    ${jdata['devices']}
    : FOR    ${INDEX}    IN RANGE    0    ${length}
    \    ${data}=    Get From List    ${jdata['devices']}    ${INDEX}
    \    ${status}=    Get From Dictionary    ${data}    role
    \    Should Be Equal As Strings    ${status}    ${role}

Get Karaf Logs
    [Arguments]    ${controller}
    [Documentation]    Compresses all the karaf log files on each controller and puts them on your pybot execution machine (in /tmp)
    Run Command On Remote System    ${controller}    tar -zcvf /tmp/${SUITE NAME}.${controller}.tar.gz ${ONOS_HOME}/log
    SSHLibrary.Open Connection    ${controller}    prompt=${LINUX_PROMPT}    timeout=${prompt_timeout}
    Login With Public Key    ${CONTROLLER_USER}    ${USER_HOME}/.ssh/id_rsa    any
    SSHLibrary.Get File    /tmp/${SUITE NAME}.${controller}.tar.gz    /tmp/
    SSHLibrary.Close Connection