Architecture and Design
***********************

Introduction to OF-DPA Pipeline
-------------------------------

In this design note, we explain the design choices we have made and how we worked around OF-DPA (OpenFlow Data Plane Abstraction) pipeline restrictions to implement the features we need.
We start by explaining the OF-DPA flow tables and group tables we use.
Fig. 1 shows a simplified overview of the OF-DPA pipeline.

.. image:: images/arch-ofdpa.png
   :width: 1000px

Fig. 1 Simplified OF-DPA pipeline overview


Flow Tables
-----------

VLAN Table
^^^^^^^^^^
.. note::
   The **VLAN Flow Table (id=10)** is used for IEEE 802.1Q VLAN assignment and filtering to specify how VLANs are to be handled on a particular port.
   **All packets must have an associated VLAN id in order to be processed by subsequent tables**.

   **Table miss**: goto **ACL table**.

According to the OF-DPA spec, we need to assign a VLAN ID even to untagged packets.
Each untagged packet is tagged with an **internal VLAN** when it is handled by the VLAN table.
The internal VLAN is popped when the packet is sent to an output port or to the controller.
The internal VLAN is assigned according to the subnet configuration of the input port.
Packets coming from ports that do not have a subnet configured (e.g. the spine-facing ports) are tagged with VLAN ID **4094**.

The internal VLAN is also used to determine the subnet when a packet needs to be flooded to all ports in the same subnet. (See the L2 Broadcast section for details.)
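
The following sketch (plain Python; the port names, subnets and VLAN numbers are hypothetical) illustrates how the internal VLAN could be derived from the per-port subnet configuration, with ports lacking a subnet falling back to VLAN 4094.

.. code-block:: python

   # Illustrative sketch of the VLAN table (id=10) assignment logic.
   # Port names and subnet-to-VLAN mappings are hypothetical examples.
   DEFAULT_INTERNAL_VLAN = 4094      # ports with no subnet configured (e.g. spine-facing)

   PORT_SUBNET = {                   # per-port subnet configuration
       "leaf1/1": "10.0.1.0/24",
       "leaf1/2": "10.0.1.0/24",
       "leaf1/3": "10.0.2.0/24",
   }

   SUBNET_VLAN = {                   # one internal VLAN per subnet
       "10.0.1.0/24": 100,
       "10.0.2.0/24": 200,
   }

   def assign_vlan(in_port, tag=None):
       """Return the VLAN the packet carries through the rest of the pipeline."""
       if tag is not None:
           return tag                # tagged packets keep their VLAN
       subnet = PORT_SUBNET.get(in_port)
       return SUBNET_VLAN.get(subnet, DEFAULT_INTERNAL_VLAN)

   print(assign_vlan("leaf1/1"))     # -> 100
   print(assign_vlan("spine1/1"))    # -> 4094
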


Termination MAC Table
^^^^^^^^^^^^^^^^^^^^^
.. note::
   The **Termination MAC (TMAC) Flow Table (id=20)** determines whether to do bridging or routing on a packet.
   It identifies routed packets by their destination MAC, VLAN, and Ethertype.
   Routed packet rule types use a Goto-Table instruction to indicate that the next table is one of the routing tables.

   **Table miss**: goto **Bridging table**.

In this table, we determine which table the packet should go to by checking the destination MAC address and the Ethernet type of the packet.

- if dst_mac = router MAC and eth_type = ip, goto **unicast routing** table
- if dst_mac = router MAC and eth_type = mpls, goto **MPLS table**
- if dst_mac = multicast MAC (01:00:5E:00:00:00/FF:FF:FF:80:00:00), goto **multicast routing** table
- none of the above, goto **bridging table**
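
A minimal sketch of this classification in Python follows (the router MAC is a hypothetical value; table ids follow the OF-DPA numbering used in this note):

.. code-block:: python

   # Illustrative sketch of the TMAC table (id=20) decision.
   ROUTER_MAC = "00:00:00:aa:bb:cc"      # hypothetical router MAC of this switch

   def is_ipv4_multicast_mac(mac):
       # Matches 01:00:5E:00:00:00/FF:FF:FF:80:00:00
       octets = [int(b, 16) for b in mac.split(":")]
       return octets[:3] == [0x01, 0x00, 0x5E] and (octets[3] & 0x80) == 0

   def tmac_next_table(dst_mac, eth_type):
       if dst_mac == ROUTER_MAC and eth_type == "ipv4":
           return "unicast_routing"      # table 30
       if dst_mac == ROUTER_MAC and eth_type == "mpls":
           return "mpls"                 # table 24
       if is_ipv4_multicast_mac(dst_mac):
           return "multicast_routing"    # table 40
       return "bridging"                 # table 50 (table miss)

   print(tmac_next_table("00:00:00:aa:bb:cc", "ipv4"))   # unicast_routing
   print(tmac_next_table("00:00:00:00:00:01", "ipv4"))   # bridging
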


MPLS Tables
^^^^^^^^^^^
.. note::
   The MPLS pipeline can support three **MPLS Flow Tables, MPLS Table 0 (id=23), MPLS Table 1 (id=24) and MPLS Table 2 (id=25)**.
   An MPLS Flow Table lookup matches the label in the outermost MPLS shim header in the packets.

   - MPLS Table 0 is only used to pop a protection label on platforms that support this table, or to detect an MPLS-TP Section OAM PDU.
   - MPLS Table 1 and MPLS Table 2 can be used for all label operations.
   - MPLS Table 1 and MPLS Table 2 are synchronized flow tables and updating one updates the other.

   **Table miss**: goto **ACL table**.

We only use MPLS Table 1 (id=24) in the current design.
MPLS packets are matched by the MPLS label.
The MPLS label is popped and the packet goes to an **L3 interface group**, and further to the destination leaf switch.


Unicast Routing Table
^^^^^^^^^^^^^^^^^^^^^
.. note::
   The **Unicast Routing Flow Table (id=30)** supports routing for potentially large numbers of IPv4 and IPv6 flow entries using the hardware L3 tables.

   **Table miss**: goto **ACL table**.

In this table, we determine where to output a packet by checking its **destination IP (unicast)** address.

- if dst_ip is located at a **remote switch**, the packet will go to an **L3 ECMP group**, be tagged with an MPLS label, and further go to a spine switch
- if dst_ip is located at the **same switch**, the packet will go to an **L3 unicast group** and further go to a host

Note that the priority of flow entries in this table is sorted by prefix length.
A longer prefix (e.g. /32) will have higher priority than a shorter prefix (e.g. /0).
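
The sketch below (Python, with hypothetical prefixes and group names) shows one way to derive flow priority from prefix length so that the longest-prefix match wins:

.. code-block:: python

   # Illustrative sketch of prefix-length-based priorities in the unicast routing table (id=30).
   # The priority formula (priority = prefix length) is an example, not the actual encoding.
   import ipaddress

   def route_entry(prefix, next_hop_group):
       net = ipaddress.ip_network(prefix)
       return {
           "match": {"eth_type": "ipv4", "ip_dst": str(net)},
           "priority": net.prefixlen,          # /32 beats /24 beats /0
           "group": next_hop_group,
       }

   table_30 = sorted(
       [route_entry("0.0.0.0/0", "L3_ECMP:1"),
        route_entry("10.0.2.0/24", "L3_ECMP:2"),
        route_entry("10.0.2.7/32", "L3_UNICAST:7")],
       key=lambda e: e["priority"], reverse=True)

   print([e["match"]["ip_dst"] for e in table_30])
   # ['10.0.2.7/32', '10.0.2.0/24', '0.0.0.0/0']
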


Multicast Routing Table
^^^^^^^^^^^^^^^^^^^^^^^
.. note::
   The **Multicast Routing Flow Table (id=40)** supports routing for IPv4 and IPv6 multicast packets.

   **Table miss**: goto **ACL table**.

Flow entries in this table always match the **destination IP (multicast)**.
Matched packets will go to an **L3 multicast group** and further go to the next switch or host.


Bridging Table
^^^^^^^^^^^^^^
.. note::
   The **Bridging Flow Table (id=50)** supports Ethernet packet switching for potentially large numbers of flow entries using the hardware L2 tables.
   The Bridging Flow Table forwards either based on VLAN (normal switched packets) or Tunnel id (isolated forwarding domain packets),
   with the Tunnel id metadata field used to distinguish different flow table entry types by range assignment.

   **Table miss**: goto **ACL table**.

In this table, we match the **VLAN ID** and the **destination MAC address** and determine where the packet should be forwarded to.

- if the destination MAC can be matched, the packet will go to the **L2 interface group** and further be sent to the destination host.
- if the destination MAC cannot be matched, the packet will go to the **L2 flood group** and further be flooded to the same subnet.
  Since we cannot match on IP in the bridging table, we use the VLAN ID to determine which subnet this packet should be flooded to.
  The VLAN ID can be either (1) the internal VLAN assigned to untagged packets in the VLAN table or (2) the VLAN ID that comes with tagged packets.
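
A small Python sketch of this lookup-and-fallback behavior follows (MAC addresses, VLANs and group names are hypothetical):

.. code-block:: python

   # Illustrative sketch of the bridging table (id=50): match (VLAN, dst MAC) if the
   # host is learnt, otherwise fall back to the per-VLAN L2 flood group.
   L2_TABLE = {                                # learnt hosts: (vlan, dst_mac) -> L2 interface group
       (100, "00:00:00:00:00:01"): "L2_INTERFACE:100/1",
       (100, "00:00:00:00:00:02"): "L2_INTERFACE:100/2",
   }
   FLOOD_GROUP = {100: "L2_FLOOD:100", 200: "L2_FLOOD:200"}   # one flood group per VLAN (subnet)

   def bridging_lookup(vlan, dst_mac):
       specific = L2_TABLE.get((vlan, dst_mac))
       if specific is not None:
           return specific                     # unicast to the known host
       return FLOOD_GROUP[vlan]                # flood within the subnet identified by the VLAN

   print(bridging_lookup(100, "00:00:00:00:00:02"))   # L2_INTERFACE:100/2
   print(bridging_lookup(100, "00:00:00:00:00:99"))   # L2_FLOOD:100
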


Policy ACL Table
^^^^^^^^^^^^^^^^
.. note::
   The Policy ACL Flow Table supports wide, multi-field matching.
   Most fields can be wildcard matched, and relative priority must be specified in all flow entry modification API calls.
   This is the preferred table for matching BPDU and ARP packets. It also provides the Metering instruction.

   **Table miss**: **do nothing**.
   The packet will be forwarded using the output or group in the action set, if any.
   If the action set does not have a group or output action the packet is dropped.

In the ACL table we trap **ARP**, **LLDP**, **BDDP** and **DHCP** packets and send them to the **controller**.
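
The entries below sketch, in Python, what this trapping could look like. The match fields are simplified, and identifying DHCP by its UDP ports (67/68) is an assumption for illustration:

.. code-block:: python

   # Illustrative sketch of ACL entries that punt control-plane traffic to the controller.
   # For ARP, the bridging pipeline still forwards the original packet; a copy is punted.
   ACL_ENTRIES = [
       {"match": {"eth_type": "arp"},  "action": "punt_to_controller"},
       {"match": {"eth_type": "lldp"}, "action": "punt_to_controller"},
       {"match": {"eth_type": "bddp"}, "action": "punt_to_controller"},
       {"match": {"eth_type": "ipv4", "ip_proto": "udp", "udp_dst": 67},
        "action": "punt_to_controller"},
       {"match": {"eth_type": "ipv4", "ip_proto": "udp", "udp_dst": 68},
        "action": "punt_to_controller"},
   ]

   def acl_lookup(pkt):
       for entry in ACL_ENTRIES:
           if all(pkt.get(k) == v for k, v in entry["match"].items()):
               return entry["action"]
       return None   # table miss: fall through to the group/output already in the action set

   print(acl_lookup({"eth_type": "arp"}))                                     # punt_to_controller
   print(acl_lookup({"eth_type": "ipv4", "ip_proto": "tcp", "tcp_dst": 80}))  # None
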


Group Tables
------------

L3 ECMP Group
^^^^^^^^^^^^^
.. note::
   OF-DPA L3 ECMP group entries are of OpenFlow type **SELECT**.
   For IP routing the action buckets reference the OF-DPA **L3 Unicast group** entries that are members of the multipath group for ECMP forwarding.

   An OF-DPA L3 ECMP Group entry can also be used in a Provider Edge Router.
   In this packet flow it can chain to either an **MPLS L3 Label** group entry or to an **MPLS Fast Failover** group entry.

   An OF-DPA L3 ECMP Group entry can be specified as a routing target instead of an OF-DPA L3 Unicast Group entry. Selection of an action bucket for forwarding a particular packet is hardware-specific.


MPLS Label Group
^^^^^^^^^^^^^^^^
.. note::
   MPLS Label Group entries are of OpenFlow **INDIRECT** type.
   There are four MPLS label Group entry subtypes, all with similar structure.
   These can be used in different configurations to **push up to three labels** for tunnel initiation or LSR swap.


MPLS Interface Group
^^^^^^^^^^^^^^^^^^^^
.. note::
   MPLS Interface Group Entry is of OpenFlow type **INDIRECT**.
   It is used to **set the outgoing L2 header** to reach the next hop label switch router or provider edge router.

We use an **L3 ECMP** group to pick one of the spine switches when we need to route a packet from a leaf to the spines.

We point each bucket to an **MPLS Label** Group in which the MPLS label is pushed onto the packet to realize the Segment Routing mechanism.
(More specifically, we use subtype 2, the **MPLS L3 VPN Label** group.)

Each MPLS Label Group then points to an **MPLS Interface** Group in which the destination MAC is set to the next hop (spine router).

Finally, the packet goes to an **L2 Interface** Group and is sent to the output port that goes to the spine router.
Details of how segment routing is implemented are explained in the L3 unicast section below.
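
The chain described above can be pictured as nested group entries. The Python sketch below models it with plain dictionaries; the group contents, MAC addresses and label value are hypothetical:

.. code-block:: python

   # Illustrative sketch of the leaf-to-spine group chain:
   # L3 ECMP -> MPLS L3 VPN Label -> MPLS Interface -> L2 Interface.
   l2_interface = {"type": "L2_INTERFACE", "vlan": 4094, "pop_vlan": True,
                   "out_port": "leaf1/49"}

   mpls_interface = {"type": "MPLS_INTERFACE",
                     "set_src_mac": "00:00:00:aa:aa:01",   # this leaf
                     "set_dst_mac": "00:00:00:bb:bb:01",   # next-hop spine
                     "set_vlan": 4094,
                     "chained": l2_interface}

   mpls_label = {"type": "MPLS_L3_VPN_LABEL",              # MPLS Label group, subtype 2
                 "push_label": 102,                        # label of the destination leaf
                 "chained": mpls_interface}

   l3_ecmp = {"type": "L3_ECMP",
              "buckets": [mpls_label]}                     # one bucket per candidate spine

   def walk(group):
       """Print the chain that a packet traverses through these groups."""
       while group is not None:
           print(group["type"])
           group = group.get("chained") or (group.get("buckets") or [None])[0]

   walk(l3_ecmp)   # L3_ECMP, MPLS_L3_VPN_LABEL, MPLS_INTERFACE, L2_INTERFACE
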


L3 Unicast Group
^^^^^^^^^^^^^^^^
.. note::
   OF-DPA L3 Unicast group entries are of OpenFlow **INDIRECT** type.
   L3 Unicast group entries are used to supply the routing next hop and output interface for packet forwarding.
   To properly route a packet from either the Routing Flow Table or the Policy ACL Flow Table, the forwarding flow entry must reference an L3 Unicast Group entry.

   All packets must have a VLAN tag.
   **A chained L2 Interface group entry must be in the same VLAN as assigned by the L3 Unicast Group** entry.

We use the L3 Unicast Group to rewrite the **source MAC**, **destination MAC** and **VLAN ID** when routing is needed.


L3 Multicast Group
^^^^^^^^^^^^^^^^^^
.. note::
   OF-DPA L3 Multicast group entries are of OpenFlow **ALL** type.
   The action buckets describe the interfaces to which multicast packet replicas are forwarded.
   Note that:

   - Chained OF-DPA **L2 Interface** Group entries must be in the **same VLAN** as the OF-DPA **L3 Multicast** group entry. However,

   - Chained OF-DPA **L3 Interface** Group entries must be in **different VLANs** from the OF-DPA **L3 Multicast** Group entry, **and from each other**.

We use the L3 multicast group to replicate multicast packets when necessary.
An L3 multicast group may also consist of only one bucket when replication is not needed.
Details of how multicast is implemented are explained in the L3 multicast section below.


L2 Interface Group
^^^^^^^^^^^^^^^^^^
.. note::
   L2 Interface Group entries are of OpenFlow **INDIRECT** type, with a single action bucket.
   OF-DPA L2 Interface group entries are used for egress VLAN filtering and tagging.
   If a specific set of VLANs is allowed on a port, appropriate group entries must be defined for the VLAN and port combinations.

   Note: OF-DPA uses the L2 Interface group declaration to configure the port VLAN filtering behavior.
   This approach was taken since OpenFlow does not support configuring VLANs on physical ports.


L2 Flood Group
^^^^^^^^^^^^^^
.. note::
   L2 Flood Group entries are used by VLAN Flow Table wildcard (destination location forwarding, or DLF) rules.
   Like OF-DPA L2 Multicast group entry types they are of OpenFlow **ALL** type.
   The action buckets each encode an output port.
   Each OF-DPA L2 Flood Group entry bucket forwards a replica to an output port, except for packet IN_PORT.

   All of the OF-DPA L2 Interface Group entries referenced by the OF-DPA Flood Group entry,
   and the OF-DPA Flood Group entry itself, must be in the **same VLAN**.

   Note: There can only be **one OF-DPA L2 Flood Group** entry defined **per VLAN**.


L2 Unicast
----------

.. image:: images/arch-l2u.png
   :width: 800px

Fig. 2: L2 unicast

.. image:: images/arch-l2u-pipeline.png
   :width: 1000px

Fig. 3: Simplified L2 unicast pipeline

The L2 unicast mechanism is designed to support **intra-rack (intra-subnet)** communication when the destination host is **known**.

Pipeline Walkthrough - L2 Unicast
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- **VLAN Table**: An untagged packet will be assigned an internal VLAN ID according to the input port and the subnet configured on the input port. Packets of the same subnet will have the same internal VLAN ID.
- **TMAC Table**: Since the destination MAC of an L2 unicast packet is not the MAC of the leaf router, the packet will miss the TMAC table and go to the bridging table.
- **Bridging Table**: If the destination MAC is learnt, there will be a flow entry matching that destination MAC and pointing to an L2 interface group.
- **ACL Table**: IP packets will miss the ACL table and the L2 interface group will be executed.
- **L2 Interface Group**: The internally assigned VLAN will be popped before the packet is sent to the output port.


L2 Broadcast
------------

.. image:: images/arch-l2f.png
   :width: 800px

Fig. 4: L2 broadcast

.. image:: images/arch-l2f-pipeline.png
   :width: 1000px

Fig. 5: Simplified L2 broadcast pipeline

Pipeline Walkthrough - L2 Broadcast
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- **VLAN Table**: (same as L2 unicast)
- **TMAC Table**: (same as L2 unicast)
- **Bridging Table**: If the destination MAC is not learnt, there will NOT be a flow entry matching that destination MAC.
  The packet will then fall back to a lower-priority entry that matches the VLAN (subnet) and points to an L2 flood group.
- **ACL Table**: IP packets will miss the ACL table and the L2 flood group will be executed.
- **L2 Flood Group**: Consists of all L2 interface groups related to this VLAN (subnet).
- **L2 Interface Group**: The internally assigned VLAN will be popped before the packet is sent to the output port.


ARP
---

.. image:: images/arch-arp-pipeline.png
   :width: 1000px

Fig. 6: Simplified ARP pipeline

All ARP packets will be forwarded according to the bridging pipeline.
In addition, a **copy of the ARP packet will be sent to the controller**.

- The controller uses the ARP packets for **learning purposes and updates the host store** accordingly.
- The controller only **replies** to an ARP request if the request is trying to **resolve an interface address configured on the switch edge port**.
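
A minimal sketch of this controller-side decision for a punted ARP request is shown below (the interface addresses and port names are hypothetical):

.. code-block:: python

   # Illustrative sketch: reply only when the ARP request targets the interface
   # (gateway) address configured on the edge port it arrived on.
   EDGE_PORT_INTERFACE_IP = {
       "leaf1/1": "10.0.1.254",
       "leaf1/3": "10.0.2.254",
   }

   def handle_arp_request(in_port, target_ip):
       # Learning (updating the host store) happens in all cases; only the reply is conditional.
       if EDGE_PORT_INTERFACE_IP.get(in_port) == target_ip:
           return "reply_with_router_mac"
       return "no_reply"   # the request is still flooded by the bridging pipeline

   print(handle_arp_request("leaf1/1", "10.0.1.254"))   # reply_with_router_mac
   print(handle_arp_request("leaf1/1", "10.0.1.7"))     # no_reply
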


Pipeline Walkthrough - ARP
^^^^^^^^^^^^^^^^^^^^^^^^^^
It is similar to L2 broadcast, except that ARP packets are matched by a special ACL table entry and copied to the controller.


L3 Unicast
----------

.. image:: images/arch-l3u.png
   :width: 800px

Fig. 7: L3 unicast

.. image:: images/arch-l3u-src-pipeline.png
   :width: 1000px

Fig. 8 Simplified L3 unicast pipeline - source leaf

Pipeline Walkthrough - Source Leaf Switch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- **VLAN Table**: An untagged packet will be assigned an internal VLAN ID according to the input port and the subnet configured on the input port. Packets of the same subnet will have the same internal VLAN ID.
- **TMAC Table**: Since the destination MAC of an L3 unicast packet is the MAC of the leaf router and the Ethernet type is IPv4, the packet will match the TMAC table and go to the unicast routing table.
- **Unicast Routing Table**: In this table we will look up the destination IP of the packet and point the packet to the corresponding L3 ECMP group.
- **ACL Table**: IP packets will miss the ACL table and the L3 ECMP group will be executed.
- **L3 ECMP Group**: Hashes on the 5-tuple to pick a spine switch and goes to the MPLS Label Group (see the sketch after this list).
- **MPLS Label Group**: Pushes the MPLS label corresponding to the destination leaf switch and goes to the MPLS Interface Group.
- **MPLS Interface Group**: Sets the source MAC address, destination MAC address and VLAN ID, and goes to the L2 Interface Group.
- **L2 Interface Group**: The internally assigned VLAN will be popped before the packet is sent to the output port that goes to the spine.
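
The sketch below illustrates how the L3 ECMP group could pick a bucket by hashing the 5-tuple. Real hardware uses its own hash function, so this is only meant to show the idea:

.. code-block:: python

   # Illustrative sketch of ECMP bucket selection: the same flow always hashes
   # to the same spine, while different flows spread across the spines.
   import hashlib

   def pick_bucket(src_ip, dst_ip, proto, src_port, dst_port, num_buckets):
       five_tuple = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
       digest = hashlib.sha256(five_tuple).digest()
       return int.from_bytes(digest[:4], "big") % num_buckets

   # Two spines -> two buckets.
   print(pick_bucket("10.0.1.1", "10.0.2.2", "tcp", 12345, 80, num_buckets=2))
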

.. image:: images/arch-l3u-transit-pipeline.png
   :width: 1000px

Fig. 9 Simplified L3 unicast pipeline - spine

Pipeline Walkthrough - Spine Switch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- **VLAN Table**: An untagged packet will be assigned an internal VLAN ID according to the input port and the subnet configured on the input port. Packets of the same subnet will have the same internal VLAN ID.
- **TMAC Table**: Since the destination MAC of an L3 unicast packet is the MAC of the spine router and the Ethernet type is MPLS, the packet will match the TMAC table and go to the MPLS table.
- **MPLS Table**: In this table we will look up the MPLS label of the packet, figure out the destination leaf switch, pop the MPLS label and point to the L3 ECMP Group.
- **ACL Table**: IP packets will miss the ACL table and the L3 ECMP group will be executed.
- **L3 ECMP Group**: Hashes to pick a link (if there are multiple links) to the destination leaf and goes to the MPLS Interface Group.
- **MPLS Interface Group**: Sets the source MAC address, destination MAC address and VLAN ID, and goes to the L2 Interface Group.
- **L2 Interface Group**: The internally assigned VLAN will be popped before the packet is sent to the output port that goes to the destination leaf switch.

.. image:: images/arch-l3u-dst-pipeline.png
   :width: 1000px

Fig. 10 Simplified L3 unicast pipeline - destination leaf

Pipeline Walkthrough - Destination Leaf Switch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- **VLAN Table**: An untagged packet will be assigned an internal VLAN ID according to the input port and the subnet configured on the input port. Packets of the same subnet will have the same internal VLAN ID.
- **TMAC Table**: Since the destination MAC of an L3 unicast packet is the MAC of the leaf router and the Ethernet type is IPv4, the packet will match the TMAC table and go to the unicast routing table.
- **Unicast Routing Table**: In this table we will look up the destination IP of the packet and point the packet to the corresponding L3 Unicast Group.
- **ACL Table**: IP packets will miss the ACL table and the L3 Unicast Group will be executed.
- **L3 Unicast Group**: Sets the source MAC address, destination MAC address and VLAN ID, and goes to the L2 Interface Group.
- **L2 Interface Group**: The internally assigned VLAN will be popped before the packet is sent to the output port that goes to the destination host.


The L3 unicast mechanism is designed to support inter-rack (inter-subnet) untagged communication when the destination host is known.

Path Calculation and Failover - Unicast
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Coming soon...


L3 Multicast
------------

.. image:: images/arch-l3m.png
   :width: 800px

Fig. 11 L3 multicast

.. image:: images/arch-l3m-pipeline.png
   :width: 1000px

Fig. 12 Simplified L3 multicast pipeline

The L3 multicast mechanism is designed to support use cases such as IPTV.
The multicast traffic comes in from the upstream router, is replicated by the leaf-spine switches, sent to multiple OLTs, and eventually reaches the subscribers.

.. note::
   We would like to support different combinations of ingress/egress VLAN, including

   - untagged in -> untagged out
   - untagged in -> tagged out
   - tagged in -> untagged out
   - tagged in -> same tagged out
   - tagged in -> different tagged out

   However, due to the above-mentioned OF-DPA restrictions,

   - It is NOT possible to chain an L3 multicast group to an L2 interface group directly if we want to change the VLAN ID.
   - It is NOT possible to change the VLAN ID by chaining an L3 multicast group to L3 interface groups, since all output ports should have the same VLAN,
     but the spec requires chained L3 interface groups to have VLAN IDs different from each other.

   That means that if we need to change the VLAN ID, we need to change it before the packet gets into the multicast routing table.
   The only viable solution is changing the VLAN ID in the VLAN table.
   We change the VLAN tag on the ingress switch (i.e. the switch that connects to the upstream router) when necessary.
   On the transit (spine) and egress (destination leaf) switches, the output VLAN tag will remain the same as the input VLAN tag.

Pipeline Walkthrough - Ingress Leaf Switch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. csv-table:: Table 1. All Possible VLAN Combinations on Ingress Switch
   :file: tables/arch-mcast-ingress.csv
   :widths: 2, 5, 5, 10, 10, 5
   :header-rows: 1

.. note::
   In the presence of a ``vlan-untagged`` configuration on the ingress port of the ingress switch, the ``vlan-untagged`` VLAN will be used instead of 4094.
   The reason is that we cannot distinguish unicast and multicast traffic in that case, and therefore must assign the same VLAN to the packet.
   The VLAN will get popped in the L2 Interface Group anyway in this case.

Table 1 shows all possible VLAN combinations on the ingress switches and how the packet is processed through the pipeline.
We take the second case, **untagged -> tagged 200**, as an example to explain the details (a sketch of the replication step follows the list).

- **VLAN Table**: An untagged packet will be assigned the **egress VLAN ID**.
- **TMAC Table**: Since the destination MAC of the packet is a multicast MAC address, the packet will match the TMAC table and go to the multicast routing table.
- **Multicast Routing Table**: In this table we will look up the multicast group (destination multicast IP) and point the packet to the corresponding L3 multicast group.
- **ACL Table**: Multicast packets will miss the ACL table and the L3 multicast group will be executed.
- **L3 Multicast Group**: The packet will be matched by the **egress VLAN ID** and forwarded to multiple L2 interface groups that map to output ports.
- **L2 Interface Group**: The egress VLAN will be kept in this case and the packet will be sent to the output port that goes to the transit spine switch.
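
The replication step can be pictured as an ALL-type group whose buckets are L2 interface groups. The Python sketch below uses hypothetical ports and VLAN values:

.. code-block:: python

   # Illustrative sketch of an L3 multicast group chaining to several L2 interface groups.
   l3_multicast = {
       "type": "L3_MULTICAST",
       "vlan": 200,                  # egress VLAN already set by the VLAN table
       "buckets": [
           {"type": "L2_INTERFACE", "vlan": 200, "pop_vlan": False, "out_port": "leaf1/10"},
           {"type": "L2_INTERFACE", "vlan": 200, "pop_vlan": False, "out_port": "leaf1/11"},
       ],
   }

   def replicate(group, packet):
       """Return one (packet, port) replica per bucket, as an ALL-type group would."""
       return [(dict(packet, vlan=None if b["pop_vlan"] else b["vlan"]), b["out_port"])
               for b in group["buckets"]]

   for pkt, port in replicate(l3_multicast, {"group": "224.1.1.1", "vlan": 200}):
       print(port, pkt)
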


Pipeline Walkthrough - Transit Spine Switch and Egress Leaf Switch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. csv-table:: Table 2. All Possible VLAN Combinations on Transit/Egress Switch
   :file: tables/arch-mcast-transit-egress.csv
   :widths: 2, 5, 5, 10, 10, 5
   :header-rows: 1

Table 2 shows all possible VLAN combinations on the transit/egress switches and how the packet is processed through the pipeline.
Note that we have already changed the VLAN tag to the desired egress VLAN on the ingress switch.
Therefore, there are only two cases on the transit/egress switches - either keep it untagged or keep it tagged. We take the first case, **untagged -> untagged**, as an example to explain the details.

- **VLAN Table**: An untagged packet will be assigned an **internal VLAN ID** according to the input port and the subnet configured on the input port. Packets of the same subnet will have the same internal VLAN ID.
- **TMAC Table**: (same as ingress switch)
- **Multicast Routing Table**: (same as ingress switch)
- **ACL Table**: (same as ingress switch)
- **L3 Multicast Group**: The packet will be matched by the **internal VLAN ID** and forwarded to multiple L2 interface groups that map to output ports.
- **L2 Interface Group**: The egress VLAN will be popped in this case and the packet will be sent to the output port that goes to the egress leaf switch.

Path Calculation and Failover - Multicast
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Coming soon...


VLAN Cross Connect
------------------

.. image:: images/arch-xconnect.png
   :width: 800px

Fig. 13 VLAN cross connect

.. image:: images/arch-xconnect-pipeline.png
   :width: 1000px

Fig. 14 Simplified VLAN cross connect pipeline

VLAN Cross Connect was originally designed to support Q-in-Q packets between OLTs and BNGs.
The cross connect pair consists of two output ports.
Whatever packet comes in on one port with the specific VLAN tag will be sent out the other port.

.. note::
   It can only cross connect **two ports on the same switch**.
   :doc:`Pseudowire <configuration/pseudowire>` is required to connect ports across different switches.

We use an L2 Flood Group to implement VLAN Cross Connect.
The L2 Flood Group for a cross connect consists of only two ports.
According to the spec, the input port is excluded when flooding, which creates exactly the desired behavior of a cross connect.
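
A minimal sketch of this behavior follows (the port names and VLAN value are hypothetical):

.. code-block:: python

   # Illustrative sketch: a two-port ALL-type flood group degenerates into
   # "send to the other port" because IN_PORT is always excluded.
   XCONNECT_VLAN = 500
   XCONNECT_PORTS = ["leaf1/5", "leaf1/6"]    # the two members of the cross connect

   def xconnect_output_ports(in_port):
       return [p for p in XCONNECT_PORTS if p != in_port]

   print(xconnect_output_ports("leaf1/5"))    # ['leaf1/6']
   print(xconnect_output_ports("leaf1/6"))    # ['leaf1/5']
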


Pipeline Walkthrough - Cross Connect
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- **VLAN Table**: When a tagged packet comes in, we no longer need to assign the internal VLAN.
  The original VLAN will be carried through the entire pipeline.
- **TMAC Table**: Since the VLAN will not match any internal VLAN assigned to untagged packets, the packet will miss the TMAC table and go to the bridging table.
- **Bridging Table**: The packet will hit the flow rule that matches the cross connect VLAN ID and be sent to the corresponding L2 Flood Group.
- **ACL Table**: IP packets will miss the ACL table and the L2 flood group will be executed.
- **L2 Flood Group**: Consists of the two L2 interface groups related to this cross connect VLAN.
- **L2 Interface Group**: The original VLAN will NOT be popped before the packet is sent to the output port.


vRouter
-------

.. image:: images/arch-vr.png
   :width: 800px

Fig. 15 vRouter

The Trellis fabric needs to be connected to the external world via the vRouter functionality.
**In the networking industry, the term vRouter implies a "router in a VM". This is not the case in Trellis**.
The Trellis vRouter is NOT a software router.
**Only the control plane of the router, i.e. routing protocols, runs in a VM**.
We use the Quagga routing protocol suite as the control plane for the vRouter.

The **vRouter data plane is entirely in hardware**.
Essentially the entire hardware fabric serves as the (distributed) data plane for the vRouter.

The **external router views the entire Trellis fabric as a single router**.

.. image:: images/arch-vr-overview.png

.. image:: images/arch-vr-logical.png

.. note::
   Dual external routers are also supported for redundancy. Visit :doc:`External Connectivity <configuration/dual-homing>` for details.

Pipeline Walkthrough - vRouter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The pipeline is exactly the same as L3 unicast. We just install additional flow rules in the unicast routing table on each leaf router.


Learn More
----------
.. tip::
   Most of our design discussion and meeting notes are kept in `Google Drive <https://drive.google.com/drive/folders/0Bz9dNKPVvtgsR0M5R0hWSHlfZ0U>`_.
   If you are wondering why features are designed and implemented in a certain way, you may find the answers there.