Build your own carrier grade exploding network with FlowSpec

Are you looking to up your skills to get your next job with a service provider? Learn how to make a carrier network explode with FlowSpec, using these simple steps service providers don't want you to know!

Here's a lab to figure out how to set up Flowspec in a test network with some simple rules. This is ideal for a high-CCNP or CCIE level engineer; you should be familiar with BGP, ideally including MP-BGP, before completing this lab. To use this, you'll need VIRL, GNS3, EVE-NG, or similar; Packet Tracer is not going to work here. You'll also need the CSR1kV image, the IOS-XRv image, some sort of Ethernet switch (the built-in one in GNS3 is fine), and possibly the IOL/IOU images. This was built out in GNS3 with the images that come with a VIRL subscription; if you're using something else, your mileage may vary. You'll be building your own lab here, 'cause you can use some more experience, but the full configs and topology are at the bottom if you get stuck along the way.

What is Flowspec?

Flowspec lets network engineers distribute traffic-matching rules and actions via MP-BGP to many routers at once, so specific traffic gets special handling. In this lab we will match ICMP (and later BGP) and drop matching packets. Flowspec can also rate-limit matching traffic, divert it to other next hops, or send it into other VRFs.
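
As a preview of what that looks like on the wire (this is pulled straight from the show output later in this lab): the match criteria are encoded in the Flowspec NLRI, and the action rides in a BGP extended community, where a traffic-rate of 0 means drop.

NLRI (match):                Proto:=1,Length:>=1000&<=65535
Extended community (action): FLOWSPEC Traffic-rate:1000,0   (1000 = our ASN, rate 0 = drop)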

To accomplish this, we will set up a simple DIA style provider network with two PE routers and a route-reflector. We will connect two CE routers in two separate ASNs, and at this point we should be able to send traffic from one CE router, through our provider network, to another.

With that configured, we will add in Flowspec with the intent of blocking large ICMP packets. To do this we will make sure our provider devices share the “ipv4 flowspec” address family and install any policy learned through MP-BGP. We’ll also configure the IOS-XRv router as a Flowspec controller. With the XRv and 1kV images, you can only configure them to act as a controller and a client respectively. In the real world you can use other platforms as controllers (such as ExaBGP) or clients. Note that the lightweight IOL/IOU images typically don’t run Flowspec at all, so while we can use them as CE routers in this case, our P and PE routers must be 1kV images.

The Lab

First, build a topology according to the diagram linked in the Resources section below. All switch ports are access ports on the same VLAN, and the two PE routers, the P router, and the XRv controller all sit in the same subnet (10.0.0.0/24) with IP addresses .1, .2, .100, and .101 respectively. The PE routers are configured as 172.16.1.1/24 and 172.16.2.1/24 on their customer-facing interfaces, with the associated CE routers set to .2 in their respective subnets. The provider ASN is 1000, and the CE ASNs are 101 and 102.
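
On the PE-CE edges, that addressing looks something like this (the interface names here are assumptions; use whatever your topology gives you):

(PE1, customer-facing interface)
interface GigabitEthernet2
 ip address 172.16.1.1 255.255.255.0
 no shutdown

(CE1)
interface Ethernet0/0
 ip address 172.16.1.2 255.255.255.0
 no shutdown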

With IP addressing set on the physical interfaces and a local ping working, enable OSPF on all the 10.0.0.0/24 interfaces, then configure a loopback on each device with a 10.1.0.x/32 address and share it into OSPF as well. You should now be able to ping between any two of these loopback addresses.

(P/PE address configs)
interface Loopback0
 ip address 10.1.0.1 255.255.255.255
 ip ospf 1 area 0
interface GigabitEthernet1
 ip address 10.0.0.1 255.255.255.0
 ip ospf 1 area 0
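
Once OSPF is up, a couple of quick checks (output omitted) will confirm the adjacencies and loopback reachability, e.g. from PE1:

PE1#show ip ospf neighbor
PE1#ping 10.1.0.2 source Loopback0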

Configure eBGP between your PE and CE routers and make sure your networks are being advertised (e.g. with connected-route redistribution). PE1 is shown here; a CE-side sketch follows it.

(PE1 config)
router bgp 1000
 bgp log-neighbor-changes
 neighbor 172.16.1.2 remote-as 101
 !
 address-family ipv4
  redistribute connected
  neighbor 172.16.1.2 activate
 exit-address-family
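
For completeness, here's a minimal CE-side sketch to pair with that (CE1 shown, using ASN 101 from the topology above; CE2 is the mirror image in ASN 102):

(CE1 config)
router bgp 101
 bgp log-neighbor-changes
 neighbor 172.16.1.1 remote-as 1000
 !
 address-family ipv4
  redistribute connected
  neighbor 172.16.1.1 activate
 exit-address-family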

Configure your P and PE routers with iBGP peerings, making sure the route-reflector configuration lands on the P router. We will use peer templates to make this more scalable. The config below is the P router (route reflector) side; a matching PE-side sketch follows it.

(P router config)
router bgp 1000
 template peer-policy internal-pol
  route-reflector-client
  send-community both
 exit-peer-policy
 !
 template peer-session internal-ses
  remote-as 1000
  update-source Loopback0
  timers 10 30
 exit-peer-session
 !
 bgp log-neighbor-changes
 neighbor 10.1.0.1 inherit peer-session internal-ses
 neighbor 10.1.0.2 inherit peer-session internal-ses
 !
 address-family ipv4
  neighbor 10.1.0.1 activate
  neighbor 10.1.0.1 inherit peer-policy internal-pol
  neighbor 10.1.0.2 activate
  neighbor 10.1.0.2 inherit peer-policy internal-pol
 exit-address-family
!
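
And, as mentioned above, a matching PE-side sketch: the same session template, the neighbor pointed at the route reflector, and no route-reflector-client line (PE2 is identical).

(PE1 config)
router bgp 1000
 template peer-session internal-ses
  remote-as 1000
  update-source Loopback0
  timers 10 30
 exit-peer-session
 !
 bgp log-neighbor-changes
 neighbor 10.1.0.100 inherit peer-session internal-ses
 !
 address-family ipv4
  neighbor 10.1.0.100 activate
 exit-address-family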

Now check and troubleshoot connectivity as needed. You should be able to ping from 172.16.1.2 to 172.16.2.2 with both standard-sized ICMP and ICMP at 1500 bytes.

Turning on Flowspec

Now let's enable Flowspec on all our P and PE devices and make sure we are sharing that data as well. On each IOS-XE device we'll first tell Flowspec to install any learned rules on all interfaces.

(P and PE configs)
flowspec
 local-install interface-all

Next we'll enable it on our P router (the route reflector), adding a neighbor statement for the controller IP so it will work later. At this point we want to make sure the P router has the "send-community both" command either in the peer-policy template or applied to the individual neighbors; Flowspec actions are carried as BGP extended communities, so without it Flowspec will not work properly.

(P router)
router bgp 1000
 neighbor 10.1.0.101 inherit peer-session internal-ses
 !
 address-family ipv4 flowspec
  neighbor 10.1.0.1 activate
  neighbor 10.1.0.1 inherit peer-policy internal-pol
  neighbor 10.1.0.2 activate
  neighbor 10.1.0.2 inherit peer-policy internal-pol
  neighbor 10.1.0.101 activate
 exit-address-family
And on each PE router, activate the flowspec address family toward the route reflector:

(PE configs)
router bgp 1000
 address-family ipv4 flowspec
  neighbor 10.1.0.100 activate
  neighbor 10.1.0.100 send-community both
 exit-address-family
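
Before moving on, you can sanity-check that the flowspec address-family sessions have negotiated (output omitted here; the 10.1.0.101 neighbor on the P router will stay down until we configure the controller below):

PE1#show bgp ipv4 flowspec summary
P#show bgp ipv4 flowspec summary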

If everything is working correctly, we should see both PE routers connected to our P router:

P#sh bgp ipv4 uni sum

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.1.0.1        4         1000     173     179       41    0    0 00:26:38        3
10.1.0.2        4         1000     176     173       41    0    0 00:26:45        3

Now let's configure our XRv controller to be on the network and set it up to speak MP-BGP. If you're not familiar with XRv, you'll need to make sure you go into config mode, enter your commands, then type "commit" to actually put them into play.
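
If the XR commit workflow is new to you, it looks roughly like this (the hostname and prompts will differ on your image; the BGP line is just a placeholder command):

RP/0/0/CPU0:ios#configure
RP/0/0/CPU0:ios(config)#router bgp 1000
RP/0/0/CPU0:ios(config-bgp)#commit
RP/0/0/CPU0:ios(config-bgp)#end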

(FS controller)
interface Loopback0
 ipv4 address 10.1.0.101 255.255.255.255
 no shutdown
!
interface GigabitEthernet0/0/0/0
 ipv4 address 10.0.0.101 255.255.255.0
 no shutdown
!
router ospf 1
 mpls ldp auto-config
 area 0
  interface Loopback0
  !
  interface GigabitEthernet0/0/0/0
  !
 !
!
router bgp 1000
 address-family ipv4 unicast
 !
 address-family ipv4 flowspec
 !
 neighbor 10.1.0.100
  remote-as 1000
  update-source Loopback0
  address-family ipv4 unicast
  !
  address-family ipv4 flowspec
  !
 !
!

At this point we should have XRv up:

RP/0/0/CPU0:ios#sh bgp ipv4 flow sum

Neighbor        Spk    AS MsgRcvd MsgSent   TblVer  InQ OutQ  Up/Down  St/PfxRcd
10.1.0.100        0  1000     268     252        5    0    0 00:08:06          0

If we go back to our P router and look at our flowspec announcements, the table should still be empty, since the controller isn't advertising any rules yet:

P#sh bgp ipv4 flowspec
P#

Flowspec policy

Now it’s time to create a class map and policy map, then apply it to Flowspec on the XRv. We’re going to match and block large ICMP packets.

(FS controller)    
class-map type traffic match-all large-ICMP
 match protocol icmp
 match packet length 1000-65535
 end-class-map
!
policy-map type pbr FS-policy
 class type traffic large-ICMP
  drop
 !
 class type traffic class-default
 !
 end-policy-map
!
flowspec
 address-family ipv4
  service-policy type pbr FS-policy
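
Before jumping back to the IOS-XE boxes, you can sanity-check the rule on the controller itself (output omitted; command availability can vary slightly by XR release):

RP/0/0/CPU0:ios#show bgp ipv4 flowspec
RP/0/0/CPU0:ios#show flowspec afi-all detail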

Once we do that we should now see a new Flowspec announcement in BGP on our route reflector.

P#sh bgp ipv4 flowspec

     Network          Next Hop            Metric LocPrf Weight Path
 *>i  Proto:=1,Length:>=1000&<=65535
                      0.0.0.0                       100      0 i
P#

P#sh bgp ipv4 flowspec det
BGP routing table entry for Proto:=1,Length:>=1000&<=65535, version 8
  Paths: (1 available, best #1, table IPv4-Flowspec-BGP-Table)
  Advertised to update-groups:
     4
  Refresh Epoch 1
  Local
    0.0.0.0 from 10.1.0.101 (10.1.0.101)
      Origin IGP, localpref 100, valid, internal, best
      Extended Community: FLOWSPEC Traffic-rate:1000,0
      rx pathid: 0, tx pathid: 0x0

We can see here that we're matching protocol 1 (ICMP) and IP packets between 1000 and 65535 bytes in length, then setting a traffic rate of 0 to drop all matching traffic.

We should see the same announcement on all our PE routers if we run the same command there:

PE1#show bgp ipv4 flowspec det
BGP routing table entry for Proto:=1,Length:>=1000&<=65535, version 14
  Paths: (1 available, best #1, table IPv4-Flowspec-BGP-Table)
  Not advertised to any peer
  Refresh Epoch 2
  Local
    0.0.0.0 from 10.1.0.100 (10.1.0.100)
      Origin IGP, localpref 100, valid, internal, best
      Extended Community: FLOWSPEC Traffic-rate:1000,0
      Originator: 10.1.0.101, Cluster list: 10.1.0.100
      rx pathid: 0, tx pathid: 0x0

We can also check what Flowspec has actually installed:

PE1#show flowspec afi-all det
AFI: IPv4
  Flow           :Proto:=1,Length:>=1000&<=65535
    Actions      :Traffic-rate: 0 bps  (bgp.1)
    Statistics                        (packets/bytes)
      Matched             :                   0/0
      Dropped             :                   0/0

If we have reached this point, we should be good to go for testing.

CE1#ping 172.16.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.2.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 7/12/29 ms
CE1#ping 172.16.2.2 size 1500
Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 172.16.2.2, timeout is 2 seconds:
.....

If you're getting this result, we're in good shape: standard-sized pings get through, but larger pings are dropped. If we modify our policies on the XRv we can change what traffic is blocked, including different sizes, addresses, protocols, etc.; a rate-limiting example is sketched below.
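
For illustration only (not part of the lab), something like the following would rate-limit mid-sized ICMP instead of dropping it. The class name is made up, and exact support for the police action in a Flowspec PBR policy varies by platform and release, so treat this as a sketch:

(FS controller, illustrative sketch)
class-map type traffic match-all medium-ICMP
 match protocol icmp
 match packet length 500-999
 end-class-map
!
policy-map type pbr FS-policy
 class type traffic medium-ICMP
  police rate 256 kbps
 !
 end-policy-map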

Flowspec for BGP, CenturyLink style

Let's do a sweet CenturyLink-style block of BGP by adding this config to the XRv and committing it.

(FS controller)
class-map type traffic match-all match-BGP
 match protocol tcp
 match destination-port 179
 end-class-map
!
!
policy-map type pbr FS-policy
 class type traffic match-BGP
  drop

If we get a successful commit, we should see the results rather quickly:

PE1#show flowspec afi-all det
AFI: IPv4
  Flow           :Proto:=1,Length:>=1000&<=65535
    Actions      :Traffic-rate: 0 bps  (bgp.1)
    Statistics                        (packets/bytes)
      Matched             :                   5/7570
      Dropped             :                   5/7570
  Flow           :Proto:=6,DPort:=179
    Actions      :Traffic-rate: 0 bps  (bgp.1)
    Statistics                        (packets/bytes)
      Matched             :                   5/389
      Dropped             :                   5/389


Followed by:

PE1#
*Sep  5 16:43:51.863: %BGP-3-NOTIFICATION: received from neighbor 10.1.0.100 4/0 (hold time expired) 0 bytes
*Sep  5 16:43:51.867: %BGP-5-NBR_RESET: Neighbor 10.1.0.100 reset (BGP Notification received)
*Sep  5 16:43:51.879: %BGP-5-ADJCHANGE: neighbor 10.1.0.100 Down BGP Notification received
*Sep  5 16:43:51.880: %BGP_SESSION-5-ADJCHANGE: neighbor 10.1.0.100 IPv4 Flowspec topology base removed from session  BGP Notification received
*Sep  5 16:43:51.881: %BGP_SESSION-5-ADJCHANGE: neighbor 10.1.0.100 IPv4 Unicast topology base removed from session  BGP Notification received

At this point BGP will flap forever. You can go in and delete the offending class on the XRv with:

(FS controller)
policy-map type pbr FS-policy
 no class type traffic match-BGP

However, you're going to notice that the Flowspec rule isn't immediately removed from all the devices. Because we're blocking BGP itself, the withdraw can't be delivered over a blocked session, so the rule stays installed and BGP stays blocked: a chicken-and-egg problem. The rule is only flushed when a session's hold timer expires and removes everything learned from it; with the 10/30 timers we configured, that happens quickly. Once the sessions re-establish against the corrected controller policy, the BGP-blocking rule is no longer advertised and things come back up.
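
Once the sessions stabilize, a quick check on a PE (output omitted; yours will vary) should show only the large-ICMP rule remaining:

PE1#show flowspec afi-all detail
PE1#show bgp ipv4 flowspec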

This is the end of the intro to Flowspec lab. Feel free to play around with different topologies, blocking lists, traffic generators and sinks, etc.

Resources

topology diagram

configs
