This post builds on my last one, DMVPN, and here we will discuss the routing protocol options along with their configurations. I will break each protocol out into a separate post to help keep things straight, since putting them all together has the potential to get confusing (for both you and me!). The routing protocol options I will document are OSPF, RIPv2, EIGRP, and BGP. Each has some unique features and quirks, so I will try to point them out where necessary.
One of the joys of EIGRP in a DMVPN network is split horizon. A quick refresher on split horizon: it is the rule that prohibits a router from advertising a route out the same interface the router itself uses to reach that destination. This is done to prevent loops in the network, but with DMVPN the hub needs to re-advertise spoke routes out the same tunnel interface it learned them on, so we have to disable this feature with the no ip split-horizon eigrp <AS#> command.
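As a quick syntax reference (the per-router commands appear in the configs below), the knob is applied per interface and per AS; on newer IOS you can also sanity-check the state with show ip eigrp interfaces detail, though the exact output varies by release:
Router(config)# interface Tunnel0
Router(config-if)# no ip split-horizon eigrp 100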
Quick note: when I post the configurations for each site I will only show the routing protocol additions. If you need information on the underlying DMVPN configuration, see my previous post.
First up, the DMVPN hub:
First thing we should do is create a loopback interface and address so we have something to see and ping.
Rack1DMVPN(config)# int l0
Rack1DMVPN(config-if)# ip address 100.100.100.100 255.255.255.255
Now onto the Tunnel configuration
Rack1DMVPN(config)# interface Tunnel0
Rack1DMVPN(config-if)# ip address 192.168.11.1 255.255.255.0
Rack1DMVPN(config-if)# no ip redirects
Rack1DMVPN(config-if)# ip mtu 1400
In large EIGRP DMVPN deployments it may be necessary to increase the EIGRP hold time. This gives the DMVPN hub time to recover while all of the neighbors converge. The hold time should not exceed 7 times the EIGRP hello interval, which works out to 35 seconds with a 5-second hello. (That is from the Cisco DMVPN Design and Implementation Guide.)
Rack1DMVPN(config-if)# ip hold-time eigrp 100 35
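Note that 35 seconds is just 7 times the 5-second hello. If you ever adjust the hello interval on the tunnel, keep the pair in sync; both knobs live on the interface, for example:
Rack1DMVPN(config-if)# ip hello-interval eigrp 100 5
Rack1DMVPN(config-if)# ip hold-time eigrp 100 35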
Typically in EIGRP the next hop advertised is the advertising router itself, but in DMVPN you want the spokes to learn each other as the next hop so spoke-to-spoke traffic does not have to transit the hub. To allow this to happen, you need to add no ip next-hop-self on the hub's tunnel interface.
Rack1DMVPN(config-if)# no ip next-hop-self eigrp 100
This command lets the hub automatically add each spoke to its multicast replication list as the spoke registers via NHRP, so the hub's EIGRP hellos and updates reach the spokes.
Rack1DMVPN(config-if)# ip nhrp map multicast dynamic
Rack1DMVPN(config-if)# ip nhrp network-id 1
Rack1DMVPN(config-if)# ip nhrp holdtime 600
Rack1DMVPN(config-if)# ip virtual-reassembly
Rack1DMVPN(config-if)# no ip route-cache cef
Here we will disable split-horizon
Rack1DMVPN(config-if)# no ip split-horizon eigrp 100
Rack1DMVPN(config-if)# tunnel source GigabitEthernet0/1
Rack1DMVPN(config-if)# tunnel mode gre multipoint
Rack1DMVPN(config-if)# tunnel protection ipsec profile Labbing
Now we can configure EIGRP AS100
Rack1DMVPN(config)#router eigrp 100
Let's define the networks we want to advertise. Be sure not to advertise the outside interface of your router.
Rack1DMVPN(config-router)# network 100.100.100.100 0.0.0.0
Rack1DMVPN(config-router)# network 192.168.11.0
Disable auto-summary
Rack1DMVPN(config-router)# no auto-summary
And for good measure, make the loopback interface passive. No need to send hellos there.
Rack1DMVPN(config-router)# passive-interface Loopback0
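Before moving on to the spokes, it doesn't hurt to confirm what the hub's EIGRP process thinks it is doing. Neither check is required, but show ip protocols lists the advertised networks and passive interfaces, and show ip eigrp interfaces should list Tunnel0 once it is participating (output omitted here):
Rack1DMVPN# show ip protocols
Rack1DMVPN# show ip eigrp interfaces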
Now onto R3!
First thing, let's get that loopback interface created.
Rack1R3(config)# int loop0
Rack1R3(config-if)# ip address 3.3.3.3 255.255.255.255
Now onto the Tunnel
Rack1R3(config)# interface Tunnel0
Rack1R3(config-if)# ip address 192.168.11.3 255.255.255.0
Rack1R3(config-if)# no ip redirects
Rack1R3(config-if)# ip mtu 1400
Tweak the EIGRP hold timer to match the hub.
Rack1R3(config-if)# ip hold-time eigrp 100 35
Make sure we leave the next-hop information unchanged
Rack1R3(config-if)# no ip next-hop-self eigrp 100
Rack1R3(config-if)# ip nat inside
Rack1R3(config-if)# ip nhrp map multicast dynamic
Rack1R3(config-if)# ip nhrp map 192.168.11.1 150.1.254.254
Configure a multicast map pointing to the outside interface of the DMVPN hub router.
This command is just like the broadcast keyword on a Frame Relay map.
Rack1R3(config-if)# ip nhrp map multicast 150.1.254.254
Rack1R3(config-if)# ip nhrp network-id 1
Rack1R3(config-if)# ip nhrp holdtime 600
Rack1R3(config-if)# ip nhrp nhs 192.168.11.1
And disable split horizon.
Rack1R3(config-if)# no ip split-horizon eigrp 100
Rack1R3(config-if)# tunnel source GigabitEthernet0/1
Rack1R3(config-if)# tunnel mode gre multipoint
Rack1R3(config-if)# tunnel protection ipsec profile Labbing
Now for the EIGRP process
Rack1R3(config)# router eigrp 100
Rack1R3(config-router)# passive-interface Loopback0
Rack1R3(config-router)# network 3.3.3.3 0.0.0.0
Rack1R3(config-router)# network 192.168.11.0
Rack1R3(config-router)# no auto-summary
Onto R4
Again, let's create a loopback so we have something to advertise.
Rack1R4(config)# interface Loopback0
Rack1R4(config-if)# ip address 4.4.4.4 255.255.255.255
Tunnel interface:
Rack1R4(config)# interface Tunnel0
Rack1R4(config-if)# ip address 192.168.11.4 255.255.255.0
Rack1R4(config-if)# no ip redirects
Rack1R4(config-if)# ip mtu 1400
Rack1R4(config-if)# ip hold-time eigrp 100 35
Rack1R4(config-if)# no ip next-hop-self eigrp 100
Rack1R4(config-if)# ip nhrp map 192.168.11.1 150.1.254.254
Rack1R4(config-if)# ip nhrp map multicast 150.1.254.254
Rack1R4(config-if)# ip nhrp network-id 1
Rack1R4(config-if)# ip nhrp nhs 192.168.11.1
Rack1R4(config-if)# no ip split-horizon eigrp 100
Rack1R4(config-if)# tunnel source GigabitEthernet0/1
Rack1R4(config-if)# tunnel mode gre multipoint
Rack1R4(config-if)# tunnel protection ipsec profile Labbing
EIGRP Process:
Rack1R4(config)# router eigrp 100
Rack1R4(config-router)# passive-interface Loopback0
Rack1R4(config-router)# network 4.4.4.4 0.0.0.0
Rack1R4(config-router)# network 192.168.11.0
Rack1R4(config-router)# no auto-summary
Now for R5
First the loopback interface
Rack1R5(config)# interface Loopback0
Rack1R5(config-if)# ip address 5.5.5.5 255.255.255.255
Now the tunnel interface
Rack1R5(config)# interface Tunnel0
Rack1R5(config-if)# ip address 192.168.11.5 255.255.255.0
Rack1R5(config-if)# no ip redirects
Rack1R5(config-if)# ip mtu 1400
Rack1R5(config-if)# ip hold-time eigrp 100 35
Rack1R5(config-if)# no ip next-hop-self eigrp 100
Rack1R5(config-if)# ip nhrp map 192.168.11.1 150.1.254.254
Rack1R5(config-if)# ip nhrp map multicast 150.1.254.254
Rack1R5(config-if)# ip nhrp network-id 1
Rack1R5(config-if)# ip nhrp holdtime 600
Rack1R5(config-if)# ip nhrp nhs 192.168.11.1
Rack1R5(config-if)# ip virtual-reassembly
Rack1R5(config-if)# no ip route-cache cef
Rack1R5(config-if)# no ip route-cache
Rack1R5(config-if)# tunnel source GigabitEthernet0/1
Rack1R5(config-if)# tunnel mode gre multipoint
Rack1R5(config-if)# tunnel protection ipsec profile Labbing
And finally EIGRP
Rack1R5(config)# router eigrp 100
Rack1R5(config-router)# passive-interface Loopback0
Rack1R5(config-router)# network 5.5.5.5 0.0.0.0
Rack1R5(config-router)# network 192.168.11.0
Rack1R5(config-router)# no auto-summary
So now we should test everything.
Let's check our EIGRP neighbors on some of the routers.
DMVPN:
Rack1DMVPN#sh ip eigrp neighbors
EIGRP-IPv4 Neighbors for AS(100)
H Address Interface Hold Uptime SRTT RTO Q Seq
(sec) (ms) Cnt Num
2 192.168.11.3 Tu0 31 00:09:22 5 1362 0 1008
1 192.168.11.4 Tu0 33 00:16:07 1 1362 0 1069
0 192.168.11.5 Tu0 32 00:16:09 1 1362 0 1067
Rack1DMVPN#
Yup, looks like all the spoke neighbors are there.
Let's check one spoke and make sure we are neighbored up.
Rack1R3#sh ip eigrp neighbors
IP-EIGRP neighbors for process 100
H Address Interface Hold Uptime SRTT RTO Q Seq
(sec) (ms) Cnt Num
0 192.168.11.1 Tu0 34 00:10:43 9 1362 0 21
Rack1R3#
OK, so it looks like we have neighbors. Now let's check the routing table on R3.
Rack1R3#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route
Gateway of last resort is 150.1.13.13 to network 0.0.0.0
100.0.0.0/32 is subnetted, 1 subnets
D 100.100.100.100 [90/27008000] via 192.168.11.1, 00:11:17, Tunnel0
3.0.0.0/32 is subnetted, 1 subnets
C 3.3.3.3 is directly connected, Loopback0
4.0.0.0/32 is subnetted, 1 subnets
D 4.4.4.4 [90/28288000] via 192.168.11.4, 00:11:17, Tunnel0
5.0.0.0/32 is subnetted, 1 subnets
D 5.5.5.5 [90/28288000] via 192.168.11.5, 00:11:17, Tunnel0
C 192.168.11.0/24 is directly connected, Tunnel0
150.1.0.0/24 is subnetted, 1 subnets
C 150.1.13.0 is directly connected, GigabitEthernet0/1
S* 0.0.0.0/0 [1/0] via 150.1.13.13
Rack1R3#
We have routes to all of the remote locations' loopback addresses.
Let’s check our DMVPN connections:
Rack1R3#sh dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
N - NATed, L - Local, X - No Socket
# Ent --> Number of NHRP entries with same NBMA peer
NHS Status: E --> Expecting Replies, R --> Responding
UpDn Time --> Up or Down Time for a Tunnel
==========================================================================
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:1,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 150.1.254.254 192.168.11.1 UP 11:42:25 S
Rack1R3#
Now, let's see if we can ping the R4 and R5 loopbacks from the R3 loopback.
Rack1R3#p 5.5.5.5 so l0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 5.5.5.5, timeout is 2 seconds:
Packet sent with a source address of 3.3.3.3
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/4/8 ms
Rack1R3#ping 4.4.4.4 so l0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 4.4.4.4, timeout is 2 seconds:
Packet sent with a source address of 3.3.3.3
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/4/4 ms
Let's check the DMVPN again:
Rack1R3#sh dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
N - NATed, L - Local, X - No Socket
# Ent --> Number of NHRP entries with same NBMA peer
NHS Status: E --> Expecting Replies, R --> Responding
UpDn Time --> Up or Down Time for a Tunnel
==========================================================================
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:3,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 150.1.254.254 192.168.11.1 UP 11:43:53 S
1 150.1.9.4 192.168.11.4 UP 00:00:01 D
1 150.1.10.5 192.168.11.5 UP 00:00:04 D
Rack1R3#
Look at that, we have full connectivity and the dynamic spoke-to-spoke tunnels were built between the neighbors!
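If you want one more data point that spoke-to-spoke traffic is taking the direct path instead of hairpinning through the hub, a traceroute sourced from the loopback should show the remote spoke's tunnel address (192.168.11.4 here) as the only hop once the dynamic tunnel is up. I am leaving the output out since it will vary with your lab:
Rack1R3#traceroute 4.4.4.4 source Loopback0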
Up next, RIP.
Yandy said:
One thing to keep in mind with EIGRP is that it's usually a good idea to make the spoke routers stubs. There's no reason queries should be going from hub to spoke for loss of reachability. This is just one thing that helps EIGRP scale on large DMVPN deployments.
fryguy said:
Good point - I do agree there. It all comes down to the design at the remote site, I guess.
It is also good to summarize at the boundaries where possible.
Thanks for the input, much appreciated.
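For anyone wondering what the stub suggestion looks like in practice, here is a minimal, untested sketch using the lab's AS 100; the stub statement goes under the EIGRP process on each spoke:
Rack1R4(config)# router eigrp 100
Rack1R4(config-router)# eigrp stub connected summary
As for summarization, in a spoke-to-spoke design like this lab you would summarize toward the core rather than out the tunnel itself, since the spokes need the specific routes (with spoke next hops) to build their direct tunnels.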
Steve said:
I have a couple of questions. I am currently working on upgrading our hub routers from 72ks to ASRs. With this came the chance to revisit the spoke configs. I am looking at 130 spokes, and growth is pretty steady at roughly 5 a month.
What defines a large-scale deployment?
If the answer matches what I'm looking at, then I should go ahead and change the EIGRP hold timer to the Cisco-recommended value (35). I am a little on the fence with this decision. I like to let EIGRP run as native as possible.
fryguy said:
I would suggest contacting your Cisco SE and having them ping their internal teams. They are going to give you the most up-to-date information on this and will also be able to take into consideration any hardware/configs that you have. My thoughts on a deployment of that size: it might be time to change the hold timer, as that is a very large number of neighbors to maintain.
Will lip jiang said:
Only R3 has "ip nat inside" configured on the tunnel interface. Should it be deleted?
derek said:
Thanks for the great article. My company has decided to deploy DMVPN with a single hub and multiple spokes geographically dispersed. I am using a Cisco 5850 running IOS 12.4(15)T7 as the hub router, and a Cisco 1841 running IOS 12.4(24)T5 as the spoke. I have the same configuration as your example. The only difference is that I have physical interfaces for my internal networks instead of loopbacks. I can ping across the VPN to both the Tunnel and FastE interfaces. The problem: I cannot route to anything behind the spoke or hub. I am using EIGRP as the routing protocol.