
Nexus 3k5, routing and leaking routes (VRF <=> default)

exane
Level 1

Hi guys,

I am using a 3548 to connect to partners.

I use a VRF with static routes, and my VRF can ping hosts behind the next hop. Perfect.

In my VRF, I can get routes from the default VRF:

[...]

vrf context myPartner

  address-family ipv4 unicast

    import vrf default map me-to-myPartner

[...]

router bgp 1000

  address-family ipv4 unicast

    redistribute direct route-map me-to-myPartner

    redistribute static route-map me-to-myPartner

[...]

WPO-TLON1-002# sh ip route vrf myPartner

IP Route Table for VRF "myPartner"

'*' denotes best ucast next-hop

'**' denotes best mcast next-hop

'[x/y]' denotes [preference/metric]

'%<string>' in via output denotes VRF <string>

172.18.92.80/28, ubest/mbest: 1/0, attached

    *via 172.18.92.94%default, Vlan3935, [20/0], 00:02:52, bgp-1000, external, tag 1000

So the route-map is working, and I am leaking routes from the default VRF into my VRF myPartner.

I should be happy with that, but from the default VRF I am not able to ping the hosts behind the gateway in the VRF.

Even with static routing (in the default VRF):

interface Ethernet1/35

  no cdp enable

  no switchport

  speed 1000

  ip address 172.18.49.222/29

ip route 172.16.112.252/32 ethernet 1/35 172.18.49.217

vrf context myPartner

  ip route 172.16.112.252/32 172.18.49.217

And then :

#ping 172.18.49.217 vrf myPartner

[...]

5 packets transmitted, 5 packets received, 0.00% packet loss

#ping 172.18.49.217

[...]

5 packets transmitted, 0 packets received, 100.00% packet loss

In the documentation (http://www.cisco.com/en/US/docs/switches/datacenter/nexus3548/sw/unicast/602_A1_1/l3_virtual.html) we can read: "Route leaking to the default VRF is not allowed because it is the global VRF."

OK, well, but what should I do?

8 Replies

jamie.grive
Level 1

I don't know how this is done on Nexus, but I think you need a route in the VRF which points to the default VRF using the 'global' keyword? But I guess that command isn't there, so perhaps point it at an interface in the global VRF - maybe a loopback or something.
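(Purely for comparison, this is roughly what that classic approach looks like in IOS-style syntax, e.g. on a Catalyst Sup2T; the prefixes, next hops and interface below are placeholders rather than anything from this thread, and the 'global' keyword is not available on the Nexus 3548:)

! IOS-style VRF <=> global leaking - NOT valid NX-OS syntax.
! Give the VRF a route whose next hop is resolved in the global table:
ip route vrf myPartner 192.0.2.0 255.255.255.0 198.51.100.1 global
! Give the global table a route into the VRF by pointing a global static
! at the VRF-facing interface (plus next hop on a multi-access segment):
ip route 203.0.113.0 255.255.255.0 GigabitEthernet1/1 192.0.2.1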

I have some Sup2Ts doing GRT <=> VRF routing well, but this is quite different on the Nexus. Non-default-to-default VRF routing is quite special...

Figures... I guess you can't do anything with route-targets in the default context?

So it wouldn't work if you did a static route in the vrf with an interface in the default?

Sorry I'm not being much help!

You're right Jamie, it does not work with a static route in the default. Really strange behaviour ...
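(For reference, the variant being discussed would look roughly like this on NX-OS - an assumed sketch reusing the prefix, interface and next hop from the earlier post, not necessarily the exact configuration that was tested:)

vrf context myPartner
  ! Static route inside the VRF pointing at an interface that belongs to
  ! the default VRF; per the discussion above, this did not help either.
  ip route 172.16.112.252/32 Ethernet1/35 172.18.49.217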

We are progressing.

Suppose we forget the "default" VRF.

Loopback (lo10, 100.100.100.1/24) <=> VRF prod <=> VRF partner <=> fiber <=> partner <=> target host

If we create a PRODUCTION VRF and a PARTNER VRF, and import/export routes between the VRFs via BGP, I have routes.
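(For reference, a minimal sketch of one common way to do this on NX-OS, using route-target import/export under BGP. The VRF names and AS number are taken from the output below, but the route-map name and route-target values are assumptions for illustration, not necessarily the exact configuration used here.)

feature bgp

route-map ALLOW-ALL permit 10

vrf context PRODUCTION
  address-family ipv4 unicast
    route-target export 100:1
    route-target import 100:2

vrf context PARTNER
  address-family ipv4 unicast
    route-target export 100:2
    route-target import 100:1

router bgp 100
  vrf PRODUCTION
    address-family ipv4 unicast
      redistribute direct route-map ALLOW-ALL
      redistribute static route-map ALLOW-ALL
  vrf PARTNER
    address-family ipv4 unicast
      redistribute direct route-map ALLOW-ALL
      redistribute static route-map ALLOW-ALL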

# sh ip route vrf PARTNER

10.10.10.0/24, ubest/mbest: 1/0, attached

    *via 10.10.10.1, Eth1/52, [0/0], 00:22:44, direct

10.10.10.1/32, ubest/mbest: 1/0, attached

    *via 10.10.10.1, Eth1/52, [0/0], 00:22:44, local

11.11.11.1/32, ubest/mbest: 1/0

    *via 10.10.10.2, [1/0], 00:13:36, static

100.100.100.0/24, ubest/mbest: 1/0, attached

    *via 100.100.100.1%PRODUCTION, Lo10, [20/0], 00:18:53, bgp-100, external, tag 100

# sh ip route vrf PRODUCTION

10.10.10.0/24, ubest/mbest: 1/0, attached

    *via 10.10.10.1%PARTNER, Eth1/52, [20/0], 00:34:43, bgp-100, external, tag 100

11.11.11.1/32, ubest/mbest: 1/0

    *via 10.10.10.2%PARTNER, [20/0], 00:34:43, bgp-100, external, tag 100

100.100.100.0/24, ubest/mbest: 1/0, attached

    *via 100.100.100.1, Lo10, [0/0], 01:12:47, direct

100.100.100.1/32, ubest/mbest: 1/0, attached

    *via 100.100.100.1, Lo10, [0/0], 01:12:47, local

Well, it looks good: I have the static route leaked from the PARTNER VRF into the PRODUCTION VRF, and I have the return route leaked from the PRODUCTION VRF into the PARTNER VRF.

I am able to ping, from my PARTNER VRF, the far end of the partner link (10.10.10.2/24).

I am able to ping, from my PRODUCTION VRF, our end of the partner link (10.10.10.1/24).

I am NOT able to ping 10.10.10.2 from my PRODUCTION VRF...

We are progressing, but this is not successful... any idea?

Now that you are doing VRF-lite route leaking between two VRFs, it is a much better supported configuration (there seem to be a lot of issues with the default/global VRF, as you've said).

It could be something simple - are you sourcing the ping from an interface that the other side has a return route to (e.g. from Loopback10)? Can you attach the new configs?
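(For example, something along these lines, assuming the Loopback10 address 100.100.100.1 is the one the remote side has a return route to:)

ping 10.10.10.2 vrf PRODUCTION source 100.100.100.1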

Well, after many discussions about this, it seems that we are a little bit too early on this technology.

We are using a 3548/3548/3048 trio to build a simple low-latency multicast delivery platform, with fresh-out-of-the-oven NX-OS 6 and its newly supported features... so we are sure to hit the limitations.

Multicast routing through VRFs is one example.

So we will simplify everything by not using VRFs, VRF-lite route leaking, etc.

We will have to be more careful with route distribution and usage in the default VRF, but we will be able to use the full power of the 3548 in warp mode.

For now, Nexus 1 - 0 network team

:-)

Well OK as long as you are able to get the result you need.

This sort of thing happens a lot when you are at the 'bleeding edge' of new technologies on new platforms. Often you are the beta testers for the developers! I would recommend feeding back your experiences to the TAC if you can.
