We have a redundant connection to our ISP from two different sites: BG1 (active border gateway) and BG2 (standby border gateway). The ISP uses local preference to select the path, and the border gateways use RANK (at both sites) to send traffic to the ISP. Our two border gateways are connected across an MPLS backbone using BGP between PE and CE, and the routing protocol between the border gateways and the CEs is OSPF.
The problem occurs when the interface between BG1 and the ISP goes down. BG1 sends LSA updates to the CE (1500 routes) to inform OSPF that the link is down (expected behavior), so all traffic is redirected to BG2. But after 60 seconds BG1 sends new LSA updates (400 routes) to CE1 even though the interface is still down, so CE1 tries to forward those routes through BG1, and since the interface is down those networks are unreachable.
Can you please help us understand which timers or parameters are causing this abnormal behavior?
Thanks, any comment is very welcome.
To understand why BG1 has behaved in this way, we need to see parts of the configuration.
If you like, remove public IP addresses (replace them with whatever you like) and user/password pairs, and use the attach option.
The problem may be caused by mutual redistribution between OSPF and BGP performed on two different routers.
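To make the suspected cause concrete, here is a minimal sketch of what mutual BGP/OSPF redistribution looks like on a Cisco IOS box. The AS number and OSPF process ID are placeholders, not taken from your configuration; when a second router carries the same configuration, a prefix can travel BGP → OSPF on one box and OSPF → BGP on the other, looping back:

```
! Hypothetical sketch of mutual redistribution on ONE border gateway
! (AS 65000 and OSPF process 1 are placeholder values).
router ospf 1
 redistribute bgp 65000 subnets
!
router bgp 65000
 redistribute ospf 1 match internal external 1 external 2
```

With this on both gateways and no filtering, neither router can tell an external LSA that originated from its own BGP table apart from one injected by its peer.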
Hope to help
Hi Giuseppe, thanks for your help.
I attach the configuration for CE1 (actually we have two 6509s: cho1ipsw3, cho2ipsw4) and CE2 (we also have two 6509s: neo1ipsw1, neo2ipsw2). I also attach some slides that describe the problem and the architecture. The configuration for BG1 and BG2 is very simple; I will send it later.
Thanks a lot.
You have provided a lot of detail on the Cisco boxes, and I see you have prepared a case study with a nice PPT presentation.
Here are my first comments after a look at the documentation.
The problem is related to the redistribution of BGP into OSPF.
We can see there are at least 1500 external routes in the IP routing table (by the way, `show ip ospf database database-summary` is very handy here on Cisco boxes). That is quite a number.
If you trust the packet capture results (that is, they come from a third-party sniffer rather than the Cisco switches) and you are confident the wrong LSA updates are really sent by the BG device, I would open a case with your Nokia tech support.
There is a chance that the single failure of an interface triggers a massive purge of the OSPF database.
The box can be pushed into crisis with very high CPU usage; unable to cope with such a massive event, after a few seconds the redistribution process may wrongly conclude that some BGP routes are still alive and send the new, incorrect updates.
Before doing this, you should verify the BGP configuration of the BG device:
Is there only one eBGP session on the Gp interface, or are there other BGP sessions?
If there are multiple BGP sessions, does the redistribution handle this by checking the BGP next hop of the routes, or does it simply look for the presence of a prefix in the BGP table?
These are just some suggestions on what to check before opening a case.
If there is only one eBGP session and you are redistributing BGP into OSPF, there is little doubt it is a scalability issue.
Thanks for giving me an update on the latest GPRS architecture; I had some exposure in the past, but we were the MPLS backbone guys interconnecting the various SGSN and GGSN nodes.
I also remember the BGP connections to the GRX.
Hope to help
Thanks so much for your help. I attach the config for the BG (BGP and OSPF); as you can see, those configurations are very simple, and we only have one eBGP session. The thing is, we had the same topology and configuration before and redundancy was working, but since we changed the routing protocol between CE and PE to BGP (it was OSPF before) we have started facing this problem. The number of routes is very big because those routes belong to the roaming networks. Do you think we can change some timers or parameters so that a failure on one interface cannot cause a massive purge of the OSPF database?
Thanks for your help again; any comment is really appreciated.
>> redundancy was working but since we change the routing protocol between CE-PE to BGP (before ospf) we start to face this problem
The weak point of the current design is that a single failure can cause this massive purge of the OSPF database.
>> do you think that we can change some timers or parameters in order to avoid that the failure in one interface can cause a massive purge on the ospf database??
That is a very difficult question that you should ask the FW developers:
A possible suggestion is the so-called OSPF stub router function:
this feature advertises OSPF prefixes with maximum metric until BGP convergence completes.
It might help in your case because it could stop the BG (FW box) from attracting traffic over routes that are not yet usable.
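On Cisco IOS the stub router function is enabled under the OSPF process; this is a sketch with a placeholder process ID (your Nokia/FW platform would need the equivalent feature, if it exists):

```
! OSPF stub-router sketch (process ID 1 is a placeholder).
router ospf 1
 ! At startup, originate router LSAs with maximum metric until BGP
 ! signals convergence, so neighbors keep preferring the other
 ! gateway instead of using half-converged routes through this one.
 max-metric router-lsa on-startup wait-for-bgp
```

The routes are still advertised, just with an unattractive cost, so they act only as a last resort during convergence.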
Be also aware that you are mutually redistributing BGP and OSPF at two points, the BGW and BGX nodes.
On Cisco routers, for example, you can use route tags to prevent routes from being injected back:
if BGW loses the BGP session, it removes all OSPF external routes coming from BGP.
Then, if the OSPF external routes injected by BGX arrive at the node, it may try to redistribute them into BGP.
Once the routes are in BGP, they can be re-injected back into the OSPF domain.
This kind of issue is a question of timing; as I wrote above, Cisco routers have route tags to prevent a route that came from BGP into OSPF at node BGX from being re-imported into BGP at node BGW.
I don't think this is what is happening, but it is part of the scenario.
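A sketch of tag-based loop prevention on Cisco IOS, shown for one gateway and mirrored on the other (tag value, AS number, and process ID are placeholders):

```
! Hypothetical tag-based loop prevention (tag 100, AS 65000, OSPF 1
! are placeholder values). Tag everything redistributed from BGP into
! OSPF, and refuse to re-import any OSPF route carrying that tag.
route-map BGP-TO-OSPF permit 10
 set tag 100
!
route-map OSPF-TO-BGP deny 10
 match tag 100
route-map OSPF-TO-BGP permit 20
!
router ospf 1
 redistribute bgp 65000 subnets route-map BGP-TO-OSPF
!
router bgp 65000
 redistribute ospf 1 route-map OSPF-TO-BGP
```

With both gateways configured this way, a prefix that entered OSPF from BGP on one node can never be redistributed back into BGP on the other.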
Hope to help
Thanks for the advice, and sorry to bother you again.
We made some tests on BG2 (the backup border gateway). When we shut down the interface between BG2 and the ISP, the result was the same as before (some wrong LSAs are received after 60 seconds). Is there any timer or parameter that could cause this update in BGP? We suspect that maybe the problem is not due to a massive purge of the OSPF database. Another question is about your comment "you can use route tags to avoid routing injections back": what kind of routes can be injected back? Could those routes cause the wrong LSAs?
Thanks so much for your help.
Reviewing all the information you have provided in this thread, I suggest, if possible, the following test:
isolate one BG (BG02W or BG02X) from the network on both the Gp and Gn sides.
In this condition, perform the Gp failure test on the other node.
If you see the expected OSPF results (all LSAs are withdrawn and no unwanted LSAs are issued after 60 seconds), then the problem is the mutual redistribution, OSPF to BGP and BGP to OSPF, that you perform on both BG devices.
>> what kind of routes can be injected back?
During the fault, device BG1 removes its own LSAs after the eBGP session drops, as the underlying link is down.
After this massive purge, device BG2 may re-issue its own external LSAs, or BG1 may simply look at its OSPF database and find them there.
These external routes are received on BG1 and imported into BGP.
Once the BGP routes exist, they can be redistributed into OSPF, causing the wrong LSAs you see.
These suggestions are just that; this is quite a difficult issue.
If the behaviour doesn't change even with BG2 isolated when BG1's Gp fails, you can proceed with the case with Nokia support.
Hope to help