We are about to build a new data centre to eventually replace one of our two current data centres. The project manager would like to use the same address range in the new data centre as in the one being replaced, so servers can be moved in stages with minimal config changes. The data centre being replaced is quite small and has only one VLAN.
Each of our servers uses dual NICs, with the default gateway set to the HSRP address of the VLAN.
The two current data centres are connected by an EtherChannel, which we route across.
The only way I can think of doing this is to have a trunk between the three data centres. When servers are moved from the old data centre to the new one, their DG would remain unchanged (the HSRP address of the VLAN at the old data centre).
To me this just doesn't seem like good practice. Can anyone see any potential issues with it? I need to put together a good argument for giving the new data centre a new address range.
Any help much appreciated.
Check out the Cisco SRNDs (Solution Reference Network Design guides) for the data centre. They will show you best practices. Also, your local Cisco SE can help - that's what they are there for.
Hope it helps.
Sounds like you will have to trunk the VLANs across in order to use the same IP space. Otherwise, get ready to add firewalls or some other devices to NAT between the sites with the same addresses.
Not sure from your description why you need a trunk between the three data centres.
Yes, you would need a trunk between the old DC and the new DC, but the other current DC can still just be routed.
So currently you have:
DC1 -> L3 EtherChannel -> DC2 (or is it an L2 EtherChannel? It's not clear.)
During migration:
DC1 -> L3 EtherChannel -> DC2 -> L2 trunk -> new DC
Note that since there is only one VLAN, the connection between DC2 and the new DC does not have to be a trunk; it could just be an access port in that VLAN.
When all servers have moved from DC2, then:
DC1 -> L3 EtherChannel -> new DC
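To make the DC2-to-newDC link concrete, here is a rough Cisco IOS sketch of both options; the interface name and VLAN number are assumptions for illustration, since neither appears above:

```
! Option A: dot1q trunk carrying just the server VLAN (VLAN 10 assumed)
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10
!
! Option B: with only one VLAN, a plain access port does the same job
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
```

Option B is simpler to audit and avoids accidentally extending other VLANs later.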
I have done exactly this in the past to migrate from one building to another. All you are doing is temporarily extending the L2 domain, and there is nothing actually wrong with that.
Sorry, I haven't made myself very clear.
There wouldn't be a connection between DC2 and the new DC, therefore I would need a trunk that spans all three sites.
Between DC1 and DC2 we use a Layer 2 EtherChannel, but we route across it by having an SVI in the same network at both sites. We need a trunk from DC1 to DC2 as we have three non-routed VLANs across this link.
So I should be fine having servers in the new DC whose DG still points to an HSRP address that sits in DC2?
"So I should be fine having servers in the new DC whose DG still points to an HSRP address that sits in DC2?"
As long as the VLAN those servers belong to is extended at Layer 2 over both links, and is not routed between DC1 and DC2, then yes, you will be fine.
But if the VLAN in DC2 is only reachable from DC1 by routing, then no, it won't work.
I've been thick - of course it won't work, as we route between DC1 and DC2.
I was thinking I could just add a trunk and get from the new DC to DC2 across it.
"I've been thick - of course it won't work, as we route between DC1 and DC2."
"We need a trunk from DC1 to DC2 as we have three non-routed VLANs across this link."
The above two statements are confusing. Is the link between DC1 and DC2 an L2 EtherChannel or an L2 EtherChannel trunk?
If it's an EtherChannel trunk, let's say that the VLAN you are migrating is VLAN 20.
The SVI for VLAN 20 is on the switch in DC2.
There is no reason why you can't simply add VLAN 20 to the trunk EtherChannel between DC1 and DC2, then add another trunk (or simply an L2 access link in VLAN 20) between the new DC and DC1.
Then any device in the new DC getting to its DG simply has its traffic L2-switched from the new DC to DC1 to DC2, where the L3 SVI for VLAN 20 is.
But it does depend on how traffic is routed to and from VLAN 20 at present.
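As a rough sketch of the two changes just described (the port-channel number, interface name, and VLAN ID are assumptions, not taken from your actual config):

```
! On the DC1 and DC2 switches: allow VLAN 20 over the existing EtherChannel trunk
interface Port-channel1
 switchport trunk allowed vlan add 20
!
! On DC1: new link towards the new DC, as a plain L2 access port in VLAN 20
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 20
```

Note the use of `add` - setting the allowed list without it would wipe the existing VLANs from the trunk.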
So could you clarify:
1) The DC1 -> DC2 link is L2. Is it an EtherChannel or an EtherChannel trunk?
2) For the VLAN in DC2 you want to migrate:
i) do devices in DC1 need to communicate with the servers in it?
ii) if they do, how do they reach them? i.e. are the devices in DC1 in a different VLAN, with that traffic then routed onto VLAN 20?
I've attached a diagram so you can see how it's set up. The EtherChannel is a trunk.
As you can see, the servers at DC1 communicate with the servers at DC2 by routing across VLAN 20. We do this by having an SVI in VLAN 20 at both sites, which form an EIGRP neighbour relationship. The DG of the servers is the HSRP address of the VLAN.
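For reference, the DC2 side of a setup like that might look roughly like this in Cisco IOS - the addresses, HSRP group, and EIGRP AS number are all made up for illustration:

```
! DC2 switch: SVI in VLAN 20 with HSRP, forming an EIGRP adjacency with DC1
interface Vlan20
 ip address 10.20.0.2 255.255.255.0
 standby 20 ip 10.20.0.1        ! HSRP VIP - the servers' default gateway
 standby 20 priority 110
 standby 20 preempt
!
router eigrp 100
 network 10.20.0.0 0.0.0.255
```

The DC1 switch would carry a matching SVI in the same subnet with a lower HSRP priority.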
We would be migrating the servers in VLAN 15 at DC2 to a new data centre, which would only have connectivity to DC1.
If I allowed VLAN 15 across the trunk and then set up a connection from DC1 to the new data centre, I should be able to keep the same address range for the migration. If servers in the new DC had to talk to a server in DC1, their default gateway would be at DC2, so the traffic would need to go down there first and back again.
As long as you accept the fact that traffic to and from servers in the new DC will have to go via DC2, then it should work.
I would recommend testing this setup before committing to the migration, i.e. assign an interface on your new switch in the new DC an address out of the VLAN 15 range, then test to make sure connectivity between DC1 and DC2 servers still works as expected and that both DC1 and DC2 servers have connectivity to that test address.
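One way to stage that test is sketched below. Note this uses an SVI in VLAN 15 rather than a loopback, since a loopback addressed out of the VLAN 15 subnet would not answer ARP on the extended segment; the address and VLAN presence on the new switch are assumptions:

```
! newDC switch: temporary test SVI in the extended VLAN 15
vlan 15
!
interface Vlan15
 ip address 10.15.0.250 255.255.255.0   ! unused address from the VLAN 15 range
 no shutdown
```

Then ping 10.15.0.250 from servers in both DC1 and DC2, and remove the SVI once the migration path is proven.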