hi! if i'm configuring 5 servers within a private network, with one of these servers configured with 2 NICs (1 private, 1 corporate network), in terms of design and security, where should i place these servers? The core (6509 in the DC), the distribution (network room L3 routing) or the edge switches? Thanks.
The servers' private NICs will be in their own vlan.
You can have them connected to access-layer switches following the hierarchical model; there is no strict reason to put them directly on the core or distribution.
If you want to build a separate network with no communication with the normal one, you could consider using MPLS VPN or simply VRF-lite: this way the private addresses are isolated.
So physical connections can be at the access layer, and the distribution-layer L3 setup can include the use of VRF-lite.
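As a minimal VRF-lite sketch (the vrf name "SERVERS", vlan 100, RD and addressing below are made up for illustration; older 6500 IOS uses `ip vrf`, newer releases use `vrf definition`):

```
! Create the VRF that will hold the private server vlan
ip vrf SERVERS
 rd 65000:100
!
! Put the server SVI into the VRF - its routes are now
! invisible to the global routing table
interface Vlan100
 ip vrf forwarding SERVERS
 ip address 10.10.100.1 255.255.255.0
```

Since there is no route leaking configured between the VRF and the global table, the private addresses stay isolated from the corporate network.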
In a hierarchical design you can have a server-farm block where you place two dedicated distribution switches and high-speed access-layer switches.
This can be collapsed into two dedicated switches.
Hope to help
hi! will there be any security issues in this case? what are the best practices? These servers will only be used by a small group of users in one of the departments.
An additional question, which might not be related to the above: if i have 2 pairs of 6509s (one old pair and one new pair), can i create the same vlan id on the new 6509 sup, or must i create a new range of vlan ids? (this is for a migration that goes in phases, moving the vlan interfaces one by one to the new pair)
Typically servers would be connected to the distribution/core switches via their own set of switches, as suggested by Giuseppe. If this is not feasible, then note that all but one of the servers will be singly connected - how important is uptime for these servers? If it is important, do your distro switches have dual supervisors, which could mitigate to some extent the single NIC connection?
Unless you have a collapsed distro/core, servers should never be placed in the core. Just to be clear, servers should never really be directly connected into the distro layer either, but there may be mitigating circumstances, i.e. cost/resiliency etc.
As for security, certainly the server with connections to both the private and corporate networks should have routing disabled between its 2 interfaces. Other than that it's difficult to say without knowing the purpose of the private network and how these servers will be used.
In answer to your second question - if you want the same vlan to exist on all 4 switches so you can migrate servers one at a time between the switches, then you need the same vlan id on all switches.
hi! In the 3rd paragraph, when you said disable routing, does that mean it's not advisable to add a route on the server itself for internet access? what would be the consequences if it's enabled?
As for the 4th paragraph, what i'm trying to do is migrate the router vlan interfaces over in phases. e.g. in the existing old pair of core switches i have vlan10 on the router/sup engine - can i create vlan10 on the new pair of cores to run concurrently on both pairs? (both old and new cores will run concurrently until the migration is complete, then the old cores will be retired) or must i create a different vlan id with a new range of IPs and change the existing ports on the new edge switches to the new vlan id? 2 of the same vlan id can't co-exist, right? that means at the end of the day i will have different vlan ids for all my vlan interfaces, right? :)
"In the 3rd paragraph, when you said disable routing, does that mean it's not advisable to add a route in the server itself for internet access? what would be the consequences if it's enabled?"
No, it's not the same. You can add routes to a server and it will use them when it decides how to get to a destination, but this is not the same as the server routing BETWEEN its 2 interfaces. The consequence could be that a person could get from the corporate network into your private network, but again, without knowing your full topology etc. it's difficult to say. A good general rule is to disable IP routing between interfaces on the same server unless you have a very good reason not to.
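As a concrete illustration (a sketch only - the exact mechanism depends on the server OS), disabling forwarding between the server's NICs typically means something like:

```
# Linux: make sure the kernel is not forwarding packets between NICs
sysctl -w net.ipv4.ip_forward=0

# Windows: IP routing is off by default; confirm the registry value
# HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter
# is set to 0
```

With forwarding off, the server can still use its own static routes to reach destinations, but it will not relay traffic from one network to the other.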
A lot depends on whether you are planning to change the IP addressing. If you want to keep the same addressing:
1) Connect all 4 switches together using L2 links.
2) Ensure that the new switches get the same vlan database as the old switches - either using VTP server/client setup or VTP transparent.
3) Create new L3 SVIs for the existing vlans on the new switches and add them to the HSRP groups running on the old switches. Note i'm assuming you are running HSRP on the old switches.
4) Migrate your machines across.
5) Once all machines have been migrated, shut down the L3 SVIs on the old switches and the new ones will take over.
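Steps 1) to 3) above could look roughly like this in IOS on one of the new switches (the port numbers, vlan 10, addressing and HSRP group below are made-up examples, not your actual values):

```
! 1) L2 trunk towards an old core switch
interface GigabitEthernet1/1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! 2) VTP transparent - vlans are defined locally on each switch
vtp mode transparent
vlan 10
!
! 3) New SVI joins the HSRP group the old switches already run;
!    a lower priority keeps the old switch active until cutover
interface Vlan10
 ip address 10.0.10.3 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 priority 90
```

At step 5) you would shut the old SVIs (or raise the new switches' HSRP priority) so the virtual address fails over to the new pair.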
Item 1) L2 link - does that mean creating a trunk?
Item 2) without VTP, is a vlan created on a switch only available locally on that particular switch, i.e. other switches will not be aware of vlans created on other switches?
What would be the difference if i'm using a different set of IPs?
1) If you have multiple vlans, which you do, then yes, these need to be L2 trunks.
2) Well, yes and no. If the switches are connected together with an L2 trunk and you create the same vlan on both switches, then clients within that vlan will be able to talk to each other.
If you were using a different set of IPs then you could connect the new switches to the old switches with L3 routed links and create new vlans and L3 SVIs. Then you could move servers/clients across and they could route to the old vlans.
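For the routed-link option, a sketch (the interface and /30 addressing are invented for illustration):

```
! Routed point-to-point link from a new switch towards an old one
interface GigabitEthernet1/2
 no switchport
 ip address 192.168.255.1 255.255.255.252
```

You would then advertise the old and new subnets over this link with whatever routing protocol you run between the pairs (or static routes).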
This is a lot more work, though. Readdressing clients is usually quite straightforward with DHCP, but servers are a completely different matter. If you are lucky, all your server apps will use DNS to resolve names to IP addresses, but i have come across apps with hardcoded IP addresses - change the address and the app stops working. Plus you may have ACLs/firewalls/NAT etc. to update.
Unless you are going to get a real benefit from readdressing, e.g. a more logical IP addressing schema, i would stick with the same addressing.