CiscoLive! 2011: What I Did On My Summer Vacation


Jul 26, 2011 10:03 PM

On July 6th, I boarded a plane from Raleigh-Durham International Airport to McCarran International Airport in Las Vegas to participate in my twelfth year of CiscoLive! / Networkers.  Just like last year, I would be helping to build and run the network at CiscoLive!.  Unlike last year, I would also be delivering five presentations, hosting a table topics session, participating in Meet the Engineer, and proctoring a Walk-in Self-Paced Lab.  Yes, I am insane.  Yet I was determined to make this the best year thus far...

...And it was!  Not only did I nail my speaking sessions, but the new session my colleague Jason Davis and I gave on the Network Operations Center (NOC) and network at CiscoLive! absolutely killed.  The whole NOC team came together to present what we had done to deliver a fully functional network in essentially three days.  I'd like to spend the bulk of this blog post on the network we built.

First, a bit about the experience of building the network.  As I said, I arrived Wednesday, July 6.  From the moment I landed, I was working with a great team of people to build the CiscoLive! network.  Let me say that by Friday I was so tired I was seeing grass grow on the walls.  I hadn't been that tired since college, when I would stay up all night hacking away in the computer labs and then try to make calculus class (yes, I'm a geek).  But it was totally worth it.  We built a 10 Gigabit campus network that supported over 16,000 users.  Here's what that network looked like:

[Network diagram]

The Edge

The provider edge of the network was generously lent to us by Interop and Qwest Communications.  We had a dedicated 1 Gbps connection through Sunnyvale with a backup 1 Gbps connection through a colo in Denver.  This was the first year we didn't use NAT at CiscoLive!.  Instead, we used part of Interop's 45.0.0.0 class A.  We carved out 45.0.0.0/15 for our IPv4 use and 2620:144::/32 for IPv6.  Not only did we have routable networks, we were also authoritative for our own DNS sub-domain: lasvegas.ciscolive2011.com.  Every host on the CiscoLive! network had fully internet-resolvable A, AAAA, and PTR records.  These records were served by Cisco Network Registrar (CNR) 7.2 running on a Red Hat virtual machine.
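The edge and core configurations aren't in the attachments (only access and distribution are), but just to give a flavor of a dual-homed IPv4/IPv6 edge like the one described above, a minimal sketch might look like the following.  The AS numbers, neighbor addresses, and policy are placeholders for illustration only, not what we actually ran:

! Anchor routes so the aggregates can be originated (addresses from the show blocks)
ip route 45.0.0.0 255.254.0.0 Null0
ipv6 route 2620:144::/32 Null0
!
router bgp 64512
 ! AS numbers and neighbor addresses below are placeholders, not the real ones
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.1 description Primary 1 Gbps upstream (Sunnyvale)
 neighbor 192.0.2.5 remote-as 65001
 neighbor 192.0.2.5 description Backup 1 Gbps upstream (Denver colo)
 neighbor 2001:DB8::1 remote-as 65001
 !
 address-family ipv4
  network 45.0.0.0 mask 255.254.0.0
  neighbor 192.0.2.1 activate
  neighbor 192.0.2.5 activate
 exit-address-family
 !
 address-family ipv6
  network 2620:144::/32
  neighbor 2001:DB8::1 activate
 exit-address-family

A real edge would also de-preference the backup path (local-preference or MED) and filter what it accepts and announces, but that's beyond a sketch.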

The Core

The core of the network was composed of those two 6513E chassis in the center of the diagram.  These two switches were equipped with the brand-new Supervisor 2T (two-terabit) modules in a Virtual Switching System (VSS) configuration.  These switches held the full IPv4 and IPv6 routing tables.  In case you're wondering, the IPv4 routing table size at the time of the show was 350,000 prefixes, and the IPv6 table held 6,500 prefixes.  Just a bit of trivia: prior to World IPv6 Day on June 8th of this year, there were only about 5,000 IPv6 prefixes.  Good to know World IPv6 Day made a difference.
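For anyone who hasn't built a VSS pair before, pairing two chassis into one logical switch boils down to a virtual switch domain plus a Virtual Switch Link (VSL).  The domain number, switch number, and ports below are illustrative, not our actual core config:

! Run on the first chassis before converting to virtual mode
switch virtual domain 100
 switch 1
 switch 1 priority 110
!
! Virtual Switch Link (VSL) carrying control traffic between the two chassis
interface Port-channel1
 switch virtual link 1
 no shutdown
!
interface range TenGigabitEthernet1/1 - 2
 channel-group 1 mode on
 no shutdown

The peer chassis gets switch number 2 and its own VSL port-channel; after running switch convert mode virtual on both, they reload and come up as a single logical switch with one control plane.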

The Distribution

From the core we had 40 Gbps channel connections to two 6509E chassis in the Mandalay Bay basement.  These 6509s used Supervisor 720s in a VSS High Availability (HA) configuration.  These switches fed the access layer of the network.
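With VSS on both ends, each 40 Gbps connection is simply a four-member 10 Gigabit Multichassis EtherChannel (MEC) with its links split across both chassis of the pair.  As a hypothetical distribution-side example (interface and channel numbers are made up, and it's shown as a trunk, which may not match what we actually ran):

! Distribution-side MEC toward the 6513E core VSS pair
interface Port-channel10
 description 4 x 10 Gbps uplink to the core
 switchport
 switchport mode trunk
!
! Two members on each physical chassis of the VSS pair
interface range TenGigabitEthernet1/5/1 - 2 , TenGigabitEthernet2/5/1 - 2
 switchport
 switchport mode trunk
 channel-group 10 mode active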

The Access

The access layer was composed of Catalyst 3560E switches that connected back to the distribution over a 20 Gbps channel.  For some weird reason, I got the job of building the access layer configuration.  It felt good to put that CCIE of mine to a bit of use.  Based on lessons from previous shows, my goal was to put a few leading practices into operation.  This meant the switches were configured with DHCP snooping (to prevent rogue DHCP servers), IP SLA responder, error-disable auto-recovery (3 minutes), and access lists to protect the VTYs and SNMP.
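The attached access-layer configs are the real thing; purely to give a flavor of the practices listed above, the relevant pieces look something like this (the VLANs, ACL number, addresses, and uplink interface are placeholders):

! Rogue DHCP server protection: only the distribution uplink is trusted
ip dhcp snooping
ip dhcp snooping vlan 10-20
interface TenGigabitEthernet0/1
 description Uplink to distribution
 ip dhcp snooping trust
!
! Responder for IP SLA probes sourced from the NOC
ip sla responder
!
! Recover error-disabled ports automatically after 3 minutes
errdisable recovery cause all
errdisable recovery interval 180
!
! Restrict VTY access to management stations only
access-list 10 remark NOC management stations
access-list 10 permit 45.0.10.0 0.0.0.255
line vty 0 15
 access-class 10 in
 transport input ssh

In our case the DHCP snooping trust really belongs on the 20 Gbps uplink channel; it's shown on a single physical port here just to keep the sketch short.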

Jason and I also drafted the manageability configuration for all devices in the network.  This year we were successful in deploying AAA to all network devices.  We installed Cisco Secure Access Control System (ACS) 5.2 to provide our authentication and authorization services.  We also deployed SNMPv3 authPriv everywhere!  We wanted to be secure above all else, especially since we were going to be on the big bad internet without any NAT layer to protect us.
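The exact manageability config isn't reproduced in this post, but a minimal sketch in that direction looks something like the following, assuming TACACS+ toward ACS (the server address, key, and group/user names are made up):

! Device administration authenticated and authorized against ACS
aaa new-model
tacacs-server host 45.0.10.5 key 0 SharedSecret
aaa authentication login default group tacacs+ local
aaa authorization exec default group tacacs+ local
!
! SNMPv3 authPriv: authenticated (SHA) and encrypted (AES) polling only
snmp-server group NOC-RO v3 priv
snmp-server user noc-poller NOC-RO v3 auth sha AuthPass123 priv aes 128 PrivPass123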

The NOC

The NOC itself was positioned in the center of the main hallway leading into the World of Solutions.  Jason remarked that it looked like a Guild Navigator's tank from Dune.  I'll let you be the judge.

[Photo of the NOC]

The NOC connected to the rest of the network via a Catalyst 4507R+E located in the NOC itself.  The NOC was also home to our NMS data center, which consisted of two Nexus 5010 switches, four UCS C200 M2 servers running VMware ESXi 4.1 Update 1, and one top-of-rack 3560 switch.  Our NMS services included CiscoWorks LAN Management Solution (LMS) 4.0.1, Cisco Prime LMS 4.1 (pre-release), the aforementioned ACS 5.2, the aforementioned CNR 7.2, and a FreeBSD virtual machine running some open source tools.  Our management tool of choice was LMS.  We used it to provision and upgrade the devices before we arrived in Las Vegas; once on site, we used it to monitor the network for faults, measure availability, and make access-port changes.

Also found in the NOC was the wireless hub.  We had eight 5508 Wireless LAN Controllers servicing all of the wireless users.  Each floor in the convention center (four in all) had its own pair of controllers (one primary and one backup).  While we could have serviced the entire show with a single pair of controllers, separating users on a per-floor basis gave us more flexibility.
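Steering users to a per-floor controller pair is just a matter of what each access point is told about its primary and backup controllers.  On the 5508s (AireOS CLI) that is done per AP, roughly like this, with made-up controller and AP names:

(Cisco Controller) > config ap primary-base WLC-FLOOR2-A AP-floor2-101
(Cisco Controller) > config ap secondary-base WLC-FLOOR2-B AP-floor2-101

Every AP on a floor gets the same pair, so that floor's clients terminate on that floor's controllers.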

The NOC also serviced the thin clients used for registration and Internet kiosks, as well as the physical security analytics servers.  More details on these elements can be found in the attached NOC presentation.

The Statistics

This year's show was a record breaker, to be sure.  We had about 16,000 attendees on site (which means a lot more wireless devices).  I'm sure you're all dying to know the stats from our week-long net-fest.

Statistic                                      Value
Total number of unique DHCP leases             28,298
Highest number of active MACs (wired)          1,028
Highest daily number of active DHCP leases     16,000
Managed routers and switches                   170
Wireless access points                         190
Average number of clients per AP               290
Top URL                                        Facebook (248,485 hits)
Weirdest top-50 URL                            http://www.bigbadtoystore.com/bbts/product.aspx?product=HAS20743&mode=retail&picture=in
And the coup de grâce stat: between Friday, July 8 and Thursday, July 14, the network switched 13.2 TB (that's terabytes) of traffic!

Until Next Time

While this show was a lot of fun, we're looking forward to the next one.  With any luck, I'll see you at CiscoLive! Europe 2012 in London!  In the meantime, attached you'll find sample configs for our access and distribution layer switches as well as the PDF version of the presentation we gave on the network at CiscoLive!.


Comments

ROBERTO TACCON Mon, 07/30/2012 - 11:52

Thanks for sharing the configurations.

Is it also possible to share the configurations for the core and the internet BGP edge?

Best Regards

Rob Huffman Wed, 08/01/2012 - 06:14

Hi Joe.....you must be a tad crazy to take on all this! Congrats on a job well done my friend!

Cheers!

Rob
