CISCO 3030 CONCENTRATOR STOPS SENDING TRAFFIC TO 3002 HARDWARE CLIENT
Cisco 3002 Hardware client connects a tunnel to 3030 concentrator in Network Extension mode.
Machines at the central site can reach resources behind the 3002 (via ping, etc.), but only after data has first been sent from behind the 3002.
If the tunnel is left idle for a few hours, the Cisco 3030 still reports the tunnel as up and active; however, at that point no data can be sent from the central site (we are unable to ping devices behind the 3002) until data is again sent from behind the 3002.
It seems that the 3002 loses its connection to the 3030 and then immediately rebuilds a new one. I have all timeouts set to "0" within the group settings for the 3002, so the 3002 should keep all its connections up indefinitely. However, this does not happen.
The logs on the 3030 indicate that the disconnection takes place with a reason "User required".
I would like to have the 3002 maintain its tunnel connection indefinitely.
Am I missing something here?
Is it not possible to do this, or is this a bug in Cisco's code?
Re: CISCO 3030 CONCENTRATOR STOPS SENDING TRAFFIC TO 3002 HARDWARE CLIENT
Your real problem is that the tunnel is going down (you must find out why); the 3002 in NEM mode then brings it back up automatically. Once the tunnel is up, you must send data from the 3002 side before traffic from the 3000 side can flow the other way.
If the tunnel doesn't go down, the connections should stay up indefinitely when the timeouts are set to 0.
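Until the root cause is found, a common stopgap for the "traffic must start from the 3002 side" behavior is to generate periodic traffic from a host behind the 3002. A minimal sketch follows; the central-site address and port are assumptions for illustration, not values from this thread.

```python
import socket

# Hypothetical keepalive for a 3002 NEM setup: run this periodically (e.g.
# from cron) on any host behind the 3002 so traffic always originates from
# the remote side of the tunnel.
CENTRAL_HOST = "10.0.0.1"   # assumed central-site address (illustration only)
KEEPALIVE_PORT = 9          # UDP discard port (assumption)

def send_keepalive(host: str = CENTRAL_HOST, port: int = KEEPALIVE_PORT) -> int:
    """Send one small UDP datagram toward the central site; return bytes sent."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        return s.sendto(b"keepalive", (host, port))
```

Scheduling this every few minutes keeps traffic flowing from the 3002 side; it is a workaround, not a fix for whatever is tearing the tunnel down.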
Yes, all the timeouts are set to 0/indefinite; however, if the tunnel goes down, that means nothing. We need to find out why the tunnel is being torn down.
On the VPN 3000, set the AUTH, IKE, and IKEDBG event classes to at least severity level 9.
Then let the tunnel come up. The next time it goes down, save the logs and please post them.
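The saved event log can be large, so it may help to trim it to candidate teardown entries before posting. A rough sketch; the keyword list is an assumption about what the relevant entries contain, not an exact VPN 3000 log format.

```python
# Hedged sketch: pull likely tunnel-teardown lines out of a saved event log.
KEYWORDS = ("IKE", "AUTH", "disconnect", "delete", "User")

def filter_log(lines):
    """Return only lines mentioning any keyword (case-insensitive)."""
    return [ln for ln in lines
            if any(k.lower() in ln.lower() for k in KEYWORDS)]
```

Posting the full log is still preferable if size allows, since the entries just before the teardown often matter most.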