This may belong in the security forums, but I could not find a suitable category for my question. Anyway - I am implementing subnet-directed WoL in which the sending of packets will be controlled by a computer running a special application. The PC will only be allowed to broadcast on its own subnet. I am wondering whether this is as much of a security concern as the attack vectors that are introduced by a cross-subnet directed-broadcast implementation?
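For context, the sending side described above can be sketched in a few lines of Python. This is a minimal sketch, not your application's actual code; the function names are made up, and UDP port 9 (discard) is just the conventional WoL choice. The key point is that the socket only targets the local subnet's broadcast address.

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WoL magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast_addr: str, port: int = 9) -> None:
    """Broadcast the magic packet on the given subnet broadcast address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast_addr, port))

# Example: wake a host on the *local* subnet only, e.g.
# send_wol("00:11:22:33:44:55", "192.168.1.255")
```

Restricting `broadcast_addr` to the PC's own subnet broadcast address is what keeps this a local-broadcast design rather than a cross-subnet directed broadcast.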
Perhaps I am not fully understanding your explanation, but I think you are saying that you will have a PC sending broadcasts for WOL to its own local subnet. The level of concern for that is much lower than the concern if you were accepting subnet broadcasts from remote sources.
You are understanding correctly. But why would there be a lower vulnerability risk if the sources are internal as opposed to remote? I can see an attacker saying, "hey look, I found another attack vector to exploit in the form of WoL on this PC to send a flood of requests and DDoS the network." So how are incoming streams from remote sources any more dangerous than from internal sources, unless it's a matter of taking the easier route?
I had several things in mind for my answer.
First, I am assuming that you have some level of control over what gets connected to your local LAN but no control over what is on the Internet. So I would assume some level of trust for devices on the local LAN.
Second, I believe that if you are monitoring the network and discover that someone connected to the local LAN is sending problematic/bad traffic, you have some ability to disconnect them and perhaps to remediate the problem.
Third, one objective of attackers is frequently to send so much traffic that they consume bandwidth and impact access for your production applications. If the attack comes from a device on the local LAN, there is a lot more bandwidth on the LAN than there is on the WAN, so an attack from the LAN does less damage than an attack that comes over your WAN links, where available bandwidth is likely to be more of an issue.
Valid points, although I was actually thinking the exact opposite for #3. An external attacker has less bandwidth to play with, so pumping out continuous IP streams would yield a lower-throughput attack. From the outside in, say you have an attacker with an upload speed of 2 Mb/s who gets 200 hosts to respond: that is roughly 400 Mb/s headed towards the target. On a gigabit LAN, with those same 200 hosts responding, that amounts to roughly 200 Gb/s traveling towards the target. To me the inside-LAN attack would cause more damage, faster, assuming it's not detected and remediated promptly. If I am wrong, what am I missing?
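Writing out the back-of-the-envelope math makes the comparison explicit. The host count and link speeds are the hypothetical figures from the post above, and the model assumes each host reflects roughly as much traffic as it receives:

```python
# Hypothetical figures from the discussion: every broadcast packet the
# attacker sends elicits a comparable response from each host on the segment.
responding_hosts = 200

# External attacker limited by a 2 Mb/s upload link:
external_attack_mbps = 2 * responding_hosts      # ~400 Mb/s toward the target

# Internal attacker on a 1 Gb/s (1000 Mb/s) LAN port:
internal_attack_mbps = 1000 * responding_hosts   # ~200,000 Mb/s = 200 Gb/s

print(external_attack_mbps, internal_attack_mbps)
```

Under these assumptions the amplification factor is the same (×200) in both cases; what differs is the attacker's input rate, which is why the internal figure comes out 500 times larger.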
Thinking through your follow-up question leads me to realize that I failed to discuss what is probably the most important reason why the level of concern is lower if the WOL is a local broadcast.
As you suggest, the danger is that broadcast packets can be used in an amplification attack, in which a packet from the attacker results in multiple packets being sent to the target. The reality is that this will always work if the attack is a local broadcast - because local broadcast always works. But a remote broadcast (subnet-directed broadcast) only works if you allow it in through your interface. By default, Cisco does not permit directed broadcast from outside. So you are stuck with local broadcasts working - you cannot do anything about it. But you lower your concern about attacks if you eliminate the possibility of remote directed broadcast.
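For reference, on Cisco IOS this is controlled per interface. Forwarding of directed broadcasts has been disabled by default since IOS 12.0 (per RFC 2644), so the line below simply makes the intent explicit on an inbound interface (the interface name here is just an example):

```
interface GigabitEthernet0/1
 no ip directed-broadcast
```

With this in place, a packet addressed to a subnet's broadcast address arriving from outside is dropped rather than exploded into frames for every host on that segment.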
You said in your previous reply that it was a bandwidth issue that made an internal-source attack less of a concern. But then in your last post you said the level of concern was lowered by the fact that eliminating the possibility of remote directed broadcast (from external sources) lowers the overall concern.
But if the source is internal, how does limiting remote directed broadcasts from external sources have any impact on an internal source?
If internal *always* works but external can be prevented or limited, then to consider internal the lower vulnerability risk, it must come down to bandwidth?
That brings me back to my post about amplification from internal sources. If you have more bandwidth to play with and you can't stop it, then it stands to reason that this would be a higher concern - not so much for the network, since you do have more bandwidth, but much more so if the goal is to take down a server.
I'm sorry for being so stubborn here; maybe you are placing this in the lower-concern basket because of the tighter security controls and the trust factor you mentioned before, which are typical when a security best-practices model is implemented.
So let me try it this way. If I understand your question correctly, you want to compare the level of security concern for two environments:
1) Local LAN broadcasts are enabled and remote subnet directed broadcasts are disabled.
2) Local LAN broadcasts are enabled and remote subnet directed broadcasts are enabled.
In 1) you have minimized as much as possible the risk to your network.
In 2) you have additional risk to your network. So the degree of concern for 2) is greater and for 1) is less.