Saturday, 14 June 2008

Fault Tolerant DHCP

This week I heard some stories about DHCP network faults that could have been avoided by better design.

The most common point of failure is that networks have just one DHCP server, or that you need to manually bring a second DHCP server online to restore service back to normal.

Branch offices are also hard to cater for: you want to provide the best fault tolerance you can, but you don’t have the same hardware budget there that you have at headquarters.

I’ve seen people try to get around this by providing DHCP only from head office, but that still leaves you with a single point of failure on the connection between the sites, and as we know, when the communications link does fail it often takes the provider hours or even days to get it working again.

The best way around this I have found so far is to use a hybrid of local and remote DHCP servers, so that if the link goes down you have a local resource to provide leases, and if the local resource fails you still have the remote one.
By providing DHCP from more than one location you get around the single point of failure, but it does mean the scope needs to be larger. Microsoft best practice uses an 80/20 rule, meaning 20% of the addresses are served by the standby server, and you need to remove the excluded addresses manually when the primary DHCP server fails, which takes time. If you are using Cisco routers and switches you can also provide a hot standby option with the right hardware, as sketched below.
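To make the 80/20 rule concrete, here is a minimal sketch in Python, assuming a made-up branch scope of 192.168.10.0/24 handing out .10 through .249. It just works out which portion of the pool each server would serve; the actual exclusions would still be configured on the DHCP servers themselves.

```python
import ipaddress

# Hypothetical scope used only for illustration: 192.168.10.0/24,
# handing out .10 through .249 (240 usable addresses).
scope = ipaddress.ip_network("192.168.10.0/24")
pool = list(scope.hosts())[9:249]  # .10 .. .249

split = int(len(pool) * 0.8)  # 80/20 rule: primary serves 80% of the pool

primary_range = (pool[0], pool[split - 1])   # addresses served by the primary
standby_range = (pool[split], pool[-1])      # addresses served by the standby

# Each server excludes the other server's portion of the pool, so the two
# never hand out overlapping leases.
print("Primary serves:", primary_range[0], "-", primary_range[1])
print("Standby serves:", standby_range[0], "-", standby_range[1])
```

With a 240-address pool that comes out to 192 addresses on the primary and 48 on the standby, which shows why the standby's share can run dry quickly if the primary is down for long.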

But most likely you’re using Microsoft Windows, so your options are a little more limited. However, if you run a larger subnet on network address translated (NAT) ranges you can get around this; you may be wasting half of your IP range this way, but you don’t have to wait for someone to change over the scope options to get clients connected.
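As a rough sketch of what "wasting half the range" means, assume a NATed branch with around 200 clients: instead of a /24, you carve the scope out of a /23 so each server owns a full half that can cover every client on its own. The subnet and client count below are assumptions for illustration only.

```python
import ipaddress

# Hypothetical NATed branch subnet, chosen only for this example.
subnet = ipaddress.ip_network("10.20.0.0/23")   # 510 usable addresses
clients = 200                                    # assumed number of clients

half_a, half_b = subnet.subnets(new_prefix=24)   # split into two /24 halves

# Give each DHCP server one half; either half alone can cover all clients,
# so no manual scope change is needed if one server (or the WAN link) fails.
print("Server A pool:", half_a, "usable:", half_a.num_addresses - 2)
print("Server B pool:", half_b, "usable:", half_b.num_addresses - 2)
print("Either half covers all", clients, "clients:",
      (half_a.num_addresses - 2) >= clients)
```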



This diagram shows the most fault-tolerant model currently available: DHCP split between local and remote servers, with clustered resources at critical sites such as head office.
