DHCP Resiliency in Windows Server 2012

I take pleasure in the little things in life: reading a good book, going to see a cool act that nobody else cares about (Eli “Paperboy” Reed, anyone?). So while there is lots and lots of really cool new stuff in Windows Server 2012 (Fibre Channel in Hyper-V, Virtual Networks, PowerShell improvements . . . and the list goes on), I want to have a look at one of the little things. Nothing too fancy or really big and noticeable, but something I think many people will find useful and easy to implement: DHCP Scope Failover.

In the past, if an organization wanted to reduce their dependency on a single DHCP server, they needed to implement a DHCP Split Scope design or build Failover Clusters. That works, but in my experience, even with the Split Scope Wizard, it was a bit of a pain to design and implement. And with a split scope, the reality is that you’re just splitting your overall pool between multiple servers. If a server goes down, the part of the pool it was responsible for becomes unavailable. If the server stays down a long time, that can cause you problems and may require you to build a new server with a restored copy of the DHCP database from the original server.

Windows Server 2012 provides us with a much simpler, more elegant solution: DHCP Failover. At its simplest you need two Windows Server 2012 servers, configured with the DHCP Server role, as indicated in Figure 1.

Once you have two servers, choose one of them and create and configure a scope, just as you normally would. Configure the options you require, the lease expiry, and so on. Feel free to use whatever management tools you prefer for this. One thing to note: if you are planning on configuring a scope for failover, you will want to make sure the options are scope-based, not server-based (unless both DHCP servers are going to have identical server options, but I like to keep my scopes self-contained). In Figure 2 you can see a private network scope with fairly common scope options configured.
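
If you would rather script that part, the DhcpServer PowerShell module included with Windows Server 2012 can create and populate the scope as well. Here's a minimal sketch; the scope name, address range, lease duration, and option values are made up for illustration, so substitute your own.

```powershell
# Create a scope on the first DHCP server (range and lease duration are example values)
Add-DhcpServerv4Scope -ComputerName JFIN-SRV -Name "Private LAN" `
    -StartRange 192.168.1.50 -EndRange 192.168.1.200 `
    -SubnetMask 255.255.255.0 -LeaseDuration 8.00:00:00

# Set scope-based (not server-based) options so the scope stays self-contained
Set-DhcpServerv4OptionValue -ComputerName JFIN-SRV -ScopeId 192.168.1.0 `
    -Router 192.168.1.1 -DnsServer 192.168.1.10 -DnsDomain corp.example.com
```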

Now that I’ve got a scope configured on one of my DHCP servers (JFIN-SRV), I can configure it for failover. You can configure failover either for a balanced workload (an active-active relationship between the two servers) or for a hot failover (active-passive). Using DHCP Manager, either of these options is fairly straightforward to configure. In DHCP Manager, connected to the DHCP server currently hosting the scope, right-click the scope and choose Configure Failover…. The first screen confirms which scope or scopes you are configuring failover for. Choose accordingly and click Next. The next screen asks you to choose or add the partner server. If you already have a scope set up for failover, that server should be available in the drop-down. If not, click Add Server and enter the name of the other DHCP server. Once you have the failover server identified, click Next.

The “Create a new failover relationship” page is the most important page in the wizard. It is here that you get to configure the parameters of the relationship between the two DHCP servers. Choose a Relationship Name that makes sense for your ongoing management.

Figure 3: DHCP Failover Hot Standby

Figure 4: DHCP Failover Load Balance

Those of you with a keen eye will likely have noticed that there are two versions of this page. Figure 3, “Hot standby”, is the active-passive approach, and Figure 4, “Load balance”, is the active-active. We’ll discuss the Maximum Client Lead Time and State Switchover Interval settings a little later; first we need to go over the differences between the two modes. If you choose “Hot standby”, you need to choose whether the server you currently have DHCP Manager connected to will be the Active or the Standby server. Additionally, you need to configure what percentage of the scope will be reserved for the Standby server. You’ll notice that it defaults to only 5%. While reserving addresses has the practical effect of reducing the number that can be leased, 5% is a pretty small number.

If you choose “Load balance”, the two servers share the workload in the percentages you choose. Both servers know about the entire scope (a bit different from Hot standby mode) and use an internal algorithm based on the MAC address of the requestor to determine which server will handle the request and with what address. If you change the percentages, the algorithm changes accordingly; it’s pretty hands-off. To secure the failover messages between the servers, set a Shared Secret.

That leaves two settings that need to be configured. These settings control the speed with which full failover occurs. A scope (or server) configured for DHCP Failover has three main states: Normal, Communication Interrupted, and Partner Down.

Obviously, “Normal” is the state you want to see most of the time. If a server loses communication with its partner, the state switches to “Communication Interrupted”. During this state you can manually trigger a failover to the remaining server if you know the failed server is not coming back up soon, and the remaining server will take over responsibility for the entire scope. The remaining server waits for the Maximum Client Lead Time before taking control of the entire scope. If you want the remaining server to switch automatically from “Communication Interrupted” to “Partner Down” (thus starting the Maximum Client Lead Time interval), set the State Switchover Interval value to determine how long it will stay in “Communication Interrupted” before switching over.

You will want to consider the impact these two properties may have on the load balance or standby reservation percentages. Especially in a Hot standby scenario, if you set a long Maximum Client Lead Time and State Switchover Interval, then you might think about increasing the percentage held on the Standby to better service requests until full failover occurs. Having said that, you will want to have some idea how many IP addresses are normally refreshed within whatever timeframes you configure, and make sure that whatever percentages you set will support that.
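
If you would rather create the relationship from PowerShell, Add-DhcpServerv4Failover covers both modes. The sketch below uses the hypothetical partner name JFIN-SRV2, an example scope ID, and example values for the percentages, Maximum Client Lead Time, and State Switchover Interval; a scope can only belong to one failover relationship, so pick whichever call matches the mode you want.

```powershell
# Load balance (active-active): both servers share the example scope 50/50
Add-DhcpServerv4Failover -ComputerName JFIN-SRV -PartnerServer JFIN-SRV2 `
    -Name "JFIN-SRV-Failover" -ScopeId 192.168.1.0 `
    -LoadBalancePercent 50 -MaxClientLeadTime 01:00:00 `
    -AutoStateTransition $true -StateSwitchoverInterval 00:30:00 `
    -SharedSecret "UseAStrongSecretHere"

# Hot standby (active-passive): JFIN-SRV stays active, 10% of the scope reserved for the standby
Add-DhcpServerv4Failover -ComputerName JFIN-SRV -PartnerServer JFIN-SRV2 `
    -Name "JFIN-SRV-Failover" -ScopeId 192.168.1.0 `
    -ServerRole Active -ReservePercent 10 -MaxClientLeadTime 01:00:00 `
    -AutoStateTransition $true -StateSwitchoverInterval 00:30:00 `
    -SharedSecret "UseAStrongSecretHere"
```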

Once you have finished on that page of the wizard, click Next and then click Finish to complete the configuration. When you are done, you can use DHCP Manager to see that the scope now has a Status of “Active” and is configured for a failover relationship, as shown in Figure 6. You can manage the failover by right-clicking the scope and choosing the appropriate replication options to move all active leases from one member of the partnership to the other (perhaps for maintenance of one server); there is also an option to deconfigure failover.
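
Those day-to-day tasks have PowerShell equivalents as well. A rough sketch, again assuming the relationship name used in the earlier example:

```powershell
# Review the failover relationship and its current state
Get-DhcpServerv4Failover -ComputerName JFIN-SRV

# Force replication of the scope configuration to the partner (e.g. before maintenance)
Invoke-DhcpServerv4FailoverReplication -ComputerName JFIN-SRV -Name "JFIN-SRV-Failover"

# Deconfigure failover when you no longer need the relationship
Remove-DhcpServerv4Failover -ComputerName JFIN-SRV -Name "JFIN-SRV-Failover"
```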

Pretty easy to configure, easy to manage. Pleasure in the little things.

Managing PowerShell 3.0 Updateable Help

***Author’s Note:  I was in a rush and didn’t properly preview the graphics for this post in my browser. If you are finding them hard to read, try using your browser’s zoom-in capability. I found that if I zoomed in a bit, the text in the graphics became legible.  Sorry. :(***

Updateable Help is one of the new features of PowerShell 3.0. The biggest benefit of Updateable Help is the improved accuracy of the inline help, especially across patches, service packs, and module additions. While I have to admit that the inconsistencies that would creep in were not a major issue for me personally, I can see how they would create frustration. So we’re going to look at some of the basics of using and managing help for PowerShell 3.0.

Using Help

First things first: I’m going to assume that if you’re reading this blog, you don’t need me to tell you how to get help for PowerShell cmdlets. But you will notice that if you have a computer that has not gotten updated help (and out of the box, it won’t have), a normal Get-Help command will return only the basic syntax and list of switches.

You’ll also notice that in the Remarks section it tells you that the full help has not been downloaded. When you add the –Detailed switch to your Get-Help command, the results are not much different. Each switch gets its own line, but the detail you are likely looking for is not there. Again, the Remarks section is telling you to get the updated help. Obviously, getting the updated help is a valid option, but it’s not the only option.
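
For example (using Get-Service as a stand-in for whatever cmdlet you are actually interested in):

```powershell
# Before help has been updated, both of these return little more than
# the basic syntax, plus a Remarks section pointing you at Update-Help
Get-Help Get-Service
Get-Help Get-Service -Detailed
```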

There is a new switch for Get-Help: -Online. Assuming that the computer from which you are running PowerShell has an internet connection, this allows you to get the detailed help you need without forcing a full update of the help files. It will open a web browser session and take you to the TechNet Library page for the cmdlet you indicated.
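
For instance:

```powershell
# Opens the TechNet Library page for Get-Service in your default browser
Get-Help Get-Service -Online
```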

Getting the Help onto Your Machine

That might be fine for module-specific cmdlets that I don’t use often, or while tweaking a script on a server (as opposed to writing a script in whatever environment you normally use), where you don’t really need the help at hand all the time. However, on a machine where you DO want the help more readily available, you really want the help files stored locally. Microsoft has provided Update-Help for this.

For the help to update properly, I’ve found that I need to make sure I am running the PowerShell console elevated under User Account Control (right-click, Run as administrator).

Once I’ve done that, I can keep it really simple with the cmdlet Update-Help.
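
In its simplest form, that’s the entire command:

```powershell
# Run from an elevated PowerShell console with internet access;
# downloads the newest help files for every module installed on the machine
Update-Help
```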

If you look at my results you’ll notice a couple of things.

  1. I didn’t run this from an elevated console, so some of the help files didn’t update. Easily fixed: open a PowerShell console in the correct context and that problem goes away.
  2. There were a couple of other errors about DTD files and incorrect URIs. Those have to do with the help files not being available properly from the Microsoft servers. Instances of those errors should drop over time. If you want a detailed explanation of what exactly is happening, read this: http://bit.ly/SoMnIB . It explains it all quite well. So we’ll ignore those for the moment.

There are additional switches for the Update-Help cmdlet that allow you more control over what is going on. For instance, the –Module switch allows you to update the help only for a specific module. Take the time to explore those switches and see which ones may be of use to you.
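
For example, to refresh only the help for the Hyper-V module (any installed module name works here):

```powershell
# Update help for a single module instead of everything installed
Update-Help -Module Hyper-V
```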

How Do I Update Help If My Server Doesn’t Have Access to the Internet?

That was the first question that popped into my head when I first came across this idea of Updateable Help. The solution is reasonably straightforward and done in four steps.

  1. From a machine with internet access (and the appropriate modules you want the help for) run Update-Help.
  2. From the same machine run the Save-Help cmdlet. Save-Help will copy the help files to a directory of your choosing. Much like Update-Help, you can use a –Module switch to only save the help files for a specific module. The main parameter you need to provide is the parent directory where you want the files saved. In the example below I only saved the help for the Hyper-V module.
  3. Somehow transport the directory with all the help files to the computer(s) that don’t have internet access. Network share, portable drive of some sort, whatever works for you really.
  4. On the isolated machine, run Update-Help –SourcePath <path to folder that holds the help files> -Module <Name(s) of modules that have help files in that directory>. (A sketch of the whole sequence follows this list.)
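
Here is what those four steps look like as commands, using the Hyper-V module and a couple of made-up folder paths; substitute whatever paths and modules make sense for you.

```powershell
# Steps 1 and 2: on the machine with internet access
Update-Help -Module Hyper-V
Save-Help -Module Hyper-V -DestinationPath C:\HelpFiles

# Step 3: copy the contents of C:\HelpFiles to the isolated machine
# (network share, portable drive, whatever works for you)

# Step 4: on the isolated machine, point Update-Help at the copied folder
Update-Help -Module Hyper-V -SourcePath C:\HyperVHelp
```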

You can see in the screen shot below how steps 1-3 work. You will notice that I created a folder to hold the help files, and then ran Save-Help only for the Hyper-V module (for the sake of simplicity). I then displayed what files were put in the destination path.

Once I had done that, I could transfer the .cab and .xml files to the remote machine. In this instance I created a folder called “HyperVHelp” on the destination machine and put the files in that folder. Then I ran Update-Help. You will note that I used the –Module switch to keep it simple. Had I not done that, my screen would have been filled with red as it tried and failed to update the help for all known modules. The Hyper-V help would still update, but all the error messages are a bit off-putting. So I only updated the module I had help files for.

I ran the Update-Help command a second time, just to show you what happens if I try to update help files that are already up to date. As you can see, nothing major happens, so there are no big concerns in that space.

Review

As you can see, the process for updating and managing PowerShell’s help is not terribly difficult. With a bit of practice and planning you should be able to develop and implement a plan for keeping your PowerShell inline help current and useful.

PowerShell 3.0 Workflow Basics

There are lots of new features and functionality in PowerShell 3.0; a brief overview of them can be found here: http://technet.microsoft.com/en-us/library/hh857339

One of the features that I find most exciting is the new scripting construct, Workflow. If you are familiar with creating a custom Function in PowerShell, then you know most of what you need to know about creating a workflow.

So what’s the difference?  The difference is that when you create a new command using “Workflow” instead of “Function” in a script, the new command has access to a built-in library of common management parameters, such as PSComputerName, which enable multi-computer management scenarios.  This makes your scripts more powerful, and they can be developed in a much simpler fashion.

Below you can see a very basic Workflow that I’ve created and saved on a member server in a domain.

As you can see, the Workflow itself is very basic, doing nothing more than gathering information about the PowerShell environment and writing it out to a text file that will sit on the root of the C: drive. In reality this would need a lot more work, as it assumes that PowerShell is running, which is not a safe assumption if you’re getting this to execute on remote machines.  But ignore that, and assume that you do have a PowerShell console open on your remote machine.
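
For reference, a workflow along the same lines would look something like this; the Get-PowerInfo name and the output path come from the post, but the body is my own rough sketch.

```powershell
# A minimal sketch of a workflow like the one described above
workflow Get-PowerInfo
{
    # InlineScript runs its contents as an ordinary script block
    # on whichever machine the workflow is targeting
    InlineScript {
        # Gather some basic information about the PowerShell environment
        # and write it to a text file on the root of the C: drive
        $PSVersionTable | Out-String | Out-File -FilePath C:\PowerInfo.txt
    }
}
```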

I could have done the exact same thing with “Function” and it would have worked just fine on the local machine on which I ran the script.

However,  a look at the help for Get-PowerInfo reveals a host of parameters that are built in to the workflow.

Of immediate interest is -PSComputerName.  This parameter allows the code to run on remote machines (running PowerShell 3.0 with remoting enabled) without me having to specifically add code to the workflow to manage remote sessions, import modules, and so on.
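
Invoking the workflow against a remote machine is then a one-liner (jfin-dc is the domain controller used in the post):

```powershell
# Runs the workflow body on jfin-dc; the output file lands on that machine's C: drive
Get-PowerInfo -PSComputerName jfin-dc
```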

You can see that my workflow executes successfully, and when I go look at the C: drive on jfin-dc, the output text file has been created.

Obviously, this is a very simple use (and not all that practical in and of itself), but hopefully you can see that there is a great deal of power in this new Workflow construct in PowerShell 3.0.  As a scripting administrator, I can see how this will make my remote management scripts more powerful and easier to write.