Fun with Windows Deployment Services in WS2012 R2

Recently we’ve been having some random strangeness happening with our Windows Deployment Services (WDS) server in the office. A look around the environment didn’t uncover any “A-HA!” moments. All looked as it should. Performance counters and metrics were all where they should have been. But it didn’t change the fact that there was a noticeable, sudden downturn in performance of the WDS environment. We looked at the underlying SAN, the virtualization platform, the network infrastructure, the VMs themselves. They all were returning smiling happy faces and saying “Nothing to See Here.”

Now all this digging around did unearth some other issues that we needed to address, especially around patching of our VMs. I found a couple that somehow hadn’t been patched in over 2 1/2 years. Oops. That’s not really a good thing.  The WDS server wasn’t one of those, but it did have quite a few pending so I took the opportunity to take it offline and do some maintenance and patching (and in the process of running terminal sessions from within VM console sessions from within RDP sessions I managed to trigger updates to our SAN. But that’s a story for a different blog post. Suffice it to say it all ended up just fine).

Patches downloaded, installed, server rebooted. A quick test . . . no change in the performance. Grrrrrrrrr. But not really unexpected.

And so, in the grand tradition of IT expediency and pragmatism I had the thought “I could have built a new server faster than this.” So after consulting with a few colleagues we decided I should do just that. It allowed us to tick a few different boxes anyway, and it just happened to coincide with some unexpected free time in my schedule. So off I went!

General Approach

We decided to go with a side-by-side migration, basing the new WDS server on Windows Server 2012 R2, rather than an in-place upgrade. This was a good choice for us for lots of reasons, not least of which is that it's the approach Microsoft recommends for most things. So we put together our big-picture plan, which looked like this:

  1. Do your homework. I've had some experience with WDS, but not massive amounts in the wild, so before I started down this path I needed to make sure I had enough knowledge to be able to confidently do the work without any major, preventable issues. I had a look through the topics found in course 20415: Implementing a Desktop Infrastructure, and then I headed off to TechNet. This article proved to be a useful starting point for me as well. It also gave me a basic checklist to make sure I didn't miss any important steps.
  2. Plan and install the Windows Server 2012 R2 VM. Using the VM configuration of the original WDS server as a starting point, I made and documented the decisions about the configuration and build of the VM. There were the obvious OS things like "Yes, it has to be a member of the domain", the computer name, and static v. dynamic IP addressing (I went with static, so I could easily change the IP address later to take over the IP of the legacy server, thus reducing the need to reconfigure our network infrastructure). But I also made decisions about some of the less obvious stuff, like how many and what type of virtual disks to use, where to store those disks (SCSI, in the shared storage, thin-provisioned), and how much and what type of memory (dynamic, 2GB start-up, 16GB max). Once that was done I created the VM and the disks, hooked up our WS2012R2 iso and built the VM to spec. I also tried to get as much patching as possible done here so I wouldn't get interrupted when I was doing the WDS work. (There's a rough PowerShell sketch of this kind of VM build after this list.)
  3. Set up the storage. We made the decision to make use of a few of the new features in WS2012 and WS2012R2, as well as trying to future-proof the configuration as much as we reasonably could. What we decided in this case was to take the data disk (our second virtual disk) and set it up within Windows as part of a storage pool. If we need more storage later on we can create new virtual disks and add them to the storage pool. Once we had that, I created a single virtual disk from that pool. Again, in the future we can extend that virtual disk if required, allowing us to relatively quickly and easily increase the storage available to hold our images. I also enabled Data Deduplication on the volume based on that virtual disk. For us, this was all about getting the most out of the storage we have. WDS already de-duplicates the data in the WIMs, but it shouldn't hurt for all of the other files. (This can all be scripted too; see the second sketch after this list.)
  4. Install and configure WDS services. This was probably the most straightforward part of the process. Using the checklist I got from TechNet, I added the Windows Deployment Services role to our new VM. I wanted a new server, but not new images, so from there the configuration was really nothing more than exporting the boot and install wims from the legacy WDS server, and then adding and configuring them. On the legacy server I mapped a drive to the data drive on the new server and used that as the target for all the exports. We made the decision to only export the current wims that we use, not all the historic wims. So I really only had 4 boot wims and 5 install wims to deal with. On top of that I needed to document and copy out the unattendimage.xml files that correspond to those wims. This really wasn't anything more than an exercise in documentation and paying attention to detail. When you export an image from WDS it makes a copy of the wim file, but does not include the other properties/configurations of the image, like unattend files. So I made sure that I documented and copied out all the unattend files that I was going to need, and then I did our image exports. Once those were done I created new images on our new WDS server using the exported WIMs, and then went in and configured each image to use the corresponding unattend file. Fortunately, I only had a handful of images so it wasn't a big deal. If I were to do this again I'd spend a little time digging around in PowerShell to see if there is a command that would allow me to script that, or at least do it in bulk (see the third sketch after this list).
  5. Test the new server. This was an easy one. Since we only use our WDS environment late in the day, I could take a couple of client machines and use them for testing. So that's what I did. I shut down the legacy WDS server, and then changed the IP address of the new server to take over from the legacy server. This meant I did not have to go and change firewall, networking and DHCP settings. I restarted the WDS services and then booted up some clients. The first tests proved to be reasonably successful. The clients all successfully found the server, connected to the TFTP service and deployed images. It did reveal a couple of minor configuration errors I had made. Specifically, I had forgotten to set the boot image priorities and timeouts, and I had used the wrong unattend file for one image. Minor configuration issues that were easily fixed in about 5 minutes. The second test went as planned. No issues. However, the jury is still out on the performance issues. We won't really know until we put it under our normal loads. It seemed quicker, but that was just the eyeball test. Regardless, we have a leaner and meaner WDS server than we did before, so overall a worthwhile project anyway.
  6. Phase out the legacy server. I am quietly confident that the new server will function as required, and initial testing indicates that it will cope with the workload. So we are shutting down the original VM on a semi-permanent basis, but will leave it registered on our virtualization platform for the next 2 weeks. If we encounter any major issues we can quickly shut down the new server and spin up the original. If we are all clear after two weeks I will delete the VM and all its files from our virtualization platform. However, I have taken a copy of the virtual disk that contained all the images for the legacy WDS server, and it is stored on external storage. If push comes to shove I can attach it to a VM and get access to those wim files.
  7. Monitor and maintain. Our initial testing indicates that the new server is performing better than the old one, but I haven't received definitive proof as yet. That will likely come when we do our first large-scale rollouts using the new WDS environment. But all signs are looking good for now.
  8. Write blog post about it. Done.
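
A few sketches to go with the list above. First, for step 2: the VM build can be done with the Hyper-V PowerShell module. This is a hedged sketch only; the VM name, paths, sizes and switch name are made-up examples, not our actual configuration.

# Hedged sketch: names, paths, sizes and switch name are examples
New-VHD -Path "C:\VMs\WDS02\wds02-os.vhdx" -SizeBytes 80GB -Dynamic      # dynamic = thin-provisioned
New-VHD -Path "C:\VMs\WDS02\wds02-data.vhdx" -SizeBytes 500GB -Dynamic
New-VM -Name "WDS02" -MemoryStartupBytes 2GB -VHDPath "C:\VMs\WDS02\wds02-os.vhdx" -SwitchName "External"
Set-VMMemory -VMName "WDS02" -DynamicMemoryEnabled $true -StartupBytes 2GB -MaximumBytes 16GB
Add-VMHardDiskDrive -VMName "WDS02" -ControllerType SCSI -Path "C:\VMs\WDS02\wds02-data.vhdx"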
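
Second, for step 3: the storage pool, virtual disk and dedup can all be set up in PowerShell as well. Again a hedged sketch with example names. Dedup is a feature that has to be installed first, and you'll want to check what Get-PhysicalDisk returns in your own environment before pooling anything.

Add-WindowsFeature FS-Data-Deduplication                 # the dedup feature isn't installed by default
$disks = Get-PhysicalDisk -CanPool $true                 # every disk eligible for pooling
New-StoragePool -FriendlyName "WdsPool" -StorageSubSystemFriendlyName "*Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "WdsPool" -FriendlyName "WdsData" -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName "WdsData" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false
Enable-DedupVolume -Volume "E:"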
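
And on the scripting question in step 4: it turns out Windows Server 2012 R2 ships a WDS PowerShell module. I haven't run these cmdlets in anger, so treat this as a hedged sketch (image names, group names and paths are examples). Note the export side would need to run on a server new enough to have the module; otherwise wdsutil /Export-Image does the same job.

# Hedged sketch: image names, group names and paths are examples
# Export an install image (the wim only; unattend files are still handled separately)
Export-WdsInstallImage -ImageName "Win81-Standard" -ImageGroup "Production" -Destination "E:\Exports\Win81-Standard.wim"
# Import on the new server, attaching its unattend file in one go
Import-WdsInstallImage -Path "E:\Exports\Win81-Standard.wim" -ImageGroup "Production" -UnattendFile "E:\Exports\Win81-Standard-unattend.xml"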

Obviously, this wasn't a hugely technical article; it's more of an overview of the process and what I went through. There is a lot of good stuff on TechNet around WDS server management, so if you're embarking on a WDS project you might want to start there.

Cheers.

NZMCT

Adding .Net 3.5 SP1 to an Azure VM through the Back Door.

If you've been working with Windows Server 2012/2012 R2 you might have noticed that you need the CD/install files handy if you want to install .NET 3.5, which causes problems if you need that feature on an Azure VM. After a bit of digging I found this in the MSDN forums, and it has proven to be handy. I haven't tried it enough to say that it is foolproof, but if you're having this issue it's worth a shot.

1. Go to Windows Update through the Control Panel.

2. Click "check for updates" on the left side. It may take a while to check for updates, like it did on my machine (~5-10 min).

3. Click the “# important update is available” blue button next.

4. On the next screen you will be shown important updates that are ready to be installed. You should have an update called “Update for Microsoft .NET Framework 3.5 for x64-based Systems (KB3005628)”. With that update checked, click install on the bottom. There will be other updates available to you as well. I haven’t thoroughly tested for any combinations that specifically do/don’t work, so try at your own risk!

Assuming that update installs successfully, let's go back to Server Manager and try the installation.

1. Go back to your Server Manager and in the top right corner click Manage -> Add Roles and Features.

2. Click Next 4 times until you get to Features. Once in Features, check the box for ".NET Framework 3.5 Features" and then click Next on the bottom.

3. On the next page, if you see the yellow warning box prompting you to specify an alternate source path, just click 'x' to dismiss it. Then click Install at the bottom of the page. If all goes well you should eventually see that the installation succeeded.
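
If you prefer PowerShell to clicking through Server Manager, the equivalent (once KB3005628 is installed) should just be a normal feature install. I've only done this through the GUI myself, so consider this a hedged sketch:

Get-HotFix -Id KB3005628                   # confirm the update actually landed
Install-WindowsFeature NET-Framework-Core  # .NET 3.5; no /Source needed once the update is in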

If you'd like more detail on the whys and wherefores:

http://support2.microsoft.com/kb/3005628

UPDATE: Here's a blog by an Aussie friend of mine that addresses this issue in a non-Azure environment.

http://www.windowspcguy.net/?p=306

If you’re fighting this battle, definitely worth a read.

Enabling Data Dedup in Windows 8

I run a lot of Hyper-V VMs on my Windows 8 laptop, and my second hard drive is getting full with all of my inactive vhd files. I was thinking "It's too bad Windows 8 doesn't have volume deduplication like Windows Server 2012." But I decided to have a bit of a nosy around the internet to see if there was a 3rd party solution.

Lo and behold! (And really, it shouldn’t surprise me by now, but it did a little bit anyway.) I found a really useful site that taught me how to hack the 2012 Dedup into my Win8 laptop.

NOTE: Doing this will put your computer in an unsupported state (due to mixing and matching SKUs of the Windows code). It is up to you to assess the risk/reward equation of these actions.

Here’s the link:

http://www.scconfigmgr.com/2013/04/13/enable-deduplication-for-your-lab-environment-in-windows-8/

One of the dism commands is a little hard to copy/paste from his website, so here it is, split across lines with carets for readability (run it from cmd.exe in the directory holding the cab files, or join it back onto one line):

dism /online /add-package ^
 /packagepath:Microsoft-Windows-VdsInterop-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab ^
 /packagepath:Microsoft-Windows-VdsInterop-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab ^
 /packagepath:Microsoft-Windows-FileServer-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab ^
 /packagepath:Microsoft-Windows-FileServer-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab ^
 /packagepath:Microsoft-Windows-Dedup-Package~31bf3856ad364e35~amd64~~6.2.9200.16384.cab ^
 /packagepath:Microsoft-Windows-Dedup-Package~31bf3856ad364e35~amd64~en-US~6.2.9200.16384.cab

And since there is no GUI for dedup in Windows 8, here's a link to the online PowerShell help for the dedup cmdlets:

http://technet.microsoft.com/en-us/library/hh848450.aspx
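
To turn dedup on for a data volume and kick off an optimization job manually, it's roughly this (D: is an example drive letter, and the unsupported-state warning above still applies):

Enable-DedupVolume -Volume "D:"
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0    # optimize files straight away; handy for a lab
Start-DedupJob -Volume "D:" -Type Optimization
Get-DedupStatus -Volume "D:"                          # check the space savings once the job finishes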

DHCP Resiliency in Windows Server 2012

I take pleasure in the little things in life: reading a good book, going to see a cool act that nobody else cares about (Eli "Paperboy" Reed, anyone?). So while there is lots and lots of really cool new stuff in Windows Server 2012 (Fibre Channel in Hyper-V, Virtual Networks, PowerShell improvements . . . and the list goes on), I want to have a look at one of the little things. Nothing too fancy or really big and noticeable, but something I think many people will find useful, and easy to implement: DHCP Scope Failover.

In the past, if an organization wanted to reduce their dependency on a single DHCP server, they needed to implement a DHCP Split Scope design or build Failover Clusters. It works, but in my experience, even with the Split Scope Wizard, it was a bit of a pain to design and implement. And with a split scope, the reality is that you're just splitting your overall pool between multiple servers. If a server goes down, the part of the pool it was responsible for is unavailable/unresponsive. If the server stays down a long time, that can cause you problems and may require you to build a new server with a restored copy of the DHCP database from the original server.

Windows Server 2012 provides us with a much simpler, more elegant solution: DHCP Failover. At its simplest you need two Windows Server 2012 servers, configured with the DHCP Server role, as indicated in Figure 1.

Once you have two servers, choose one of them and create and configure a scope, just as you normally would. Configure the options you require, the lease expiry and so on; feel free to use whatever management tools you prefer for this. One thing to note: if you are planning on configuring a scope for failover, you will want to make sure the options are scope-based, not server-based (unless both DHCP servers are going to have identical server options, but I like to keep my scopes self-contained). In Figure 2 you can see a private network scope with fairly common scope options configured.

Now that I've got a scope configured on one of my DHCP servers (JFIN-SRV), I can configure it for failover. You can configure failover for either a balanced workload approach (an active-active relationship between the two servers) or a hot failover (active-passive). Using the DHCP Manager, either of these options is fairly straightforward to configure. In DHCP Manager, connected to the DHCP server currently hosting the scope, right-click on the scope and choose Configure Failover…. The first screen confirms which scope or scopes you are configuring failover for. Choose accordingly and click Next. The next screen asks you to choose or add the partner server. If you already have a scope set up for failover, then that server should be available in the drop-down. If not, you will need to click on Add Server and then enter the name of the other DHCP server. Once you have the failover server identified, click Next.

The “Create a new failover relationship” page is the most important page in the wizard. It is here that you get to configure the parameters of the relationship between the two DHCP servers. Choose a Relationship Name that makes sense for your ongoing management.

Figure 3: DHCP Failover Hot Standby

Figure 4: DHCP Failover Load Balance

Those of you with a keen eye will likely have noticed that there are two versions of this page. Figure 3, "Hot standby", is the Active-Passive approach, and Figure 4, "Load balance", shows the Active-Active. We'll discuss the Maximum Client Lead Time and State Switchover Interval settings a little later. First though, we need to go over the differences between the two modes. If you choose "Hot standby" then you need to choose whether the server you currently have the DHCP Manager connected to will be the Active or the Standby server. Additionally, you need to configure what percentage of the scope will be reserved for the Standby server. You'll notice that it defaults to only 5%. Reserving addresses for the standby does reduce the number the active server can lease, but at 5% it's a pretty small dent.

If you choose "Load balance", then the two servers will share the workload in the percentages you choose. Both servers know about the entire scope (a bit different from the Hot standby mode) and use an internal algorithm based on the MAC address of the requestor to determine which server will handle the request and with what address. If you change the percentages, the algorithm changes accordingly. It's pretty hands-off. To secure the failover messages between the servers, set a Shared Secret.

That leaves two settings that need to be configured, and they control how quickly full failover occurs. A scope (or server) configured for DHCP Failover has three main states of being: Normal, Communication Interrupted and Partner Down.

Obviously, "Normal" is the state you want to see most of the time. If a server loses communication with its partner, then the state switches to "Communication Interrupted". During this state you can manually trigger a failover to the remaining server if you know that the failed server is not coming back soon, and the remaining server will take over responsibility for the entire scope. The remaining server waits out the Maximum Client Lead Time before taking control of the entire scope. If you want the remaining server to switch automatically from "Communication Interrupted" to "Partner Down" (thus triggering the Maximum Client Lead Time interval), set the State Switchover Interval value to determine how long it will stay in "Communication Interrupted" before switching over.

You will want to consider the impact these two properties may have on the load balance or standby reservation percentages. Especially in a Hot standby scenario, if you set a long Maximum Client Lead Time and State Switchover Interval, then you might think about increasing the percentage held on the Standby to better service requests until full failover occurs. Having said that, you will want to have some idea how many IP addresses are normally refreshed within whatever timeframes you configure, and make sure that whatever percentages you set will support that.

Once you have finished on that page of the wizard, click Next and then click Finish to complete the configuration. When you are done, you can use the DHCP Manager to see that the scope now has a Status of "Active" and is configured for a failover relationship, as shown in Figure 6. You can manage the failover by right-clicking on the scope and choosing the appropriate replication options to move all active leases from one member of the partnership to the other (perhaps for maintenance of one server); there is also an option to deconfigure failover.
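
If you'd rather script the whole thing, the DHCP PowerShell cmdlets in WS2012 cover everything the wizard does. A hedged sketch follows; the partner name JFIN-SRV2, the scope, the secret and the timings are made-up example values:

# Load balance mode; MaxClientLeadTime and StateSwitchInterval map to the wizard settings above
Add-DhcpServerv4Failover -ComputerName "JFIN-SRV" -Name "JFIN-Failover" -PartnerServer "JFIN-SRV2" -ScopeId 192.168.1.0 -LoadBalancePercent 50 -MaxClientLeadTime 01:00:00 -StateSwitchInterval 00:45:00 -AutoStateTransition $true -SharedSecret "MySecret"
# For hot standby, swap -LoadBalancePercent for -ServerRole Active -ReservePercent 5
Get-DhcpServerv4Failover -ComputerName "JFIN-SRV"     # confirm the relationship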

Pretty easy to configure, easy to manage. Pleasure in the little things.

Thoughts on Exchange 2013 Preview Install on Windows 2012 RC

I know, I know, I know: the RTM bits for WS2012 are available now. But I already had an RC environment spun up and available to me. Disclaimer: This isn't going to be a big technical blog outlining the steps to take when doing an install. There is a bit of technical detail and reference at the end, in the TechStuff section. Think of it more as a running diary of my experience getting this in. Having said that, let's go!

First thoughts:

Downloaded/extracted/mounted the install code into my VM. Found setup.exe and ran it.  Just wanted to see what would happen.  First few screens were the normal stuff: Licensing, Error reporting etc. One thing I noticed that made me go “Oh, that’s kind of nice” was when it prompted to look for Exchange updates before diving into the install.  Not an earth-shattering turn of events, but a nice touch.

The first important task (IMHO): PreRequisite Check. Noticed that the only role options were for the CAS and Mailbox roles. No Hub Transport, no UM. I hadn't looked at any of the technical documentation yet, so this raised an eyebrow. It took me all of about 2 minutes to learn about the new architecture (http://technet.microsoft.com/en-us/library/jj150540(v=exchg.150).aspx#BKMK_Arch).

Since I hadn’t done any AD preparation, I got prompted for the Exchange Organization name, followed by a choice on turning Anti-Malware on/off. Setup said it needed internet access to update Anti-Malware, and since I don’t have that on these VMs, I chose no.  I’ll get some access and turn it on later.

Prompted for CAS external name settings.  I’ll configure those later.

Customer Experience Improvement Program. No.

Finally, setup actually ran the prerequisite check. It had the option to let setup install any required Windows Features. Finally! Not sure why this wasn't included in Exchange 2010 setup. I said yes. If you'd rather have all these enabled prior to running setup, I've put the PowerShell cmdlets to install all the features in the TechStuff section.

Still had some prereq problems, but they had to do with patches/updates that aren't Windows Features, things like the Office 2010 Filter Pack SP1. The download URLs were provided, so it was pretty painless to deal with these and forge ahead.

One pain point here though. After downloading and installing everything that it had indicated, when I reran the PreReq Check I got an error stating that Exchange Server 2013 Preview isn't compatible with the Microsoft Visual C++ 11 Beta Redistributable. I now had to uninstall it and rerun the PreReq check. Not hard, just annoying.

***UPDATE: The setup GUI did not install the “Windows Identity Foundation 3.5” Windows feature. This is required for the Exchange Control Panel (ECP) to work properly.  Even though it said it was going to install all required Windows Features, apparently it didn’t.  Watch out for this!
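
If you hit this, adding the missing feature by hand is a one-liner (same feature name as in the TechStuff list at the end):

Add-WindowsFeature Windows-Identity-Foundation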

Once past that, it went to the actual install itself.  My only complaint is that the setup progress screen is bereft of much information. I’d like to see more information than “Step 4 of 15” and a percentage.  I’d like to know what each step was doing, at least in a big-picture kind of way.

Unfortunately, at step 8 out of 15, my laptop crashed, taking the VMs with it. When I recovered and fired up the Exchange 2013 VM and reran setup, it detected that a previous instance of setup didn't finish and prompted me to attempt to complete it. I thought "let's see how well this works". In two words: It didn't. I let it sit for about an hour, and it never made any progress one way or the other. I cancelled out of the setup and tried to remove the installation manually.

First, I went to the Programs and Features control panel and uninstalled Exchange 2013 Preview. That worked. So far so good. I reran setup, but this time I got errors about Global Updates and permissions. A little digging made me realize that it wasn't a clean uninstall. I had to manually delete the "C:\Program Files\Microsoft\Exchange Server" directory and the "Microsoft Exchange System Objects" container in ADUC.

I took this opportunity to have a look at the command-line parameters for the Exchange 2013 setup.  A brief look at the inline help didn’t reveal anything overly exciting, so I had a check of the Prepare Topology help (setup.exe /help:PrepareTopology).

I ran the following:

Setup.exe /PrepareSchema /IAcceptExchangeServerLicensingTerms

Setup.exe /PrepareAD /OrganizationName:"NZMCT Org" /IAcceptExchangeServerLicensingTerms

Setup.exe /PrepareDomain /IAcceptExchangeServerLicensingTerms

Anyone else see the switch that I don’t like?  Especially since this isn’t a PowerShell cmdlet so there’s no AutoComplete. Arrrggghhhhhhh.

Once my topology was prepared I reran the setup GUI, and all went more or less according to plan.   I think had I not had a laptop crash, this would have been a straightforward install process.  Others have told me they had no problems.  I’ll take their word for it. 🙂

So What Now?

Now that I've got an environment, I'm going to start having a play. There are the obvious differences in architecture and management (say goodbye to the Exchange Management Console; PowerShell and the ECP are your friends), but I'm sure there's all kinds of good stuff in there, and I'm looking forward to figuring it all out.

TechStuff

# Windows Features needed by the Exchange 2013 Preview prerequisite check (WS2012 RC)
import-module servermanager
add-windowsfeature telnet-client, RSAT-ADDS, net-framework-45-core, windows-identity-foundation,
    Web-Static-Content, Web-Default-Doc, Web-Http-Errors, web-asp-net, web-asp-net45, Web-Net-Ext,
    Web-ISAPI-Ext, web-isapi-filter, Web-Http-Logging, Web-Log-Libraries, Web-Http-Tracing,
    Web-Windows-Auth, Web-Filtering, Web-Stat-Compression, Web-Dyn-Compression, Web-Mgmt-Console,
    Web-Scripting-Tools, Web-Client-Auth, server-media-foundation, MSMQ-Server, MSMQ-Directory