Fun with Windows Deployment Services in WS2012 R2

Recently we’ve been having some random strangeness with our Windows Deployment Services (WDS) server in the office. A look around the environment didn’t uncover any “A-HA!” moments; performance counters and metrics were all where they should have been. But that didn’t change the fact that there was a noticeable, sudden downturn in the performance of the WDS environment. We looked at the underlying SAN, the virtualization platform, the network infrastructure, and the VMs themselves. They all returned smiling happy faces and said “Nothing to See Here.”

Now all this digging around did unearth some other issues that we needed to address, especially around patching of our VMs. I found a couple that somehow hadn’t been patched in over 2 1/2 years. Oops. That’s not really a good thing. The WDS server wasn’t one of those, but it did have quite a few pending updates, so I took the opportunity to take it offline and do some maintenance and patching. (In the process of running terminal sessions inside VM console sessions inside RDP sessions, I also managed to trigger updates to our SAN. But that’s a story for a different blog post. Suffice it to say it all ended up just fine.)

Patches downloaded, installed, server rebooted. A quick test . . . no change in the performance. Grrrrrrrrr. But not really unexpected.

And so, in the grand tradition of IT expediency and pragmatism I had the thought “I could have built a new server faster than this.” So after consulting with a few colleagues we decided I should do just that. It allowed us to tick a few different boxes anyway, and it just happened to coincide with some unexpected free time in my schedule. So off I went!

General Approach

We decided to go with a side-by-side migration, basing the new WDS server on Windows Server 2012 R2, rather than an in-place upgrade. This was a good choice for us for lots of reasons, not least of which is that it’s the approach Microsoft recommends for most migrations. So we put together our big-picture plan, which looked like this:

  1. Do your homework. I’ve had some experience with WDS, but not massive amounts in the wild, so before I started down this path I needed to make sure I had enough knowledge to confidently do the work without any major, preventable issues. I had a look through the topics in course 20415 (Implementing a Desktop Infrastructure), and then headed off to TechNet. This article proved to be a useful starting point for me, and it gave me a basic checklist to make sure I didn’t miss any important steps.
  2. Plan and install the Windows Server 2012 R2 VM. Using the VM configuration of the original WDS server as a starting point, I made and documented the decisions about the configuration and build of the VM. There were the obvious OS things like “Yes, it has to be a member of the domain”, the computer name, and static vs. dynamic IP addressing (I went with static, so I could easily change the IP address later to take over the IP of the legacy server, thus reducing the need to reconfigure our network infrastructure). I also made decisions about some of the less obvious stuff, like how many and what type of virtual disks to use, where to store those disks (SCSI, on the shared storage, thin-provisioned), and how much and what type of memory (dynamic, 2GB start-up, 16GB max). Once that was done I created the VM and the disks, hooked up our WS2012R2 ISO and built the VM to spec. I also did as much patching as possible here so I didn’t get interrupted when I was doing the WDS work.
  3. Set up the storage. We decided to make use of a few of the new features in WS2012 and WS2012 R2, and to future-proof the configuration as much as we reasonably could. In this case that meant taking the data disk (our second virtual disk) and setting it up within Windows as part of a storage pool. If we need more storage later on we can create new virtual disks and add them to the pool. From that pool I created a single virtual disk, which we can extend in the future if required, letting us relatively quickly and easily increase the storage available to hold our images. I also enabled Data Deduplication on the volume based on that virtual disk. For us, this was all about getting the most out of the storage we have. WDS already de-duplicates the data in the WIMs, but it shouldn’t hurt for all of the other files. (There’s a rough PowerShell sketch of this step after the list.)
  4. Install and configure WDS services. This was probably the most straightforward part of the process. Using the checklist I got from TechNet, I added the Windows Deployment Services role to our new VM. I wanted a new server, but not new images, so from there the configuration was really nothing more than exporting the boot and install WIMs from the legacy WDS server, and then adding and configuring them. On the legacy server I mapped a drive to the data drive on the new server and used that as the target for all the exports. We decided to export only the current WIMs that we use, not all the historic ones, so I really only had 4 boot WIMs and 5 install WIMs to deal with. On top of that I needed to document and copy out the unattendimage.xml files that correspond to those WIMs. This really wasn’t anything more than an exercise in documentation and paying attention to detail. When you export an image from WDS it makes a copy of the WIM file, but does not include the other properties/configurations of the image, like unattend files. So I made sure that I documented and copied out all the unattend files that I was going to need, and then did our image exports. Once those were done I created new images on our new WDS server using the exported WIMs, and then went in and configured each image to use the corresponding unattend file. Fortunately, I only had a handful of images so it wasn’t a big deal. If I were to do this again I’d spend a little time digging around in PowerShell to see if there is a command that would let me script that, or at least do it in bulk (see the second sketch after the list).
  5. Test the new server. This was an easy one. Since we only use our WDS environment late in the day, I could take a couple of client machines and use them for testing. So that’s what I did. I shut down the legacy WDS server, and then changed the IP address of the new server to take over from the legacy server. This meant I did not have to go and change firewall, networking and DHCP settings. I restarted the WDS services and then booted up some clients. The first tests proved reasonably successful. The clients all found the server, connected to the TFTP service and deployed images. Testing did reveal a couple of minor configuration errors I had made: specifically, I had forgotten to set the boot image priorities and timeouts, and I had used the wrong unattend file for one image. Minor issues that were fixed in about 5 minutes. The second test went as planned, no issues. However, the jury is still out on the performance problems; we won’t really know until we put it under our normal loads. It seemed quicker, but that was just the eyeball test. Regardless, we have a leaner and meaner WDS server than we did before, so overall a worthwhile project anyway.
  6. Phase out the legacy server. I am quietly confident that the new server will function as required, and initial testing indicates that it will cope with the workload. So we are shutting down the original VM on a semi-permanent basis, but will leave it registered on our virtualization platform for the next 2 weeks. If we encounter any major issues we can quickly shut down the new server and spin up the original. If we are all clear after two weeks I will delete the VM and all its files from our virtualization platform. However, I have taken a copy of the virtual disk that contained all the images for the legacy WDS server, and it is stored on external storage. If push comes to shove I can attach it to a VM and get access to those WIM files.
  7. Monitor and maintain. Our initial testing indicates that the new server is performing better than the last, but I haven’t received definitive proof as yet. That will likely come when we do our first large-scale rollouts using the new WDS environment. But all signs are looking good for now.
  8. Write blog post about it. Done.
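
For step 3, the storage setup looks something like this in PowerShell. This is a minimal sketch, not our exact build: the pool name, disk name, size and drive letter are all placeholders, and Enable-DedupVolume assumes the Data Deduplication role service is installed.

#Pool the poolable data disk(s)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "WDSPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
#Carve a thin-provisioned virtual disk out of the pool; it can be extended later if required
New-VirtualDisk -StoragePoolFriendlyName "WDSPool" -FriendlyName "WDSData" -ResiliencySettingName Simple -ProvisioningType Thin -Size 500GB
#Bring it online as a formatted volume
Get-VirtualDisk -FriendlyName "WDSData" | Get-Disk | Initialize-Disk -PassThru | New-Partition -DriveLetter E -UseMaximumSize | Format-Volume
#Enable Data Deduplication on the new volume
Enable-DedupVolume -Volume E: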
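
And for step 4, the WDS module in WS2012 R2 does look scriptable. I haven’t run this end to end, so treat it as a starting point and double-check the parameters with Get-Help; the image name, group and paths below are made up.

#On the legacy server: export an install image to the share on the new server
Export-WdsInstallImage -ImageName "Win81 Standard" -ImageGroup "ImageGroup1" -Destination "\\NEWWDS\Exports\Win81Standard.wim"
#On the new server: import the wim, then point the image at its unattend file
Import-WdsInstallImage -Path "E:\Exports\Win81Standard.wim" -ImageGroup "ImageGroup1"
Set-WdsInstallImage -ImageName "Win81 Standard" -ImageGroup "ImageGroup1" -UnattendFile "E:\Exports\Win81Standard-unattend.xml"

(There are matching Export-WdsBootImage and Import-WdsBootImage cmdlets for the boot wims.)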

Obviously, this wasn’t a hugely technical article, more an overview of the process and what I went through. There is a lot of good stuff on TechNet around WDS server management, so if you’re embarking on a WDS project you might want to start there.



Curious about Windows 10?

Windows 10 Technical Preview Fundamentals for IT Pros.

The Technical Preview (TP) for Windows 10 has been out for a few weeks now, and has been downloaded millions of times (according to Microsoft). So what do you do with it? It’s all well and groovy to spin up a VM, put Windows 10 on it and take it for a spin. You’ll be able to see the new Start Screen and take advantage of the app docking (up to 4 at a time now) and the multiple desktops. All cool stuff, especially for the end-user. But if you’re the Windows desktop team lead, or a Windows sysadmin, you might be wondering “So what? What’s in it for me?” Fair enough, too.

You can dig through the Windows 10 TP doco and blogs etc.  Places like this:

Or this:

Or even this:

And I’ll admit, there is some enjoyment to be had diving in, digging around and delving into the details. If you have the time. And that’s always the big gotcha. IF YOU HAVE THE TIME. I know I don’t always have the time. So if you don’t have as much time as you’d like, but you want an overview of “What’s new for IT Pros” in the Windows 10 TP, then check out the Microsoft Virtual Academy session this Thursday/Friday (depending on your time zone).

It’s running from 6am Friday the 21st of November for all the cool kids (i.e. the ones who live in New Zealand), for 4 hours. So it will be a bit of an early start. But look at the bright side, you’ll still have most of your day left to do other cool stuff!

Windows 10 Technical Preview Fundamentals for IT Pros.

It’s running as part of the Microsoft Virtual Academy (MVA), so you will need to set up a logon for that (if you don’t already have one). And don’t worry, if you can’t make the live broadcast, it is being recorded and will be made available on demand through MVA at a later date.


Adding .Net 3.5 SP1 to an Azure VM through the Back Door.

If you’ve been working with Windows Server 2012/2012 R2 you might have noticed that you need the CD/install files handy if you want to install .NET 3.5, which causes problems if you need that feature on an Azure VM. After a bit of digging I found this in the MSDN forums, and it has proven handy. I haven’t tried it enough to say that it is foolproof, but if you’re having this issue it’s worth a shot.

1. Go to Windows Update through the Control Panel.

2. Click “check for updates” on the left side. It may take a while (~5-10 min), as it did on my machine.

3. Click the “# important update is available” blue button next.

4. On the next screen you will be shown important updates that are ready to be installed. You should have an update called “Update for Microsoft .NET Framework 3.5 for x64-based Systems (KB3005628)”. With that update checked, click install on the bottom. There will be other updates available to you as well. I haven’t thoroughly tested for any combinations that specifically do/don’t work, so try at your own risk!

Assuming that update installs successfully, let’s go back to Server Manager and try the installation.

1. Go back to your Server Manager and in the top right corner click Manage -> Add Roles and Features.

2. Click Next 4 times until you get to Features. Once in Features, check the box for “.NET Framework 3.5 Features” and then click Next at the bottom.

3. On the next page, if you see the yellow warning box prompting you to specify an alternate source path, just click the ‘x’ to dismiss it. Then click Install at the bottom of the page. If all goes well you should eventually see that the installation succeeded.
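
For what it’s worth, once that update is on the box, steps 1-3 above should collapse to a single line from an elevated PowerShell prompt. I’ve only done this through Server Manager myself, but NET-Framework-Core is the standard feature name:

#Installs .NET Framework 3.5; once KB3005628 has landed no -Source media should be needed
Install-WindowsFeature NET-Framework-Core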

If you’d like more detail on the whys and wherefores:

UPDATE: Here’s a blog by an Aussie friend of mine that addresses this issue in a non-Azure environment.

If you’re fighting this battle, definitely worth a read.

Using PowerShell to Register MOC VMs

A few weeks ago my friend Telmo blogged about a very elegant solution for handling the VMs that MOC courses use. I was really impressed with what he came up with.  If you are an MCT then I would definitely recommend reading that post. Thanks Telmo!

You can find it here:

That made me think about some simpler scripts that I have written that may also be of use. I made some tweaks to target MOC VMs specifically, but these scripts are easily portable to any environment where you want or need to bulk import VMs.

The code below gives you a script that will do two main tasks:

1. Create some standardized private virtual switches in Hyper-V. The names match those used by the vast majority of MOC VMs, so there are more switches in there than you are likely to need, but it is easy to adjust the name and number of switches being created.

2. Enumerate and import all the VMs found in a given path. For this to work you need to feed it the parent folder path where all of the VMs reside. If you forget to specify a path when you run the script, basic inline help explains the syntax.
I put a 5-second delay between each import. I have found that if I don’t, I sometimes get strange errors on the VM imports; it feels like the script is running faster than the Hyper-V administration services can process the requests. With the 5-second delay, that problem basically disappears.

I’ve included the code twice: one version for Windows Server 2008 (& R2) Hyper-V and one that will run on Windows Server 2012.

Windows Server 2008 Hyper-V
(This requires the Hyper-V PowerShell module from CodePlex. I didn’t get fancy by doing a check; I just load it in the first line.)

import-module HyperV
If (!$args){
" "
"Usage of the script is as follows:"
"<path>\importvm.ps1 <VM parent folder>"
" "
"<VM Parent folder> is the path to the directory that holds the Hyper-V virtual machine directories."
"This script will automatically register all VMs that are in that path and create the private virtual switches if required."
exit
}
#Create the private network switches
"Creating the 'Private Network' virtual switches if required"
$switchlist = get-vmswitch | where {$_.Name -ilike "Private Network*"}
If (!$switchlist){
new-vmprivateswitch "Private Network"
"Private Network has been created."
new-vmprivateswitch "Private Network 2"
"Private Network 2 has been created."
new-vmprivateswitch "Private Network A"
"Private Network A has been created."
new-vmprivateswitch "Private Network B"
"Private Network B has been created."
}
Else{
"Private Network switches may already exist. Confirm that the one you require is in the list below."
"If it is not, you may need to create it manually."
get-vmswitch | format-table Name -AutoSize
}
#List the folders in the VM parent directory and map them to an array.
$vms = get-childitem $args[0] | where {$_.PSIsContainer}
#Parse the array and import each VM, pausing between imports
foreach ($vm in $vms){
import-vm -path $vm.FullName
start-sleep -Seconds 5
}
#Show the results
get-vm | format-table Name,State,Status -AutoSize


Windows Server 2012

If (!$args){
" "
"Usage of the script is as follows:"
"<path>\importvm.ps1 <VM parent folder>"
" "
"<VM Parent folder> is the path to the directory that holds the Hyper-V virtual machine directories."
"This script will automatically register all VMs that are in that path and create the private virtual switches if required."
exit
}
#Create the private network switches
"Creating the Microsoft Learning virtual switches if required"
$switchlist = get-vmswitch -SwitchType Private | where {$_.Name -ilike "Private Network*"}
If (!$switchlist){
new-vmswitch "Private Network" -SwitchType Private
"Private Network has been created."
new-vmswitch "Private Network 2" -SwitchType Private
"Private Network 2 has been created."
new-vmswitch "Private Network A" -SwitchType Private
"Private Network A has been created."
new-vmswitch "Private Network B" -SwitchType Private
"Private Network B has been created."
}
Else{
"Private Network switches may already exist. Confirm that the one you require is in the list below."
"If it is not, you may need to create it manually."
get-vmswitch -SwitchType Private | format-table Name -AutoSize
}
#List the export files in the course directory and map them to an array.
$vms = Get-ChildItem $args[0] -Recurse | where Name -ilike "*.exp"
#Parse the array and import each VM, pausing between imports
foreach ($vm in $vms){
"`nImporting {0}" -f $vm.FullName
import-vm -path $vm.FullName
start-sleep -Seconds 5
}
#Show the results
get-vm | format-table Name,State,Status -AutoSize

Hopefully, this will get you started down the path to scripting more of your common Hyper-V tasks.



Enabling Data Dedup in Windows 8

I run a lot of Hyper-V VMs on my Windows 8 laptop, and my second hard drive is getting full with all of my inactive VHD files. I was thinking “It’s too bad Windows 8 doesn’t have volume deduplication like Windows Server 2012.” But I decided to have a bit of a nosy around the internet to see if there was a 3rd-party solution.

Lo and behold! (And really, it shouldn’t surprise me by now, but it did a little bit anyway.) I found a really useful site that taught me how to hack the 2012 Dedup into my Win8 laptop.

NOTE: Doing this will put your computer in an unsupported state (due to mixing and matching SKUs of the Windows code). It is up to you to assess the risk/reward equation of these actions.

Here’s the link:

One of the dism commands is a little hard to copy/paste from his website, so here it is:

dism /online /add-package / / / / / /

And since there is no gui for dedup in Windows 8, here’s a link to the online PowerShell help for the dedup cmdlets:
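
To give you a quick taste, and assuming the hack took (D: is just my example volume), the cmdlets behave the same as they do on Server 2012:

Import-Module Deduplication
#Turn dedup on for the volume holding the inactive vhd files
Enable-DedupVolume -Volume D:
#Kick off an optimization job now rather than waiting for the schedule
Start-DedupJob -Volume D: -Type Optimization
#Check the space savings
Get-DedupStatus -Volume D: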

DHCP Resiliency in Windows Server 2012

I take pleasure in the little things in life: reading a good book, going to see a cool act that nobody else cares about (Eli “Paperboy” Reed, anyone?). So while there is lots and lots of really cool new stuff in Windows Server 2012 (Fibre Channel in Hyper-V, virtual networks, PowerShell improvements... and the list goes on), I want to have a look at one of the little things. Nothing too fancy or big and noticeable, but something I think many people will find useful and easy to implement: DHCP Scope Failover.

In the past, if an organization wanted to reduce its dependency on a single DHCP server, it needed to implement a DHCP split-scope design or build a failover cluster. That works, but in my experience, even with the Split Scope Wizard, it was a bit of a pain to design and implement. And with a split scope, the reality is that you’re just dividing your overall pool between multiple servers. If a server goes down, the part of the pool it was responsible for becomes unavailable. If the server stays down a long time, that can cause problems and may require you to build a new server with a restored copy of the DHCP database from the original.

Windows Server 2012 provides us with a much simpler, more elegant solution: DHCP Failover. At its simplest you need two Windows Server 2012 servers, configured with the DHCP Server role, as indicated in Figure 1.

Once you have two servers, choose one of them and create and configure a scope, just as you normally would. Configure the options you require, the lease expiry and so on; feel free to use whatever management tools you prefer for this. One thing to note: if you are planning on configuring a scope for failover, you will want to make sure the options are scope-based, not server-based (unless both DHCP servers are going to have identical server options, but I like to keep my scopes self-contained). In Figure 2 you can see a private network scope with fairly common scope options configured.
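
If you’d rather not click through the console, the new DhcpServer PowerShell module can do the same job. A quick sketch, with made-up addressing (JFIN-SRV is the server from my screenshots):

#Create the scope and set a couple of common scope options
Add-DhcpServerv4Scope -ComputerName JFIN-SRV -Name "Private Network" -StartRange 192.168.10.50 -EndRange 192.168.10.200 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -ComputerName JFIN-SRV -ScopeId 192.168.10.0 -Router 192.168.10.1 -DnsServer 192.168.10.10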

Now that I’ve got a scope configured on one of my DHCP servers (JFIN-SRV), I can configure it for failover. You can configure failover either for a balanced workload (an active-active relationship between the two servers) or for hot standby (active-passive). Using DHCP Manager, either option is fairly straightforward to configure. In DHCP Manager, connected to the DHCP server currently hosting the scope, right-click the scope and choose Configure Failover…. The first screen confirms which scope or scopes you are configuring failover for. Choose accordingly and click Next. The next screen asks you to choose or add the partner server. If you already have a scope set up for failover, that server should be available in the drop-down. If not, you will need to click Add Server and enter the name of the other DHCP server. Once you have the failover partner identified, click Next.

The “Create a new failover relationship” page is the most important page in the wizard. It is here that you get to configure the parameters of the relationship between the two DHCP servers. Choose a Relationship Name that makes sense for your ongoing management.

Figure 3: DHCP Failover Hot Standby

Figure 4: DHCP Failover Load Balance

Those of you with a keen eye will have noticed that there are two versions of this page. Figure 3, “Hot standby”, is the active-passive approach, and Figure 4, “Load balance”, shows the active-active. We’ll discuss the Maximum Client Lead Time and State Switchover Interval settings a little later; first we need to go over the differences between the two modes. If you choose “Hot standby” then you need to decide whether the server you currently have DHCP Manager connected to will be the Active or the Standby server. Additionally, you need to configure what percentage of the scope will be reserved for the standby server. You’ll notice that it defaults to only 5%; while reserving addresses does reduce the number the active server can lease, it’s a pretty small number.

If you choose “Load balance”, then the two servers share the workload in the percentages you choose. Both servers know about the entire scope (a bit different from hot standby mode) and use an internal algorithm based on the MAC address of the requestor to determine which server will handle the request and with what address. If you change the percentages, the algorithm adjusts; it’s pretty hands-off. To secure the failover messages between the servers, set a Shared Secret.

That leaves two settings to configure, and they control the speed with which full failover occurs. A scope (or server pair) configured for DHCP Failover has three main states: Normal, Communication Interrupted and Partner Down.

Obviously, “Normal” is the state you want to see most of the time. If a server loses communication with its partner, the state switches to “Communication Interrupted”. During this state you can manually trigger a failover to the remaining server if you know that the failed server is not coming back soon; the remaining server then waits out the Maximum Client Lead Time before taking control of the entire scope. If you want the remaining server to switch from “Communication Interrupted” to “Partner Down” automatically (thus triggering the Maximum Client Lead Time interval), set the State Switchover Interval to determine how long it stays in “Communication Interrupted” before switching over.

You will want to consider the impact these two properties may have on the load balance or standby reservation percentages. Especially in a Hot standby scenario, if you set a long Maximum Client Lead Time and State Switchover Interval, then you might think about increasing the percentage held on the Standby to better service requests until full failover occurs. Having said that, you will want to have some idea how many IP addresses are normally refreshed within whatever timeframes you configure, and make sure that whatever percentages you set will support that.

Once you have finished on that page of the wizard, click Next and then Finish to complete the configuration. When you are done, you can use DHCP Manager to see that the scope now has a Status of “Active” and is configured for a failover relationship, as shown in Figure 6. Ongoing failover management is done by right-clicking the scope: there are replication options to move all active leases from one member of the partnership to the other (perhaps for maintenance of one server), as well as an option to deconfigure failover.
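
Incidentally, the wizard isn’t the only way to do this: the DhcpServer module exposes the same settings through Add-DhcpServerv4Failover. A hot-standby sketch, where the partner name, scope and secret are examples only (check the parameter list with Get-Help before leaning on it):

#JFIN-SRV stays active; the partner holds 5% of the scope in reserve
Add-DhcpServerv4Failover -ComputerName JFIN-SRV -PartnerServer JFIN-SRV2 `
-Name "JFIN-Failover" -ScopeId 192.168.10.0 `
-ServerRole Active -ReservePercent 5 `
-MaxClientLeadTime 01:00:00 `
-AutoStateTransition $true -StateSwitchInterval 00:45:00 `
-SharedSecret "UseARealSecretHere"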

Pretty easy to configure, easy to manage. Pleasure in the little things.

Managing PowerShell 3.0 Updateable Help

***Author’s Note:  I was in a rush and didn’t properly preview the graphics for this post in my browser. If you are finding them hard to read, try using your browser’s zoom-in capability. I found that if I zoomed in a bit, the text in the graphics became legible.  Sorry. :(***

Updateable Help is one of the new features of PowerShell 3.0. Its biggest benefit is the improved accuracy of the inline help, especially across patches, service packs and module additions. While I have to admit that the inconsistencies that used to creep in were not a major issue for me personally, I can see how they would create frustration. So we’re going to look at some of the basics of using and managing help in PowerShell 3.0.

Using Help

First things first: I’m going to assume that if you’re reading this blog, you don’t need me to tell you how to get help for PowerShell cmdlets. But you will notice that on a computer that has not downloaded updated help (and out of the box, it won’t have), a normal Get-Help command returns only the basic list of switches.

You’ll also notice that the Remarks section tells you that the full help has not been downloaded. When you add the -Detailed switch to your Get-Help command, the results are not much different: each switch gets its own line, but the detail you are likely looking for is not there. Again, the Remarks section is telling you to get the updated help. Obviously, getting the updated help is a valid option, but it’s not the only option.

There is a new switch for Get-Help: -Online. Assuming that the computer from which you are running PowerShell has an internet connection, this lets you get the detailed help you need without forcing a full update of the help files. It will open a web browser session and take you to the TechNet Library page for the cmdlet you indicated.
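
For example, this opens the TechNet page for Get-Command without touching the local help files:

Get-Help Get-Command -Online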

Getting the Help onto Your Machine

That might be fine for module-specific cmdlets that I don’t use often, or while tweaking a script on a server (as opposed to writing a script in whatever environment you normally use), where you don’t really need the help all the time. However, on the machine where you DO want the help readily at hand, you really want the help files stored locally. Microsoft has provided Update-Help for this.

For the help to update properly, I’ve found that I need to make sure I am running the PowerShell console elevated (right-click, Run as administrator).

Once I’ve done that, I can keep it really simple with the cmdlet Update-Help.

If you look at my results you’ll notice a couple of things.

  1. I didn’t run this elevated, so some of the help files didn’t update. Easily fixed: open a PowerShell console in the correct context and that goes away.
  2. There were a couple of other errors about DTD files and incorrect URIs. Those have to do with the help files not being available properly from the Microsoft servers, and instances of those errors should drop over time. If you want a detailed explanation of what exactly is happening, read this: . It explains it all quite well. So we’ll ignore those for the moment.

There are additional switches for the Update-Help cmdlet that give you more control over what is going on. For instance, the -Module switch allows you to update the help only for a specific module. Take the time to explore those switches and see which ones may be of use to you.

How Do I Update Help If My Server Doesn't Have Access to the Internet?

That was the first question that popped into my head when I first came across this idea of Updateable Help. The solution is reasonably straightforward and done in four steps.

  1. From a machine with internet access (and the appropriate modules you want the help for) run Update-Help.
  2. From the same machine run the Save-Help cmdlet. Save-Help will copy the help files to a directory of your choosing. Much like Update-Help, you can use a -Module switch to save the help files only for a specific module. The main parameter you need to provide is the parent directory where you want the files saved. In the example below I only saved the help for the Hyper-V module.
  3. Somehow transport the directory with all the help files to the computer(s) that don’t have internet access. Network share, portable drive of some sort, whatever works for you really.
  4. On the isolated machine, run Update-Help -SourcePath <path to folder that holds the help files> -Module <name(s) of modules that have help files in that directory>

You can see in the screen shot below how steps 1-3 work. You will notice that I created a folder to hold the help files, and then ran Save-Help only for the Hyper-V module (for the sake of simplicity). I then displayed what files were put in the destination path.

Once I had done that, I could transfer the .cab and .xml files to the remote machine. In this instance I created a folder called “HyperVHelp” on the destination machine and put the files in that folder. Then I ran Update-Help. You will note that I used the -Module switch to keep it simple. Had I not done that, my screen would have been filled with red as it tried and failed to update the help for all known modules. The Hyper-V help would still update, but all the error messages are a bit off-putting. So I only updated the module I had help files for.

I ran the Update-Help command a second time, just to show you what happens if I try to update help files that are already up to date. As you can see, nothing major happens, so there are no big concerns in that space.
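
To pull the offline process together in one place, here’s the whole round trip for my Hyper-V example (C:\PSHelp is just the folder I chose on the connected machine):

#On the machine with internet access
Update-Help -Module Hyper-V
Save-Help -Module Hyper-V -DestinationPath C:\PSHelp
#Copy the resulting .cab and .xml files to the isolated machine, then:
Update-Help -Module Hyper-V -SourcePath C:\HyperVHelp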


As you can see, the process for updating and managing PowerShell’s help is not terribly difficult. With a bit of practice and planning you should be able to develop and implement a plan for keeping your PowerShell inline help current and useful.

PowerShell 3.0 Workflow Basics

There are lots of new features and functionality in PowerShell 3.0; a brief overview of them can be found here:

One of the features that I find most exciting is the new scripting construct, Workflow. If you are familiar with creating a custom Function in PowerShell, then you know most of what you need to know about creating a workflow.

So what’s the difference? When you create a new command using “Workflow” instead of “Function” in a script, the new command has access to a built-in library of common management parameters, such as PSComputerName, which enable multi-computer management scenarios. This makes your scripts more powerful, and they can be developed in a much simpler fashion.

Below you can see a very basic Workflow that I’ve created and saved on a member server in a domain.

As you can see, the Workflow itself is very basic, doing nothing more than gathering information about the PowerShell environment and writing it out to a text file that will sit on the root of the C: drive. In reality this would need a lot more work, as it assumes that PowerShell is running, which is not a safe assumption if you’re executing this on remote machines. But ignore that, and assume that you do have a PowerShell console open on your remote machine.
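
If the screenshot is hard to read, here’s a minimal reconstruction of the idea (I’ve wrapped the body in InlineScript, as workflows restrict how some ordinary cmdlets and expressions can be used):

Workflow Get-PowerInfo {
InlineScript {
#Gather some basic information about the PowerShell environment
#and write it out to a text file on the root of C:
$PSVersionTable | Out-String | Out-File -FilePath C:\PowerInfo.txt
}
}

Run on its own, Get-PowerInfo executes locally; the interesting part is the built-in parameters, which we’ll get to below.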

I could have done the exact same thing with “Function” and it would work just fine on the local machine on which I ran the script.

However, a look at the help for Get-PowerInfo reveals a host of parameters that are built into the workflow.

Of immediate interest is -PSComputerName. This parameter allows the code to run on remote machines (running PowerShell 3.0 and with remoting enabled) without me having to add code to the workflow to manage remote sessions, import modules and so on.

You can see that my workflow executes successfully, and when I go look at the C: drive on jfin-dc, the output text file has been created.

Obviously, this is a very simple use (and not all that practical in and of itself), but hopefully you can see that there is a great deal of power in the new Workflow construct in PowerShell 3.0. As a scripting administrator, I can see how this will make my remote management scripts more powerful and easier to write.