Wrangling Windows 10 File Associations – SetUserFTA

As of late, I’ve been almost exclusively focused on Windows 10 deployment, and depending on what you’re trying to do, the process can be complex. There’s a wealth of knowledge out there already, but truthfully, getting a machine imaged or upgraded is the easy part. As someone with a bit of OCD, the true challenge is making sure the user’s first experience on the machine is perfect! In this case, that means making that experience as close to Windows 7 as possible 😉

With Windows 10, Microsoft has moved to develop an OS that’s more secure out of the box. That’s great, and hopefully will make our lives easier in the long run. The trade-off is that some tasks that were once pretty easy are now painfully difficult. One of the worst? File associations.

I know, I know, someone is going to tell me ‘just let the user choose!’ For anyone who hasn’t done it recently, making another web browser the default or changing a file association in Windows 10 takes 4-6 clicks and several dialog boxes popping up. If that isn’t training my users to just click ‘Yes’ on anything that pops up on their screen, I don’t know what is! I’m far less concerned with an OS that lets me configure things silently, and far more with one that trains my users into bad behavior.

Official documentation on setting file associations has existed for quite some time, but I’ve seen varying success depending on the version of Windows 10 as well as the customer environment I’ve tried it in. Microsoft, for their part, has *finally* (it took 2.5 years) released a blog post on configuring default application associations that should be more broadly applicable. You can read that here, and then cry for a few moments as you understand how difficult this is to accomplish for the following scenarios:

  • I only want to set associations once, not continuously
  • I only want to set associations on a per-user basis
  • I want to set associations with a roaming profile (or UE-V) <- This one was mentioned to me on Twitter, I haven’t had time yet to confirm that MSFT’s blog post does/doesn’t work properly in this scenario, but it wouldn’t surprise me.
  • I want to set associations without having to set *all* file associations at once
  • I want to set associations without losing hours of my life to reading a blog post and fiddling with XML files, DISM, and GPOs (Don’t get me wrong, these are a few of my favorite things, just not in this context)

Luckily, heroes like Christoph Kolbicz exist (His blog is here)! He recently posted about a new tool to write user-specific registry keys that set and keep Windows 10 file-type associations painlessly. I was too curious, so I put his new app, SetUserFTA to the test! (Note: I’m using v1.1.1 – the current version at the time of this post)

The easy way – Setting Adobe Reader as the default PDF Viewer

  1. Install Adobe Acrobat DC (Older versions recently went out of support, so make sure you’re on DC for the latest security fixes 🙂 )
  2. Manually associate Adobe Acrobat DC with PDFs. This can be done several ways, but I’m showing the process from the ‘Default App Settings’ page in the ‘Settings’ area of Windows 10.
  3. Look up the Program ID (also referred to in some places as the AppID) for Adobe Acrobat DC. The ID is the third item returned from the command below. This seems to change from version to version on some apps, so make sure you check your own system before assuming that the Program ID I’m using will work for your version of Adobe Acrobat DC.
    1.  For your copy/paste pleasure:
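A sketch of that lookup – this reads the per-user UserChoice key, which is where Windows 10 records the current association (depending on how you query, the ProgId may not be the third property listed, so eyeball the output):

```powershell
# Read the current user's association details for .pdf from the UserChoice registry key
Get-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.pdf\UserChoice"
```

For Acrobat DC, the ProgId typically comes back as something like ‘AcroExch.Document.DC’ – but as noted above, check your own system first.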
  4. Run Christoph’s app with the file extension you want to associate, and the Program ID you’ve just looked up, on any computer you want to update associations on.
    1. More copy/paste code:
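Assuming the ProgID turned out to be ‘AcroExch.Document.DC’ (verify yours first, per step 3), the call is just:

```
SetUserFTA.exe .pdf AcroExch.Document.DC
```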
  5. Watch your screen flash for a moment, then bask in the glory that is a silently set, per-user, file association!

I love this for so many reasons. Focusing on desktop deployment and user experience, I can use this to ensure that when a new app is installed, the file association is set at the same time, without any user intervention! Here’s a quick and dirty PowerShell script that installs Adobe Acrobat DC and sets the PDF file association in one fell swoop! Note – this doesn’t accept the Adobe Reader EULA or anything like that, this post is about setting file associations, not installing Adobe Reader end to end 🙂
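Something along these lines – a minimal sketch, where the installer path and silent switches are placeholders for your own Adobe source files, and SetUserFTA.exe is assumed to sit alongside the script:

```powershell
# Install Adobe Acrobat Reader DC silently (installer filename and switches are
# placeholders -- swap in your own source files and Adobe's documented switches)
Start-Process -FilePath "$PSScriptRoot\AcroRdrDC_Setup.exe" -ArgumentList "/sAll /rs" -Wait

# Then set the per-user PDF association (ProgID from my system -- verify yours)
Start-Process -FilePath "$PSScriptRoot\SetUserFTA.exe" -ArgumentList ".pdf AcroExch.Document.DC" -Wait
```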

This is great! No DISM, no copying/editing of XML, and the end result is achievable in 5 minutes instead of many, many more.

I’m not even going to dive into the Microsoft-endorsed method at this point. I’ve done it before, it works, but it allows little flexibility. Microsoft’s methods may work for you, and if so, keep at it! Once you get tired of that, SetUserFTA is a great alternative.

It looks like Christoph is using this for Citrix environments, which I can imagine must have been hell to manage with MSFT’s official guidance – which is why he spent the time building all this 🙂 SetUserFTA ought to come in handy for anyone doing app deployment, Citrix or VDI environment management, or who just doesn’t want to deal with Microsoft’s own methods of setting up app associations. Give it a try and let me know how it goes via Twitter @systemcentersyn

What’s next? I am hoping Christoph continues development and lets me set protocol associations, so web browser selection and deployment becomes just as easy 😉 Also, here’s to hoping that anti-virus apps stop flagging his .exe as a virus, as that held me up a few times as I authored this post. AV heuristics can be both a curse and a blessing. Update: Christoph has re-uploaded v1.1.1 with updated code that my anti-virus finds less suspicious. No issues now!

Using this? Does it work for you? Tweet @_kolbicz and be sure to donate as thanks for his efforts! Again, the blog post for instructions and download of the SetUserFTA app is here.

Please don’t set up Intune Hybrid. Just don’t.

Note: This information is relevant as of June 2017 and may change moving forward. The cloud moves quickly and blog posts have a tendency to linger for quite some time 😊

In my role as an Enterprise Architect, I see lots of different environment configurations, and often have to work within the confines of what I’ve been given. Customers don’t always want to re-architect their environments from scratch, even if what they have is far from best practice. Sometimes ‘good enough’ is exactly that.

As of late, I’ve been doing lots of work with Microsoft Intune, a rather comprehensive platform of services centered on configuration management of devices, along with complementary services for security and compliance. The aspect of Intune my customers use most is Mobile Device Management (MDM), which, as of the latest round of updates, has a feature set comparable to MobileIron or AirWatch, making it a compelling alternative for centralizing licensing, billing, and management of devices in an organization.

So far, all of these customers have also had System Center Configuration Manager (SCCM or Configuration Manager) for managing their existing computers on-premise. The natural thought then, is to integrate Configuration Manager with Intune in what is called a ‘hybrid’ configuration, so that mobile devices and on-premise devices can all be managed from the same console. I’m here to tell you why this is a bad idea, even though for several years, Microsoft advocated for it.

Before we begin, you don’t have to take my word for any of this! Microsoft has updated their own documentation here, advising customers to look towards Intune Standalone for future implementations: https://docs.microsoft.com/en-us/sccm/mdm/understand/choose-between-standalone-intune-and-hybrid-mobile-device-management

First, there are different development cycles for Intune (a cloud-based service) and SCCM (an on-premise system). Intune can function in a standalone configuration, where all configuration is done via the Intune portal in Azure, or in a hybrid configuration, where it is linked with SCCM and all configuration is done via the SCCM console. This leads to situations where features and functionality are available in Intune Standalone but cannot be configured via Intune Hybrid, simply because the SCCM console does not expose them.

Here’s a quick example. Let’s look at the documentation on iOS device settings for Intune Standalone, and Intune Hybrid.

Intune Standalone: https://docs.microsoft.com/en-us/intune/device-restrictions-ios

Intune Hybrid: https://docs.microsoft.com/en-us/sccm/mdm/deploy-use/create-configuration-items-for-ios-and-mac-os-x-devices-managed-without-the-client

Even taking a single section – the ‘Password’ configuration, for example – there are additional settings you can configure in Intune Standalone that are not possible to configure in Intune Hybrid. Whether these settings matter to you today is one question, but knowing that Intune Standalone features will continue to evolve faster than SCCM’s, at least in regards to mobile device management, may give you an indicator of which configuration you’d rather be on if you’re hoping for new features that don’t currently exist in Intune as a platform.

Second, one of the biggest issues that I run into when implementing Intune for customers in a hybrid scenario is, well, patience. SCCM, for anyone that has worked with it, is a wonderfully capable product. Truthfully, I don’t know where many of our customers would be without it! That said, SCCM is anything but responsive to a lot of actions in the system. Actions are scheduled, then run on a recurring basis, and for some system actions, execution times are randomized so as not to overwhelm servers or systems. How does this apply to MDM in an Intune Hybrid scenario? Well, policies and configuration made in the SCCM console need to be replicated to Intune, and then pushed out to clients. The configuration alone can be complex enough, but coupled with the delay and far less visibility into Intune actions than SCCM admins are used to (SCCM has extensive logging, but the connection between Intune and SCCM provides minimal information), it can be difficult to know whether the issue is you, your policy, SCCM, or Intune when things don’t work how you expect.

Consider, for a moment, Intune Conditional Access. It’s an awesome feature of Intune that blocks email from reaching a device that is deemed non-compliant in an environment. I have users in government, healthcare, and finance using this feature with great success, but getting it configured initially can be complex.

To get conditional access to work, you need to interact with 6 systems (if we’re assuming AD, Exchange, and Configuration Management via Intune/SCCM are all in Hybrid scenarios). A good bit of the complication is waiting for everything to sync not just up into the cloud-based systems, but then back down into SCCM where features can be adjusted and reported on. Keeping Intune in a Standalone configuration cuts down on a replication/sync cycle, and as an admin, well, it just feels faster. It also gives you a web-based interface for configuring your MDM solution, which is likely what teams are used to if they are coming from MobileIron or AirWatch.

Finally, since Intune is part of Azure, which is where Microsoft is bringing many of their new features, Intune Standalone is an environment in which admins should become comfortable. Microsoft has said in no uncertain terms that it’s moving towards Intune as its primary device management system in the long-term (See here: http://myitforum.com/myitforumwp/2017/06/23/microsoft-building-migration-path-from-configuration-manager-to-intune/) . How Microsoft will handle complex situations like Operating System Deployment (OSD) is anyone’s guess, but as more innovation is being driven from the cloud, I believe the experience in learning how to use Microsoft’s new toolsets outweighs the ‘single pane of glass’ that was once the deciding feature in determining if Intune should be stood up in Standalone or Hybrid configurations. I’ll admit, this is probably a weak point of contention today, but I can only imagine it gains relevance in the coming months as Microsoft continues to innovate.

If you take nothing else away from my experiences, know that every Intune Standalone deployment I’ve done has gone quickly and successfully. Every Intune Hybrid deployment I’ve done has been a challenge for both me and the customer alike.

Oh? I’ve convinced you? Excellent! But wait, you’re still on Intune Hybrid and want to switch? Until recently, you’d have been out of luck. Thankfully, Microsoft now allows the MDM authority to be changed as of Configuration Manager build 1610 and Intune version 1705. I’ve done this a few times, and while it requires planning and preparation, the process is on the whole, smooth. The docs on making the change can be found here: https://docs.microsoft.com/en-us/sccm/mdm/deploy-use/change-mdm-authority

Ultimately, the choice is yours as to whether or not you implement Intune Standalone, Intune Hybrid, or move from whatever configuration you already have. Just know that after a few complex Intune Hybrid configurations, and a few simple Intune Standalone ones, I know which side I’m on 😉

Home Networking – Archer C7 with OpenWRT and the Zoom 5370

So I’ve recently relocated for work, and have moved across the States for a bit 🙂

As part of the transition, I’ve had to get a new apartment, and took the opportunity to re-work my internet stack. I had a really poor experience with my last cable modem, the Motorola SB6141 – it kept rebooting every 45 minutes or so, and after reading numerous threads full of people having similar issues, I don’t think I was the only one – so I decided to jump ship on the Motorola modems and try something new.

I found the Zoom 5370 after I had used the older model to replace my failing SB6141 at my previous residence. I don’t *need* the extra overhead (The model I just bought is 16×4 channels instead of 8×4) as my plan will likely be one of the lower service tiers to keep cost down, but I figured I’d get it anyway to future-proof my purchase 🙂

So far, the modem has been spectacular! It’s the stability I’ve come to know and love from the Zoom modems, at a price point that’s competitive. It reminds me of what I *used* to know the Motorola Surfboards for, it’s just a bit of a jump for a lot of people who had come to trust the Motorola brand. I *highly* recommend giving this a shot if you’re in the market for something new and have a supported cable provider (Comcast or Time Warner I can personally vouch for, not sure who else this works with).

Beyond that, I went with a router that I’ve had a great experience with before, the TP-Link Archer C7. While the last time I bought and used this router I kept the stock firmware on it, I decided that this time I was going to flash it with OpenWRT, despite the fact that there is a known slowdown from the WAN -> LAN ports. I followed these instructions, which were spot on, though I used a different TFTP server since I was doing this from a Windows box.

After the first flash attempt, the router never successfully rebooted. I saw the successful transfer of the OpenWRT firmware, but then things kinda stopped. It would reboot, but instead of coming back up fully, I’d get only the power light and the light for the single ethernet port my cable was plugged into. Thankfully, even though it looked like I had just bricked my brand new router, the TFTP firmware update method worked like a charm, and I was able to use that same process to get the stock TP-Link firmware back up and running :). I was disappointed that things didn’t work on the first try, but I was determined to make this work.

After a frustrating few hours, I finally got fed up with trying every stable version of OpenWRT (the most recent being Chaos Calmer 15.05.1) and even DD-WRT (I tried those just in case), and went off to download a nightly build of OpenWRT. To my amazement, it loaded and worked the very first time! Of course, I had to SSH to the router and install LuCI, the OpenWRT GUI, but otherwise the setup was flawless. Had I read the ‘notes’ in the Quickstart Guide on the OpenWRT wiki first, I would have seen that newer versions of the Archer C7 have a new flash chip that requires a nightly build of OpenWRT, instead of the most recent ‘stable’ build. Live and learn I guess 🙂

Oh, one last thing! To get 5ghz WiFi working you’ll also need to install the ath10k drivers and firmware. You can do it via SSH by typing:

opkg update && opkg install kmod-ath10k ath10k-firmware-qca988x

Or by installing the two packages – kmod-ath10k and ath10k-firmware-qca988x – from the LuCI GUI in the admin console, then rebooting.

Just thought I’d post and get some good info out there, as a Google search during my unsuccessful flashing didn’t yield any meaningful results. If your TP-Link Archer C7 isn’t rebooting properly after a flash with OpenWRT, at least you’ll know what’s up 🙂

AzureAutomation – Webhooks, jQuery, and Runbooks, oh my! (Part 2)

Alright! In the last post, we set up AzureAutomation and tested it out on our Hybrid Worker. Now, let’s create a webhook to make sure we can trigger this remotely. What’s a webhook, you might ask? Well, that’s an excellent question!



Basically, it’s a URL that acts as both the location of a resource and the token to use it. Think of it like a trigger on a gun – once you’ve got your finger on it, you can use whatever is behind it! Why is that useful in this case? Well, a webhook will let us use our Azure Automation runbooks from another source. That source could be a PowerShell workflow, another Azure Automation runbook, a C# app, or a website! Cool stuff.
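For instance, once you have a webhook URL, firing the runbook from PowerShell is essentially a one-liner (the URL below is a placeholder for the one Azure generates for you):

```powershell
# Placeholder URL -- use the webhook URL Azure shows you at creation time
$webhook = "https://s1events.azure-automation.net/webhooks?token=..."
Invoke-RestMethod -Uri $webhook -Method Post
```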

Anything we do below assumes you’ve already got OMS and Azure setup, and have the Azure Automation components configured. If you need help doing that, go back and check out part 1 of this guide!

So we’re in Azure, I’ve headed to the ‘Runbooks’ area of my automation account from last post, and found the same ‘Test-HybridWorker’ runbook that I was using before. I’m attempting to use the scientific method and not change too much at once 😉


If I click the ‘Webhooks’ tile, we get a new page where we can add a new Webhook. You can have multiple Webhooks for a single runbook, but we just need one for now. Click that ‘Add Webhook’ button!


Let’s fill out some info…


Be sure to save the webhook URL someplace safe! You only see it once, and it’s super important. It acts as both authentication and a pointer to running your runbook, so keep it safe, but accessible, in the event that you need it again.

We’ll also want to make sure that we set this webhook to run on our Hybrid Worker again. We can always come back and change this later, but we’re here, so we may as well set it!


After hitting ‘OK’, then ‘Create’, you’ll be brought back to your Webhooks screen, and you’ll see the fruits of your labor! Clicking on your Webhook, you can get the details and modify parameters and settings if need be. If you can’t find your Webhook URL because you didn’t listen to me above, you’re in trouble! Time to create a new Webhook and update your code 🙂


Beautiful! We’ve got a webhook, but what the heck do we do with it? Well, it’s still just a trigger to our basic runbook that writes a file to the C drive… but let’s make sure it works how we want and triggers our runbook appropriately. We’ll need to switch gears a bit and build a simple webpage to try this out 🙂

Here’s some code you can drop in a folder in your C:\inetpub folder. Just install IIS with default settings and that should be all you need here.
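If you’d rather hand-roll it than download, a minimal sketch of the same idea is below – one page, one button, and a jQuery click handler that POSTs to your webhook (the URL is a placeholder, and I’m pulling jQuery from a CDN here instead of a local JS folder):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Runbook Trigger</title>
  <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
</head>
<body>
  <button id="runButton">Run my runbook!</button>
  <script>
    // Placeholder -- paste in the webhook URL you saved from the Azure portal
    var webhookUrl = "https://s1events.azure-automation.net/webhooks?token=...";

    $("#runButton").on("click", function () {
      // POST to the webhook; Azure queues the runbook job
      $.post(webhookUrl).always(function () {
        alert("Webhook called -- check Azure for the job!");
      });
    });
  </script>
</body>
</html>
```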




You’ll want to make sure IIS is installed! It doesn’t take anything special to run this website, so the defaults are fine. I’ll update this page with a quick PowerShell one-liner to install the proper roles and features in a bit 🙂
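In the meantime, here’s the sort of one-liner I mean – on a Windows 10 client, enabling IIS via the optional-features cmdlet (on a server OS you’d use Install-WindowsFeature Web-Server instead); run it elevated:

```powershell
# Enable IIS with a basic set of features on a Windows 10 box (run as admin)
Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServerRole, IIS-WebServer, IIS-DefaultDocument, IIS-StaticContent
```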

Let’s unzip our files now…


And move it into our inetpub folder…


Perfect! Now let’s make the website in IIS, and point it to the files we’ve dropped locally.



You’ll see I’ve changed the port to 8080, that’s just because I have other things running on port 80. Run it wherever you like!


And now let’s navigate to our webpage and see how it looks…


Holy cow, it’s beautiful!

This gives us a nice HTML webpage, some CSS to make it pretty, and JavaScript, specifically jQuery, that can be called when the big button is pressed. Awesome!

Let’s dive into the code…

We’ve got a few files and folders in here:


  1. CSS – This holds our CSS files to theme the website. These have been borrowed from Bootstrap and saved us tons of time 🙂 CSS is just color, style, and layout – nothing you’ll need to change unless you want to alter the look and feel.
  2. JS – This holds our JavaScript files, and we’ll end up modifying a few things in here. This will be a good place to be 🙂
  3. index.html – This is the ‘scaffolding’ for the website that the CSS makes pretty. If we want to change any of the fields, forms, etc., then we’ll change things in here. Not much to do here either, unless you really want to extend the functionality.

So if we actually go ahead and pull up the code in aaFunctions.js, we’ll want to change the webhook URL so it matches what we got from the Azure Portal when we set up the webhook. I’m going to paste mine in, you do the same!


Alright, let’s navigate back to the page, hit refresh, and press the big button!


It says that something happened… but let’s check Azure.


Awesome! and if we check out our Hybrid Worker again…

We’ve got an updated last modified date! It worked 🙂

This is awesome. One thing I do want to point out is that if you’ve got the developer console enabled in your web browser of choice (usually hitting F12 will bring this up), it will spit out an error when you click the button to actually call the webhook URL.

As far as I can tell, this is just a ‘red herring’ of sorts, and while not desirable, it doesn’t impact functionality here. I’m going to look into trapping this/eliminating it in a future post.

That’s all for now – Part 3 coming soon!

AzureAutomation – Webhooks, jQuery, and Runbooks, oh my! (Part 1)

So this post is a bit different than my previous ones, as this is the first to not really be related to System Center Service Manager in a long time. That’s because, well, my focus will likely be shifting off of SCSM in the next few months and more towards SCCM and Microsoft’s new Azure offerings like EMS and OMS. A career change will do that to ya 😉

Anyway, as part of my recent exploration into the Azure-sphere, I’ve found a love for OMS. I’ve been looking for a compelling replacement for Orchestrator for a long time. In order to be really useful, a new solution had to be:

  1. Lightweight (My SCORCH boxes are usually single-box, 2 Core/4GB Memory. So awesome)
  2. Simple to setup/use
  3. On-Premise
  4. Support PowerShell in all its glory!

SMA/WAP doesn’t fulfill requirements 1 or 2, and personally, I don’t think it really delivers on number 4 either. PowerShell Workflows are not exactly native PowerShell, and as I’ve yet to build something complex enough to *need* PowerShell workflows, the added complexity is just cumbersome.

Azure Automation sounded great when it first came out, but the lack of on-premise support was an issue. Once Hybrid Workers and native PowerShell (non-workflow) support came out, it was clear AzureAutomation was my new friend 🙂

So, now that we’ve got this awesome, fancy new automation platform, let’s try to do something that I’ve never been able to do with SCORCH – kick off a runbook via a URL! The XML-structured requests in SCORCH always made web browsers unhappy, so I was loving the new REST interface we’ve got with everything Azure – specifically, Azure Automation webhooks.

As I’ve done a bunch of work with the Cireson Portal lately, my knowledge of jQuery/HTML/CSS is pretty solid. I wanted to make a basic HTML website, have it take in parameters, and then run a runbook from AzureAutomation once I hit a button. That runbook should run against a local lab environment, and, in this case, would actually create a new user in my local AD environment, and email some info out on user creation. Easy? Simple? Eh, kinda.

I’m going to take this in 3 parts, so it’ll be a few different posts. I’ll link to the rest at the end!

First things first: we need to set up OMS, which is the platform for Azure Automation. Let’s hit up that OMS website and click that nice ‘Try for free’ button. Isn’t Microsoft nice!!


Well, that was easy! Let’s click the big blue ‘Get Started’ tile.


While there’s tons of functionality here, we just want the Automation for now – you can see I’ve checked the ‘Automation’ box in the lower right corner.


Once that’s installed, you’ll want to go back to that same ‘Get Started’ tile and set up our data sources. Don’t stop on the ‘Solutions’ tab this time; find the ‘Connected Sources’ tab and let’s take a peek at the ‘Attach Computers Directly’ section. That’s what we want to use! This will let us set up a local Hybrid Worker for automation. Download that agent (64-bit obviously – you’re not using 32-bit OSes in 2016, are you?) and save it someplace safe. Also, leave this page open, we’ll need that Workspace ID and Primary Key.


When you download the agent, it’ll look like any other Microsoft Monitoring Agent. But it’s not just any agent, this is the one unique to Azure Automation! You can see I’ve given it a bit more detail in the name so I can find it later if I need to 🙂

Note that this cannot be installed on a machine that already has the ‘Microsoft Monitoring Agent’ on it – something like a box monitored by SCOM or a machine with an SCSM Agent installed (Management Server). Since they all are variations on the same ‘agent’, they must be unique on each box. I haven’t dived into SCOM monitoring of my HybridWorker, but that’ll come in a later post 🙂

Oh, and one last thing. For connectivity purposes, Microsoft just says the runbook worker needs web access to Azure! Make sure ports 80 and 443 are open to *.azure.com, and you should be golden. No messy ports to deal with – I love it!

This is what I’m talking about! Let’s link this to OMS.


You might have my ID, but not my key! Muwhahaha. This comes from the page we left open above. The Key is the ‘Primary Key’ from the Connected Sources tab.


Alright! That’s it. Pretty simple, right? Our hybrid worker should be set up and connected to OMS. If you head back to the OMS portal, it should show a connected Data Source now on that pretty blue tile:

Yess!!!!!!!!!! Now, in a lot of ways, that’s the easy part. OMS is just a *part* of the equation. We now need to link that OMS workspace to our actual Azure subscription, so we can manage Azure Automation from our Azure Portal. Got it?

OMS + Azure = Azure Automation!

I’m assuming you already have an Azure subscription, and if not, well, it’s easy and there’s tons of posts on it 🙂 We’re going to want to login to our Azure Portal (The new Azure Portal aka ARM), and search for the Automation content pane.


I hope you clicked on that little ‘star’ icon above so it got pinned to your left hand navigation. We’re going to be using this a lot 🙂 Now, let’s open the pane and hit the ‘Add’ button, then click ‘Sign Up’ on the right hand side.


This is going to do some interesting things if you don’t have an existing Azure subscription linked to this account, but ultimately you’ll get dropped back to the automation screen if you need to do anything here. Don’t panic! You’re on track 🙂


Phew! Billing is sorted, back to Automation. Let’s create a unique name and resource group for this bad boy. Think of resource groups as logical containers that keep resources organized and separated – between apps, environments, or even customers. Azure is multi-tenant, so you’ll see a *lot* of separation built in. For smaller customers, or since we’re just doing an example here, we need not worry too much; we just need one resource group to assign resources to.


Here’s me making a resource group! Pretty easy 🙂



Awesome! We’ve now got our Automation Account set up. This configuration thus far has been *all* on the Azure side. Don’t you remember that equation from above? OMS + Azure = Azure Automation! We’ve got Azure all set up, and OMS all set up, now let’s link them so we get access to that Hybrid Worker.


You can see we’ve clicked on my Automation Account, clicked on the ‘Hybrid Worker Groups’ tile, and have clicked on the little ‘Configure’ icon at the top. It gives us awesome instructions on how to do this, but again, since we’re dealing with both Azure and OMS, it’s still a bit confusing. Basically, we did all the hard stuff before; this is just going to establish the linkage between our Azure workspace and the OMS workspace we set up earlier. They don’t *have* to be linked, which is why they exist separately, but for Azure Automation, we need that linkage.

In the above screenshot, see that ‘Step 1’ section? Make sure you’ve clicked on the second bullet in there where it says ‘Login to OMS and deploy the Azure Automation solution’. It’ll bring us…


Deja vu! Let’s sign back in…


Oh! Cool! We’re linking our OMS subscription to our Azure one. We want this.


You can see that there’s a new tile here ‘Solution requires additional configuration’ for Automation. Let’s click that.


It wants to link our Automation Account to Azure! Yes, yes, we want this. Save it and don’t look back!


Bow chica wow wow! You can see our Automation tile now shows the little Azure Automation icon with our Azure Automation account name at the top. It also shows a runbook now, which is cool. I like runbooks.

Now, if you haven’t taken a break at this point, don’t do it now! We’re so close to success I can taste it. We’ve got Azure set up, we’ve got OMS set up, and we’ve got our Hybrid Worker set up. The last bit is to add this Hybrid Worker to a Hybrid Worker Group so we can use it. I know, I sound crazy, but think of it kinda like a resource within a resource group. It exists, it’s functional, but it needs to be assigned somewhere before we can use it.

Microsoft has a great post on adding a runbook worker to a Hybrid Worker Group. I’ve screenshotted the good stuff below :
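The gist of those commands, sketched from memory – the HybridRegistration module ships with the agent install, and the endpoint and token come from your Automation account in the Azure portal (the paths, names, and values below are placeholders, and parameter names have shifted between agent versions, so check Get-Help if this errors):

```powershell
# The module ships with the Microsoft Monitoring Agent (version folder will differ on your box)
cd "C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\<version>\HybridRegistration"
Import-Module .\HybridRegistration.psd1

# Name the Hybrid Worker Group whatever you like; EndPoint and Token come from the portal
Add-HybridRunbookWorker -Name "MyHybridGroup" -EndPoint "<automation-endpoint-url>" -Token "<primary-key>"
```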


Luckily it doesn’t show my entire key in this screenshot 🙂

Here’s me adding things!


Boom! The command completed successfully and if I go back to the Azure portal and refresh things…


Hybrid workers for days! You’ll see I’m using my Orchestrator box for my new Hybrid Runbook Worker. It works perfectly! It’s a Server 2012 R2 box with 1 core and 2GB of memory. Insane, right?!

Now, this looks good, this looks fine, but the proof is in the pudding. We need to do a quick test to make sure this is all working. I’ll write a quick runbook to write a file locally on the Hybrid Worker and make sure that comes through!

To make a new runbook, easy enough, we just click the ‘Add a runbook’ button at the top there. You’ll see it opens up the ‘Add Runbook’ pane, where we can select ‘Quick Create.’

Let’s fill in a few things…

Note: PowerShell is not the same as PowerShell Workflow! If you don’t know the difference, select ‘PowerShell.’ If you do, then select whichever one you need 🙂


We’ve got a blank space!!! Wasn’t that a song? Right, let’s fill it with some basic stuff to just write to a local file on the Hybrid Worker.
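Something like this will do – a couple of lines that drop a timestamp into a file on the worker (the path is arbitrary; pick your own):

```powershell
# Write a timestamp to a local file so we can prove the runbook ran on the Hybrid Worker
$path = "C:\Temp\HybridWorkerTest.txt"
New-Item -ItemType Directory -Path (Split-Path $path) -Force | Out-Null
Get-Date | Out-File -FilePath $path
```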

That wasn’t too hard now, was it 🙂


You’ll need to hit the ‘Save’ button first. Once you do that, you’ll see it greyed out, and then you’ll need to ‘Publish’ the runbook to actually use it. It’s functionality that is pretty similar to what Orchestrator used to do actually…


Done and published! Don’t mind me, I’ve made a few other runbooks here too… those come later 😉


Let’s select the runbook, and hit that ‘Start’ button. Once we hit it, we’ll get the option to input any input parameters (there aren’t any in our case) but more importantly, specify if we want to run this just in Azure, or on a Hybrid Worker. Let’s pick the Hybrid Worker!


Once we hit ‘OK’, we’re returned to the Job Summary page, where we can wait for it to finish. Don’t blink! It happens quickly.
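If you’d rather skip the portal clicks, the same kick-off can be done from PowerShell with the AzureRM Automation cmdlets; a sketch, with all names as placeholders:

```powershell
# Assumes the AzureRM.Automation module and an authenticated session
# (Login-AzureRmAccount). All names below are placeholders.
Start-AzureRmAutomationRunbook -ResourceGroupName 'MyResourceGroup' `
                               -AutomationAccountName 'MyAutomationAccount' `
                               -Name 'Test-HybridWorker' `
                               -RunOn 'MyHybridGroup'  # omit -RunOn to run in Azure instead
```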


Yes, I know it’s a different Job number. The runbook ran too fast and I had to start a new job to take another screenshot 🙂


Beautiful! We’re in business! You’ve got Azure Automation on a Hybrid Worker in your environment now.

These Hybrid Workers are awesome in that they work:

  1. Faster than Orchestrator from trigger to execution
  2. Better than Orchestrator in their fault tolerance (Hybrid Worker Groups) and logging

A few last minute notes:

  1. These Hybrid Worker ‘groups’ are exactly that – they can be groups of machines, and a request can be passed around to the first available one to load-balance. In our case we only have one, but it works just fine with a single worker in a group.
  2. If you want to use any cmdlets locally on the Hybrid Worker, make sure they are installed by you! Azure Automation won’t do any of that part for you, but other tools in the MSFT toolkit will! (Think DSC 🙂 )

That’s all for now, check back in just a bit for the next two posts on making the real magic happen!

Update: Part 2 is now live!

Monitoring an SCSM Connector – Better than guessing!

First post in quite a while, but really I’ve been amassing tons of content to post. Hopefully this gets busier in the near future 🙂

I’ve been doing a *lot* of work with PowerShell automation and Service Manager. So much I daresay it’s become a second language. It’s even crept into my dreams! Seriously. I woke up the other morning with an idea for a new script and I spent the morning hours from 2AM until 9AM writing it.

As part of a user-onboarding script I’ve been working on lately, there was the need to ensure that the user object was synched back into SCSM before proceeding any further with the onboarding. You would *think* it would be easy to monitor for such things, but alas! The SMLets deities were not so kind.

So, without status information, I was stuck. I was able to sleep the script for 10 minutes and wait for the connector to finish, but as the customer environment grows, that setting might no longer work. Not to mention at the moment, it only takes about 5 minutes for the connector to run, so we’re wasting time!

Then I had the genius idea of monitoring event logs for the final status message in a connector workflow and just waiting for that! As it turns out, this method isn’t *totally* reliable and Get-WinEvent was being a pain, so I never actually got this to work how I had hoped. Not to mention, this is a bit of a roundabout way to monitor for things – I don’t love watching event logs, and usually I consider that a last-ditch effort.

Enter the SCSM SDK! Now, I’ve spent *all* of my automation life using SMLets and the Service Manager Commandlets, so the SDK was foreign territory for me. Even worse, any examples I could find were SDK + C#, not SDK + Powershell. I tried variations of code on and off for about 3 days, and kept getting lost. By day 2, my hair was thinning and wrinkles were forming on my forehead. I thought that I had been bested.

Enter my coworkers! Cireson happens to have some amazing people on board, several of whom know the SCSM SDK intimately. Allen Anderson, a friend of mine and coworker on the Consulting team, has been doing some work with the SDK and PowerShell, and offered to help me out. Without him, this script would have not been possible – thanks Allen! This is the same guy responsible for a few of our apps, as well as some amazing custom solutions for customers. He’s a consultant and a developer? It’s like the best of both worlds.

Anyway, Allen gave me a bunch of help, and with some of his code, and some of my code, I came up with this script. It watches a connector, waits for it to sync, and then tests for an object at the end of the loop to make sure it exists in SCSM. In my case, I’m waiting for a user to sync to SCSM via AD, so you’ll see me test for a user object.

The code is ugly, and there’s lots of room for improvement, but it works! Hopefully it helps someone else who is onboarding users with SCSM 🙂
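The general shape of the wait-and-test loop looks something like this. To keep it self-contained I’ve sketched it with SMLets rather than the SDK calls from the actual script, and the class name is the standard AD user mapping – treat the sample account name and timeout as placeholders:

```powershell
# Sketch of the 'wait for the connector, then test for the object' pattern.
Import-Module SMLets

$samAccountName = 'jdoe'             # example user we expect the AD connector to sync
$deadline = (Get-Date).AddMinutes(15)

$userClass = Get-SCSMClass -Name 'Microsoft.AD.User$'
$user = $null

while (-not $user -and (Get-Date) -lt $deadline) {
    $user = Get-SCSMObject -Class $userClass -Filter "UserName -eq $samAccountName"
    if (-not $user) { Start-Sleep -Seconds 30 }   # poll instead of one blind 10-minute sleep
}

if ($user) { Write-Output "User $samAccountName is in SCSM - continue onboarding." }
else       { Write-Warning "Timed out waiting for $samAccountName to sync." }
```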

Note: you’ll see some slightly weird spacing where I commented out a separate powershell invoke. See the update below.


PS. Here’s a link to the second write-up of this for my company blog. Just a little different flair 😉

Update: Here’s the code, wrapped in a PowerShell shell. If you’re dropping this into Orchestrator, you can use this as a literal ‘drop in’ inside of a .Net activity 🙂


Find Users on Multiple Domains

It’s been too long! I miss blogging. Alas, sometimes life gets busy and customers get demanding!

It’s been a crazy few months, and I’ve got more stuff to blog than I ever thought was remotely possible, but here’s a really cool one I wanted to share!

I’ve been helping my coworker, Seth, design a user-onboarding script for a customer. They have 4 domains with trust relationships between them, so I can search from anywhere. That said, I can’t cascade my search from the top down, and I’m not sure why. Apparently Get-ADUser isn’t that smart!

That said, I made a script to search for a user object in an array of domains, and then drop out of the loop once it finds the username. It’s pretty nifty, and you can do creative bits to read in multiple domains (statically coded, CSV, XML, text, etc.).

I’ll let the code do the talking 🙂
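The core of the idea is a simple loop over the domain list; here’s a minimal sketch (domain names and the username are examples, and it assumes the ActiveDirectory module plus trusts that make each domain reachable):

```powershell
# Search a list of trusted domains for a user, stopping at the first hit.
Import-Module ActiveDirectory

$domains = @('corp.contoso.com','emea.contoso.com','apac.contoso.com','lab.contoso.com')
$sam     = 'jdoe'
$found   = $null

foreach ($domain in $domains) {
    # -Server points the query at a DC in that specific domain
    $found = Get-ADUser -Server $domain -Filter "SamAccountName -eq '$sam'" -ErrorAction SilentlyContinue
    if ($found) {
        Write-Output "Found $sam in $domain"
        break   # drop out of the loop on the first match
    }
}
```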



Reporting on AD Lockouts via PowerShell

So I’m behind on posts, but this one was just too fun to pass up!

I was recently out and about, doing some SCSM training at a customer site up in the frosty North of Canada! In between consulting, I got the chance to wander around town, eat some delicious food, and make some amazing friends, but perhaps the coolest part of the trip was something that you’ll hear PowerShell gurus talk about again and again: I became a ‘toolmaker’ for my customer.

I overheard a conversation about a report they get that covers AD ‘lockout’ events. When a user mistypes their password a certain number of times (in their case, 3) it logs an event and locks the account for a period of time before reinstating access. They had a separate program that monitored these events and then dumped a report to PDF. Someone on their team then went through the PDF report (not sortable, since after all it is a PDF), found the unique values (not easy, for the same reason), looked up e-mail addresses, and sent out an email to users saying something like, “Hey, we saw your account get locked out. Was this you? If not, please let us know so we can do something about it.”

As I was listening, all I could think of was that it would be a pretty simple PowerShell script to hit the DCs, look for those events, add them to an array, parse them, and then do whatever was necessary with the resulting information. As it turns out, 3 hours of tinker-time later, I had a beautiful, tested script under 200 lines of code that worked wonders.


That script gets the lockouts, adds them to an array, parses them, and then outputs the information to CSV if desired, as well as e-mailing the users using preset HTML templates with variable replacement. I thought the templates were a nice touch instead of using some PowerShell-generated HTML 🙂
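The gathering-and-parsing half of that looks roughly like this. It’s a hedged sketch, not the actual script – the lookback window and output path are examples, and it assumes the ActiveDirectory module plus rights to read the Security log on each DC (event ID 4740 is the account-lockout event):

```powershell
# Pull account-lockout events (ID 4740) from each domain controller
# and flatten them into objects we can sort and export.
Import-Module ActiveDirectory

$lockouts = foreach ($dc in (Get-ADDomainController -Filter *)) {
    Get-WinEvent -ComputerName $dc.HostName -FilterHashtable @{
        LogName = 'Security'; Id = 4740; StartTime = (Get-Date).AddDays(-1)
    } -ErrorAction SilentlyContinue | ForEach-Object {
        [pscustomobject]@{
            User   = $_.Properties[0].Value   # locked-out account name
            Source = $_.Properties[1].Value   # machine the bad attempts came from
            DC     = $dc.HostName
            Time   = $_.TimeCreated
        }
    }
}

# Unique users only - the part the PDF report made painful
$lockouts | Sort-Object User -Unique | Export-Csv C:\Reports\Lockouts.csv -NoTypeInformation
```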

I’m linking to BitBucket because the code has been changing too rapidly to post, but it’s pretty self explanatory. I tried to comment the code so as to give the customer a chance to download it themselves and play with it. I’m trying to create more toolmakers!

I could see this being useful in a lot of environments, so I have shared it with the world. Those of you who are looking at this saying, “Why not use SCOM?”, well, you could use SCOM, but in this case the SCOM environment was run by a different team and politics being what they are, as well as the SCOM project being in its infancy, that wasn’t an option. This is a non-SCOM option to do some monitoring and have some fun!

Hopefully people find this useful and can contribute back! Let me know if anyone has improvements or modifications to make it better – I’m going to try to start to actually use some of the collaborative features of BitBucket 🙂



PS. Yes, I know I left some variables in my script. I’ll clean them up later 🙂 – My e-mail address isn’t that hard to find anyway :p

Compacting all your Hyper-V Disks in one script

So I’ve got a good size lab that I run off of my laptop. It’s about 20 Hyper-V VMs, all running something or other System Center, or just acting as a generic client computer to give my SCCM/SCSM environment some actual real data to work with. It’s awesome to have handy (when it isn’t broken!) but it takes up a lot of space, even on my 500GB SSD.

I had a dream last night about SMA and decided it was time to stand up an SMA server in my lab environment. So, this morning, I woke up nice and fresh and logged in to my laptop ready to create a new VM, only to be stopped mid-creation when I ran out of space. Ugh.

Now I know that I *should* have tons of space left, but that it’s all eaten up from when I was installing the entire lab months ago. I’d drag ISOs around and copy files into VMs, all of which takes up space, and then that space was never reclaimed (I’m using dynamic disk sizing in Hyper-V). I should have just mounted all those ISOs over the network, but I’m not that smart before a few cups of tea.

Anyway, as I poked around the Hyper-V management interface, I realized that it would be way more clicks than I was interested in doing this early in the morning. So, in another instance of working smart, not hard, I decided to craft a PowerShell script that would do it all for me!

You can find the very latest version here!

Here’s the whole code, though it may be out of date compared to what you find at the link above. Just copy it into the PowerShell ISE or into a PowerShell script file (.ps1), make sure your execution level is set to allow it, and then let it ride!
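The gist of what the script does is below – a trimmed-down sketch rather than the full version at the link, so treat it as illustrative. Note the heavy-handed assumption that it’s fine to shut your VMs down (compaction needs them off); that’s true in my lab, maybe not in yours!

```powershell
# Compact every dynamic VHD/VHDX attached to every VM on this host.
Import-Module Hyper-V

foreach ($vm in Get-VM) {
    if ($vm.State -eq 'Running') {
        $vm | Stop-VM -Force          # graceful shutdown via integration services
    }
    foreach ($disk in Get-VMHardDiskDrive -VMName $vm.Name) {
        $vhd = Get-VHD -Path $disk.Path
        if ($vhd.VhdType -eq 'Dynamic') {
            # Full optimization requires the disk mounted read-only
            Mount-VHD -Path $disk.Path -ReadOnly
            Optimize-VHD -Path $disk.Path -Mode Full   # reclaim zeroed space
            Dismount-VHD -Path $disk.Path
        }
    }
}
```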


This will get your VMs, do some nifty work on them, and reclaim your space via disk compaction!

Hopefully it helps someone out there save some time 🙂

Max script block size when passing to powershell.exe or invoke-command

This. This may be the most important piece of information I ever contribute to anyone automating Service Manager (SCSM).

If you’ve ever worked with Orchestrator, and specifically Orchestrator and SMLets, you know what a pain in the butt it is. No, seriously. Orchestrator (Also known as SCORCH or ORCH) is great in a lot of ways but miserable in others. For me, right now, it has a big flaw – working with PowerShell.

If you use the built-in “.Net Activity” to run a PowerShell script, it only runs in PowerShell version 2. No problem you say! Several people have written tutorials on how to drop from PowerShell 2 into PowerShell 3. Those examples are numerous, and are an excellent resource (See here, here, here, here, and many others).

There’s even a separate Orchestrator Integration Pack that allows you to run PS scripts easier in SCORCH. Cool beans.

Here’s where it gets ugly. Let’s say, I’m developing a script in the ISE because it’s awesome. That development is happening on a box that has PowerShell 3. PowerShell 3 has several cool functions in it that I happen to rely on (I know in many cases there are workarounds using PS2, but that’s not always the case!) and so when I paste the entire script into SCORCH and try to run it (like all those examples above tell us to do), I get errors galore.

It’s not that I’m doing anything different from the links above. Nope, not a thing. I’m just passing the entire script to a powershell.exe scriptblock (Or script block, I’ve seen it spelled both ways) or to invoke-command. That will, in theory, drop the entire script into PowerShell 3 land and the script can then proceed on its merry way.

Specifically, I get lovely errors like:

“System.Management.Automation.ApplicationFailedException: Program ‘powershell.exe’ failed to execute: The filename or extension is too long”


Such errors have been rectified by splitting up script blocks into smaller parts that can be passed on, and ensuring that only the absolutely necessary items are passed to PowerShell or invoke-command. This then gets more complex, as you’re making multiple calls and passing multiple variables in and out. This takes more time, and good luck passing any objects in or out; they get deserialized in that process and lose all access to any methods the original object had. So now we’re talking about passing strings between sessions, calling for objects inside the new PS3 session only to recreate the object we had in our existing PS2 session, just so we can run a method on it. This adds complexity, time, and lots of other not-fun things to our automation.

It turns out that both powershell.exe and invoke-command have a limit on the size of the scriptblock. It’s been something that my co-workers and some of our implementation partners have run up against again, and again.

The size limit we kept hitting was one that we had only guessed at, one that was unknown, one that we tackled blindly… UNTIL TODAY.

There is a 12190 byte MAX limit of any script block passed to powershell.exe or invoke-command. 

The easiest way to see how big your block is, is to just copy it into Notepad++ and look at the ‘length’ value at the bottom of the editor window. That tells you the byte size of the resulting file (and therefore text), and as long as your script is less than 12190 bytes, it will be passed along with no errors to whatever you like.
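If you’d rather not round-trip through Notepad++, you can measure it in PowerShell itself. A quick sketch (the path is just an example, and I’m assuming the script file is in a UTF-8-compatible encoding):

```powershell
# Measure a script's size in bytes before handing it to powershell.exe
# or Invoke-Command. 12190 is the observed ceiling from this post.
$script = Get-Content -Path 'C:\Scripts\MyRunbookBody.ps1' -Raw
$bytes  = [System.Text.Encoding]::UTF8.GetByteCount($script)

if ($bytes -gt 12190) {
    Write-Warning "Script is $bytes bytes - over the 12190-byte limit, split it up!"
} else {
    Write-Output "Script is $bytes bytes - safe to pass along."
}
```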

Go over that limit and you’ll be cast into a world of uncertainty and doubt where your ability to code succinctly is called into question!

And now you know 🙂