Home Networking – Archer C7 with OpenWRT and the Zoom 5370

So I’ve recently relocated for work, and have moved across the States for a bit 🙂

As part of the transition, I’ve had to get a new apartment, and took the opportunity to re-work my internet stack. I had a really poor experience with my last cable modem, the Motorola SB6141 – it kept rebooting every 45 minutes or so, and after reading numerous threads full of people having similar issues, I don’t think I was the only one – so I decided to jump ship on the Motorola modems and try something new.

I found the Zoom 5370 after I had used the older model to replace my failing SB6141 at my previous residence. I don’t *need* the extra overhead (The model I just bought is 16×4 channels instead of 8×4) as my plan will likely be one of the lower service tiers to keep cost down, but I figured I’d get it anyway to future-proof my purchase 🙂

So far, the modem has been spectacular! It’s the stability I’ve come to know and love from the Zoom modems, at a competitive price point. It reminds me of what I *used* to know the Motorola Surfboards for; it’s just a bit of a jump for a lot of people who had come to trust the Motorola brand. I *highly* recommend giving this a shot if you’re in the market for something new and have a supported cable provider (I can personally vouch for Comcast and Time Warner; not sure who else this works with).

Beyond that, I went with a router that I’ve had a great experience with before, the TP-Link Archer C7. While the last time I bought this I kept the stock firmware on there, I decided that this time I was going to flash it with OpenWRT and use that, despite the known slowdown between the WAN and LAN ports. I followed these instructions, which were spot on, but I used a different TFTP server since I was doing this from a Windows box.

After the first flash attempt, the router never successfully rebooted. I saw the successful transfer of the OpenWRT firmware, but then things kinda stopped. It would reboot, but instead of coming back up fully, I’d just get the power light and the light for the single ethernet port my cable was plugged into. Thankfully, even though it looked like I had just bricked my brand new router, the TFTP firmware update method worked like a charm, and I was able to use that same process to get the stock TP-Link firmware back up and running :). I was disappointed that things didn’t work on the first try, but I was determined to make this work.

After a frustrating few hours, I finally got fed up with trying every stable version of OpenWRT (the most recent is Chaos Calmer 15.05.1) and DD-WRT (I tried those just in case) and went off to download a nightly build of OpenWRT. To my amazement, it loaded and worked the very first time! Of course, I had to SSH to the router and install LuCI, the OpenWRT GUI, but otherwise the setup was flawless. Had I read the ‘notes’ in the Quickstart Guide on the OpenWRT wiki first, I would have seen that newer revisions of the Archer C7 have a new flash chip that requires a nightly build of OpenWRT instead of the most recent ‘stable’ build. Live and learn I guess 🙂

Oh, one last thing! To get 5GHz WiFi working, you’ll also need to install the ath10k drivers and firmware. You can do it via SSH by typing:

opkg update && opkg install kmod-ath10k ath10k-firmware-qca988x

Or by installing the two packages – kmod-ath10k and ath10k-firmware-qca988x – from the LuCI GUI in the admin console, then rebooting.

Just thought I’d post and get some good info out there as a google search during my unsuccessful flashing didn’t yield any meaningful results. If your TP-Link Archer C7 isn’t rebooting properly after a flash with OpenWRT, at least you’ll know what’s up 🙂

AzureAutomation – Webhooks, jQuery, and Runbooks, oh my! (Part 2)

Alright! In the last post, we set up Azure Automation and tested it out on our Hybrid Worker. Now, let’s create a webhook to make sure we can trigger this remotely. What’s a webhook, you might ask? Well, that’s an excellent question!



Basically, it’s a URL that acts as both the location of a resource and the token to use it. Think of it like a trigger on a gun: once you’ve got your finger on it, you can use whatever is behind it! Why is that useful in this case? Well, a webhook will let us use our Azure Automation runbooks from another source. That source could be a PowerShell workflow, another Azure Automation runbook, a C# app, or a website! Cool stuff.
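To give you a feel for how simple that is, here’s a minimal sketch of calling a webhook from PowerShell (the URL and token below are placeholders, not a real webhook):

```powershell
# Trigger an Azure Automation runbook via its webhook (placeholder URL/token).
# The token embedded in the URL is the only authentication needed.
$webhookUri = 'https://s1events.azure-automation.net/webhooks?token=REPLACE_ME'
$response = Invoke-RestMethod -Uri $webhookUri -Method Post

# Azure responds with the ID(s) of the job(s) it just queued.
$response.JobIds
```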

Anything we do below assumes you’ve already got OMS and Azure set up, and have the Azure Automation components configured. If you need help doing that, go back and check out part 1 of this guide!

So we’re in Azure. I’ve headed to the ‘Runbooks’ area of my automation account from last post, and found the same ‘Test-HybridWorker’ runbook that I was using before. I’m attempting to use the scientific method and not change too much at once 😉


If I click the ‘Webhooks’ tile, we get a new page where we can add a new Webhook. You can have multiple Webhooks for a single runbook, but we just need one for now. Click that ‘Add Webhook’ button!


Let’s fill out some info…


Be sure to save the webhook URL someplace safe! You only see it once, and it’s super important. It acts as both authentication and a pointer to running your runbook, so keep it safe, but accessible, in the event that you need it again.

We’ll also want to make sure that we set this webhook to run on our Hybrid Worker again. We can always come back and change this later, but we’re here, so we may as well set it!


After hitting ‘OK’, then ‘Create’, you’ll be brought back to your Webhooks screen, and you’ll see the fruits of your labor! Clicking on your Webhook, you can get the details and modify parameters and settings if need be. If you can’t find your Webhook URL because you didn’t listen to me above, you’re in trouble! Time to create a new Webhook and update your code 🙂


Beautiful! We’ve got a webhook, but what the heck do we do with it? Well, it’s still just a trigger to our basic runbook that writes a file to the C drive… but let’s make sure it works how we want and triggers our runbook appropriately. We’ll need to switch gears a bit and build a simple webpage to try this out 🙂

Here’s some code you can drop into a folder under C:\inetpub. Just install IIS with default settings and that should be all you need here.




You’ll want to make sure IIS is installed! It doesn’t take anything special to run this website, so defaults are fine. I’ll update this page with a quick PowerShell one-liner to install the proper roles and features in a bit 🙂
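Something like this should cover it on a server SKU (a hedged sketch; run it from an elevated PowerShell prompt):

```powershell
# Install the IIS role with its defaults, plus the management tools.
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
```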

Let’s unzip our files now…


And move it into our inetpub folder…


Perfect! Now let’s make the website in IIS, and point it to the files we’ve dropped locally.



You’ll see I’ve changed the port to 8080, that’s just because I have other things running on port 80. Run it wherever you like!


And now let’s navigate to our webpage and see how it looks…


Holy cow, it’s beautiful!

This gives us a nice HTML webpage, some CSS to make it pretty, and JavaScript, specifically jQuery, that can be called when the big button is pressed. Awesome!

Let’s dive into the code…

We’ve got a few files and folders in here:


  1. CSS – This holds our CSS files to theme the website. These have been borrowed from Bootstrap and saved us tons of time 🙂 CSS is just color and style and layout, nothing you’ll need to change unless you want to change the look and feel.
  2. JS – This holds our JavaScript files, and we’ll end up modifying a few things in here. This will be a good place to be 🙂
  3. index.html – This is the ‘scaffolding’ for the website that the CSS makes pretty. If we want to change any of the fields, forms, etc., then we’ll change things in here. Not much to do here either, unless you really want to extend the functionality.

So if we actually go ahead and pull up the code in aaFunctions.js, we’ll want to change the webhook URL so it matches what we got from the Azure Portal when we set up the webhook. I’m going to paste mine in, you do the same!


Alright, let’s navigate back to the page, hit refresh, and press the big button!


It says that something happened… but let’s check Azure.


Awesome! and if we check out our Hybrid Worker again…

We’ve got an updated last modified date! It worked 🙂

This is awesome. One thing I do want to point out is that if you’ve got the developer console enabled in your web browser of choice (usually hitting F12 will bring this up), it will spit out an error when you click the button to actually call the webhook URL.

As far as I can tell, this is just a ‘red herring’ of sorts, and while not desirable, it doesn’t impact functionality here. I’m going to look into trapping this/eliminating it in a future post.

That’s all for now – Part 3 coming soon!

AzureAutomation – Webhooks, jQuery, and Runbooks, oh my! (Part 1)

So this post is a bit different than my previous ones, as this is the first to not really be related to System Center Service Manager in a long time. That’s because, well, my focus will likely be shifting off of SCSM in the next few months and more towards SCCM and Microsoft’s new Azure offerings like EMS and OMS. A career change will do that to ya 😉

Anyway, as part of my recent exploration into the Azure-sphere, I’ve found a love for OMS. I’ve been looking for a compelling replacement for Orchestrator for a long time. In order to be really useful, a new solution had to be:

  1. Lightweight (My SCORCH boxes are usually single-box, 2 Core/4GB Memory. So awesome)
  2. Simple to setup/use
  3. On-Premise
  4. Support PowerShell in all its glory!

SMA/WAP doesn’t fulfill requirements 1 or 2, and personally, I don’t think it really satisfies number 4 either. PowerShell Workflows are not exactly native PowerShell, and as I’ve yet to build something complex enough to *need* PowerShell Workflows, the added complexity is just cumbersome.

Azure Automation sounded great when it first came out, but the lack of on-premise support was an issue. Once Hybrid Workers and native PowerShell (non-workflow) support came out, it was clear AzureAutomation was my new friend 🙂

So, now that we’ve got this awesome, fancy new automation platform, let’s try to do something that I’ve never been able to do with SCORCH: kick off a runbook via a URL! The XML-structured requests in SCORCH always made web browsers unhappy, so I was loving the new REST interface we’ve got with everything Azure, specifically Azure Automation webhooks.

As I’ve done a bunch of work with the Cireson Portal lately, my knowledge of jQuery/HTML/CSS is pretty solid. I wanted to make a basic HTML website, have it take in parameters, and then run a runbook from AzureAutomation once I hit a button. That runbook should run against a local lab environment, and, in this case, would actually create a new user in my local AD environment, and email some info out on user creation. Easy? Simple? Eh, kinda.

I’m going to take this in 3 parts, so it’ll be a few different posts. I’ll link to the rest at the end!

First things first, we need to set up OMS, which is the platform for Azure Automation. Let’s hit up that OMS website and click that nice ‘Try for free’ button. Isn’t Microsoft nice!!


Well, that was easy! Let’s click the big blue ‘Get Started’ tile.


While there’s tons of functionality here, we just want the Automation for now – you can see I’ve checked the ‘Automation’ box in the lower right corner.


Once that’s installed, you’ll want to go back to that same ‘Get Started’ tile and set up our data sources. Don’t stop on the ‘Solutions’ tab this time; find the ‘Connected Sources’ tab and let’s take a peek at the ‘Attach Computers Directly’ section. That’s what we want to use! This will let us set up a local Hybrid Worker for automation. Download that agent (64-bit, obviously; you’re not using a 32-bit OS in 2016, are you?) and save it someplace safe. Also, leave this page open, we’ll need that workspace ID and Primary Key.


When you download the agent, it’ll look like any other Microsoft Monitoring Agent. But it’s not just any agent, this is the one unique to Azure Automation! You can see I’ve given it a bit more detail in the name so I can find it later if I need to 🙂

Note that this cannot be installed on a machine that already has the ‘Microsoft Monitoring Agent’ on it – something like a box monitored by SCOM or a machine with an SCSM Agent installed (Management Server). Since they all are variations on the same ‘agent’, they must be unique on each box. I haven’t dived into SCOM monitoring of my HybridWorker, but that’ll come in a later post 🙂

Oh, and one last thing. For connectivity purposes, Microsoft just says the runbook worker needs web access to Azure! Make sure ports 80 and 443 are open to *.azure.com, and you should be golden. No messy ports to deal with – I love it!

This is what I’m talking about! Let’s link this to OMS.


You might have my ID, but not my key! Muwhahaha. This comes from the page we left open above. The Key is the ‘Primary Key’ from the Connected Sources tab.


Alright! That’s it. Pretty simple, right? Our Hybrid Worker should be set up and connected to OMS. If you head back to the OMS portal, it should show a connected data source now on that pretty blue tile:

Yess!!!!!!!!!! Now, in a lot of ways, that’s the easy part. OMS is just a *part* of the equation. We now need to link that OMS workspace to our actual Azure subscription, so we can manage Azure Automation from our Azure Portal. Got it?

OMS + Azure = Azure Automation!

I’m assuming you already have an Azure subscription, and if not, well, it’s easy and there’s tons of posts on it 🙂 We’re going to want to login to our Azure Portal (The new Azure Portal aka ARM), and search for the Automation content pane.


I hope you clicked on that little ‘star’ icon above so it got pinned to your left hand navigation. We’re going to be using this a lot 🙂 Now, let’s open the pane and hit the ‘Add’ button, then click ‘Sign Up’ on the right hand side.


This is going to do some interesting things if you don’t have an existing Azure subscription linked to this account, but ultimately you’ll get dropped back to the automation screen if you need to do anything here. Don’t panic! You’re on track 🙂


Phew! Billing is sorted, back to Automation. Let’s create a unique name and resource group for this bad boy. Think of resource groups as ways to keep resources distinguished between multiple customers. Azure is multi-tenant, so you’ll see a *lot* of separation built in. For smaller customers, or since we’re just doing an example here, we need not worry too much, we just need one resource group to assign resources to.


Here’s me making a resource group! Pretty easy 🙂



Awesome! We’ve now got our Automation Account set up. The configuration thus far has been *all* on the Azure side. Don’t you remember that equation from above? OMS + Azure = Azure Automation! We’ve got Azure all set up, and OMS all set up; now let’s link them so we get access to that Hybrid Worker.


You can see we’ve clicked on my Automation Account, clicked on the ‘Hybrid Worker Groups’ tile, and then clicked the little ‘Configure’ icon at the top. It gives us awesome instructions on how to do this, but again, since we’re dealing with both Azure and OMS, it’s still a bit confusing. Basically, we did all the hard stuff before; this is just going to establish the linkage between our Azure workspace and the OMS workspace we set up earlier. They don’t *have* to be linked, which is why they exist separately, but for Azure Automation, we need that linkage.

In the above screenshot, see that ‘Step 1’ section? Make sure you’ve clicked on the second bullet in there where it says ‘Login to OMS and deploy the Azure Automation solution’. It’ll bring us…


Deja vu! Let’s sign back in…


Oh! Cool! We’re linking our OMS subscription to our Azure one. We want this.


You can see that there’s a new tile here ‘Solution requires additional configuration’ for Automation. Let’s click that.


It wants to link our Automation Account to Azure! Yes, yes, we want this. Save it and don’t look back!


Bow chica wow wow! You can see our Automation tile now shows the little Azure Automation icon with our Azure Automation account name at the top. It also shows a runbook now, which is cool. I like runbooks.

Now, if you haven’t taken a break by this point, don’t take one now! We’re so close to success I can taste it. We’ve got Azure set up, we’ve got OMS set up, and we’ve got our Hybrid Worker set up. The last bit is to add this Hybrid Worker to a Hybrid Worker Group so we can use it. I know, I sound crazy, but think of it kinda like a resource within a resource group. It exists, it’s functional, but it needs to be assigned somewhere before we can use it.

Microsoft has a great post on adding a runbook worker to a Hybrid Worker Group. I’ve screenshotted the good stuff below:
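For reference, the registration itself boils down to a few lines run on the worker (a hedged sketch: the agent’s version folder varies per build, and the group name, endpoint, and key below are placeholders you pull from your own Automation account):

```powershell
# The HybridRegistration module ships with the Microsoft Monitoring Agent;
# the version folder in the path differs per agent build.
$regPath = (Get-ChildItem 'C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\*\HybridRegistration')[0].FullName
Import-Module (Join-Path $regPath 'HybridRegistration.psd1')

# GroupName is whatever you want the new Hybrid Worker Group to be called;
# EndPoint and Token come from your Automation account's keys.
Add-HybridRunbookWorker -GroupName 'MyWorkerGroup' `
                        -EndPoint 'https://<automation-account-endpoint>' `
                        -Token '<primary-access-key>'
```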


Luckily it doesn’t show my entire key in this screenshot 🙂

Here’s me adding things!


Boom! The command completed successfully and if I go back to the Azure portal and refresh things…


Hybrid workers for days! You’ll see I’m using my Orchestrator box for my new Hybrid Runbook worker. It works perfectly! It’s a Server 2012 R2 box with 1 Core and 2GB memory. Insane, right?!

Now, this looks good, this looks fine, but the proof is in the pudding. We need to do a quick test to make sure this is all working. I’ll write a quick runbook to write a file locally on the Hybrid Worker and make sure that comes through!

To make a new runbook, easy enough, we just click the ‘Add a runbook’ button at the top there. You’ll see it opens up the ‘Add Runbook’ pane, where we can select ‘Quick Create.’

Let’s fill in a few things…

Note: PowerShell is not the same as PowerShell Workflow! If you don’t know the difference, select ‘PowerShell.’ If you do, then select whichever one you need 🙂


We’ve got a blank space!!! Wasn’t that a song? Right, let’s fill it with some basic stuff to just write to a local file on the Hybrid Worker.
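Something along these lines does the trick; just plain PowerShell, no Workflow (the file path is arbitrary):

```powershell
# Write a timestamped line to a local file so we can prove the runbook
# actually executed on the Hybrid Worker, not up in Azure.
$logPath = 'C:\HybridWorkerTest.txt'
"Runbook ran at $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss')" | Out-File -FilePath $logPath -Append
```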

That wasn’t too hard now, was it 🙂


You’ll need to hit the ‘Save’ button first. Once you do that, you’ll see it greyed out, and then you’ll need to ‘Publish’ the runbook to actually use it. It’s functionality that is pretty similar to what Orchestrator used to do actually…


Done and published! Don’t mind me, I’ve made a few other runbooks here too… those come later 😉


Let’s select the runbook, and hit that ‘Start’ button. Once we hit it, we’ll get the option to input any input parameters (there aren’t any in our case) but more importantly, specify if we want to run this just in Azure, or on a Hybrid Worker. Let’s pick the Hybrid Worker!


Once we hit ‘OK’, we’ll be returned to the Job Summary page, where we can wait for it to finish. Don’t blink! It happens quick.


Yes, I know it’s a different Job number. The runbook ran too fast and I had to start a new job to take another screenshot 🙂


Beautiful! We’re in business! You’ve got Azure Automation on a Hybrid Worker in your environment now.

These Hybrid Workers are awesome in that they work:

  1. Faster than Orchestrator from trigger to execution
  2. Better than Orchestrator in their fault tolerance (Hybrid Worker Groups) and logging

A few last minute notes:

  1. These Hybrid Worker ‘groups’ are exactly that: they can be groups of machines, and requests get passed to the first available one for load-balancing. In our case, we only have one, but it works just fine with a single worker in a group.
  2. If you want to use any cmdlets locally on the Hybrid Worker, make sure you’ve installed them yourself! Azure Automation won’t do that part for you, but other tools in the MSFT toolkit will! (Think DSC 🙂 )

That’s all for now, check back in just a bit for the next two posts on making the real magic happen!

Update: Part 2 is now live!

Monitoring an SCSM Connector – Better than guessing!

First post in quite a while, but really I’ve been amassing tons of content to post. Hopefully this gets busier in the near future 🙂

I’ve been doing a *lot* of work with PowerShell automation and Service Manager. So much I daresay it’s become a second language. It’s even crept into my dreams! Seriously. I woke up the other morning with an idea for a new script and I spent the morning hours from 2AM until 9AM writing it.

As part of a user-onboarding script I’ve been working on lately, there was the need to ensure that the user object was synced back into SCSM before proceeding any further with the onboarding. You would *think* it would be easy to monitor for such things, but alas! The SMLets deities were not so kind.

So, without status information, I was stuck. I was able to sleep the script for 10 minutes and wait for the connector to finish, but as the customer environment grows, that setting might no longer work. Not to mention at the moment, it only takes about 5 minutes for the connector to run, so we’re wasting time!

Then I had the genius idea of monitoring event logs for the final status message in a connector workflow and just waiting for that! As it turns out, this method isn’t *totally* reliable and Get-WinEvent was being a pain, so I never actually got this to work how I had hoped. Not to mention, this is a bit of a roundabout way to monitor for things; I don’t love watching event logs, and usually I consider that a last-ditch effort.

Enter the SCSM SDK! Now, I’ve spent *all* of my automation life using SMLets and the Service Manager cmdlets, so the SDK was foreign territory for me. Even worse, any examples I could find were SDK + C#, not SDK + PowerShell. I tried variations of code on and off for about 3 days, and kept getting lost. By day 2, my hair was thinning and wrinkles were forming on my forehead. I thought that I had been bested.

Enter my coworkers! Cireson happens to have some amazing people on board, several of whom know the SCSM SDK intimately. Allen Anderson, a friend of mine and coworker on the Consulting team, has been doing some work with the SDK and PowerShell, and offered to help me out. Without him, this script would have not been possible – thanks Allen! This is the same guy responsible for a few of our apps, as well as some amazing custom solutions for customers. He’s a consultant and a developer? It’s like the best of both worlds.

Anyway, Allen gave me a bunch of help, and with some of his code, and some of my code, I came up with this script. It watches a connector, waits for it to sync, and then tests for an object at the end of the loop to make sure it exists in SCSM. In my case, I’m waiting for a user to sync to SCSM via AD, so you’ll see me test for a user object.

The code is ugly, and there’s lots of room for improvement, but it works! Hopefully it helps someone else who is onboarding users with SCSM 🙂

Note: you’ll see some slightly weird spacing where I commented out a separate PowerShell invoke. See the update below.
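To give you the flavor of it, the ‘wait for the object’ half of the script looks roughly like this (an SMLets-based sketch with a hypothetical username; the connector-watching half uses the SDK as described above):

```powershell
# Poll SCSM until the new AD user object shows up, or give up after a timeout.
Import-Module SMLets
$userName  = 'jdoe'                                  # hypothetical new user
$userClass = Get-SCSMClass -Name Microsoft.AD.User$  # the AD user class
$deadline  = (Get-Date).AddMinutes(15)
$user      = $null
do {
    $user = Get-SCSMObject -Class $userClass -Filter "UserName -eq $userName"
    if ($user) { break }
    Start-Sleep -Seconds 30
} while ((Get-Date) -lt $deadline)
if (-not $user) { throw "$userName never synced into SCSM - check the connector!" }
```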


PS. Here’s a link to the second write-up of this for my company blog. Just a little different flair 😉

Update: Here’s the code, wrapped in a powershell.exe shell. If you’re dropping this into Orchestrator, you can use it as a literal ‘drop in’ inside of a .Net activity 🙂


Find Users on Multiple Domains

It’s been too long! I miss blogging. Alas, sometimes life gets busy and customers get demanding!

It’s been a crazy few months, and I’ve got more stuff to blog than I ever thought was remotely possible, but here’s a really cool one I wanted to share!

I’ve been helping my coworker, Seth, design a user-onboarding script for a customer. They have 4 domains, with trust relationships between them, so I can search from anywhere. That said, I can’t cascade my search from the top down; I’m honestly not sure why. Apparently Get-ADUser isn’t that smart!

That said, I made a script to search for a user object in an array of domains, and then drop out of the loop once it finds the username. It’s pretty nifty, and you can get creative with how you read in the multiple domains (statically coded, CSV, XML, text, etc.).

I’ll let the code do the talking 🙂
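A minimal sketch of the idea (the domain names and username are placeholders; feed in the domain list however you like):

```powershell
# Try each domain in turn and bail out of the loop on the first match.
Import-Module ActiveDirectory
$domains  = @('corp.contoso.com', 'emea.contoso.com', 'apac.contoso.com')
$userName = 'jdoe'
$found    = $null
foreach ($domain in $domains) {
    # -Server points the query at a DC in that specific domain.
    $found = Get-ADUser -Filter "SamAccountName -eq '$userName'" -Server $domain -ErrorAction SilentlyContinue
    if ($found) {
        Write-Output "Found $userName in $domain"
        break
    }
}
```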



Reporting on AD Lockouts via PowerShell

So I’m behind on posts, but this one was just too fun to pass up!

I was recently out and about, doing some SCSM training at a customer site up in the frosty North of Canada! In between consulting, I got the chance to wander around town, eat some delicious food, and make some amazing friends, but perhaps the coolest part of the trip was something that you’ll hear PowerShell gurus talk about again and again: I became a ‘toolmaker’ for my customer.

I overheard a conversation about a report they get that covers AD ‘lockout’ events. When a user mistypes their password a certain number of times (in their case, 3), it logs an event and locks the account for a period of time before reinstating access. They had a separate program that monitored these events and dumped a report to PDF. Someone on their team then went through the PDF report, had to find unique values (not easy, since a PDF isn’t sortable), and once those were found, had to look up e-mail addresses and send out an email to users saying something like, “Hey, we saw your account get locked out. Was this you? If not, please let us know so we can do something about it.”

As I was listening, all I could think was that it would be a pretty simple PowerShell script to hit the DCs, look for those events, add them to an array, parse them, and then do whatever was necessary with the resulting information. As it turns out, 3 hours of tinker-time later, I had a beautiful, tested script, under 200 lines of code, that worked wonders.


That script gets the lockouts, adds them to an array, parses them, and then outputs the information to CSV if desired, as well as e-mailing the users using preset HTML templates with variable replacement. I thought the templates were a nice touch instead of some PowerShell-generated HTML 🙂
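The event-gathering half boils down to something like this (a hedged sketch; the real script layers the HTML templates and e-mailing on top):

```powershell
# Collect account-lockout events (ID 4740) from every DC for the last 24 hours.
Import-Module ActiveDirectory
$events = foreach ($dc in (Get-ADDomainController -Filter *)) {
    Get-WinEvent -ComputerName $dc.HostName -FilterHashtable @{
        LogName = 'Security'; Id = 4740; StartTime = (Get-Date).AddDays(-1)
    } -ErrorAction SilentlyContinue
}

# For event 4740, Properties[0] is the locked-out account name and
# Properties[1] is the caller computer name.
$events | ForEach-Object {
    [pscustomobject]@{
        User   = $_.Properties[0].Value
        Source = $_.Properties[1].Value
        DC     = $_.MachineName
        Time   = $_.TimeCreated
    }
} | Sort-Object Time | Export-Csv -Path .\Lockouts.csv -NoTypeInformation
```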

I’m linking to BitBucket because the code has been changing too rapidly to post, but it’s pretty self explanatory. I tried to comment the code so as to give the customer a chance to download it themselves and play with it. I’m trying to create more toolmakers!

I could see this being useful in a lot of environments, so I have shared it with the world. Those of you who are looking at this saying, “Why not use SCOM?”, well, you could use SCOM, but in this case the SCOM environment was run by a different team and politics being what they are, as well as the SCOM project being in its infancy, that wasn’t an option. This is a non-SCOM option to do some monitoring and have some fun!

Hopefully people find this useful and can contribute back! Let me know if anyone has improvements or modifications to make it better – I’m going to try to start to actually use some of the collaborative features of BitBucket 🙂



PS. Yes, I know I left some variables in my script. I’ll clean them up later 🙂 – My e-mail address isn’t that hard to find anyway :p

Compacting all your Hyper-V Disks in one script

So I’ve got a good-sized lab that I run off of my laptop. It’s about 20 Hyper-V VMs, all running some System Center component or other, or just acting as generic client computers to give my SCCM/SCSM environment some actual real data to work with. It’s awesome to have handy (when it isn’t broken!) but it takes up a lot of space, even on my 500GB SSD.

I had a dream last night about SMA and decided it was time to stand up an SMA server in my lab environment. So, this morning, I woke up nice and fresh and logged in to my laptop ready to create a new VM, only to be stopped mid-creation when I ran out of space. Ugh.

Now I know that I *should* have tons of space left, but it all got eaten up when I was installing the entire lab months ago. I’d drag ISOs around and copy files into VMs, all of which take up space, and that space was never reclaimed (I’m using dynamic disk sizing in Hyper-V). I should have just mounted all those ISOs over the network, but I’m not that smart before a few cups of tea.

Anyway, as I poked around the Hyper-V management interface, I realized that it would be way more clicks than I was interested in doing this early in the morning. So, in another instance of working smart, not hard, I decided to craft a PowerShell script that would do it all for me!

You can find the very latest version here!

Here’s the whole code, though it may be out of date compared to what you find at the link above. Just copy it into the PowerShell ISE or into a PowerShell script file (.ps1), make sure your execution level is set to allow it, and then let it ride!
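In sketch form, it goes something like this (assumes the Hyper-V PowerShell module, and that a VM needs to be powered off before its disks can be compacted):

```powershell
# Walk every local VM, mount each virtual disk read-only, and compact it.
foreach ($vm in Get-VM) {
    if ($vm.State -ne 'Off') {
        Write-Warning "Skipping $($vm.Name) - it isn't powered off"
        continue
    }
    foreach ($disk in ($vm | Get-VMHardDiskDrive)) {
        Mount-VHD -Path $disk.Path -ReadOnly   # Full optimization needs a read-only mount
        Optimize-VHD -Path $disk.Path -Mode Full
        Dismount-VHD -Path $disk.Path
    }
}
```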


This will get your VMs, do some nifty work on them, and reclaim your space via disk compaction!

Hopefully it helps someone out there save some time 🙂

Max script block size when passing to powershell.exe or invoke-command

This. This may be the most important piece of information I ever contribute to anyone automating Service Manager (SCSM).

If you’ve ever worked with Orchestrator, and specifically Orchestrator and SMLets, you know what a pain in the butt it is. No, seriously. Orchestrator (Also known as SCORCH or ORCH) is great in a lot of ways but miserable in others. For me, right now, it has a big flaw – working with PowerShell.

If you use the built-in “.Net Activity” to run a PowerShell script, it only runs in PowerShell version 2. No problem you say! Several people have written tutorials on how to drop from PowerShell 2 into PowerShell 3. Those examples are numerous, and are an excellent resource (See here, here, here, here, and many others).

There’s even a separate Orchestrator Integration Pack that allows you to run PS scripts easier in SCORCH. Cool beans.

Here’s where it gets ugly. Let’s say, I’m developing a script in the ISE because it’s awesome. That development is happening on a box that has PowerShell 3. PowerShell 3 has several cool functions in it that I happen to rely on (I know in many cases there are workarounds using PS2, but that’s not always the case!) and so when I paste the entire script into SCORCH and try to run it (like all those examples above tell us to do), I get errors galore.

It’s not that I’m doing anything different from the links above. Nope, not a thing. I’m just passing the entire script to a powershell.exe scriptblock (Or script block, I’ve seen it spelled both ways) or to invoke-command. That will, in theory, drop the entire script into PowerShell 3 land and the script can then proceed on its merry way.
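For reference, the hop itself looks like this: build the script as one block and hand it to a fresh powershell.exe, which runs it under whatever PowerShell version is installed:

```powershell
# From a PS2 session (e.g. a SCORCH .Net Activity), launch the script
# in a new powershell.exe so it runs under the newer installed version.
$myScript = {
    # ...your PS3-dependent script goes here...
    $PSVersionTable.PSVersion.Major
}
powershell.exe -NoProfile -Command $myScript
```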

Specifically, I get lovely errors like:

“System.Management.Automation.ApplicationFailedException: Program ‘powershell.exe’ failed to execute: The filename or extension is too long”


Such errors have been rectified by splitting script blocks into smaller parts that can be passed on, and ensuring that only the absolutely necessary items are passed to powershell.exe or invoke-command. This then gets more complex, as you’re making multiple calls and passing multiple variables in and out. This takes more time, and good luck passing any objects in or out; they get deserialized in that process and lose all access to any methods the original objects had. So now we’re talking about passing strings between sessions, then calling for objects inside the new PS3 session just to recreate the object we had in our existing PS2 session, so we can run a method on it. This adds complexity, time, and lots of other not-fun things to our automation.

It turns out that both powershell.exe and invoke-command have a limit on the size of the script block. It’s something that my co-workers and some of our implementation partners have run up against again and again.

The size limit we kept hitting was one that we had only guessed at, one that was unknown, one that we tackled blindly… UNTIL TODAY.

There is a 12190-byte maximum on any script block passed to powershell.exe or Invoke-Command.

The easiest way to see how big your block is, is to copy it into Notepad++ and look at the ‘length’ value at the bottom of the editor window. That tells you the byte size of the resulting file (and therefore the text), and as long as your script is less than 12190 bytes, it will be passed along with no errors to whatever you like.
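If you’d rather not eyeball it in Notepad++, you can check the size programmatically. Here’s a quick sketch (in Python, for illustration – the 12190-byte figure is the limit we observed, not a documented constant, and the function name is mine):

```python
# Check whether a script's byte size fits under the observed
# 12190-byte limit for script blocks passed to powershell.exe
# or Invoke-Command. The limit is our empirical finding.
OBSERVED_LIMIT = 12190

def fits_scriptblock_limit(script_text, limit=OBSERVED_LIMIT):
    """Return (byte_count, fits) for the given script text."""
    byte_count = len(script_text.encode("utf-8"))
    return byte_count, byte_count <= limit

# A trivially small script passes...
print(fits_scriptblock_limit("Get-Service | Where-Object Status -eq 'Running'"))
# ...while a padded script over the limit does not.
print(fits_scriptblock_limit("#" * 13000)[1])  # prints False
```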

Go over that limit and you’ll be cast into a world of uncertainty and doubt where your ability to code succinctly is called into question!

And now you know 🙂

On ‘the cloud’ – Why sometimes it just doesn’t add up

I got back from TechEd last week and had an absolute blast! The Cireson team and I set up an amazing booth, had some excellent presentations from some of our partners, and we, as a company, got to do some team building for the first time ever! Nothing like being able to shoot your coworkers with paintballs to build team spirit 🙂 I think I only cursed out my boss, oh, 100 times or so. Thankfully, I’m still employed.

Of course, Microsoft’s big theme this year was ‘Cloud.’ It was Azure this, or Office 365 that, or “look what you can do with our cloud” again and again and again. Microsoft’s ‘hard’ tools, including a good chunk of the System Center suite, were left to play second fiddle to whatever was being offered via a cloud subscription model.

Now, I’m not outright opposed to the cloud, at all. I think there are some awesome uses for it, but I had some really interesting conversations with a few people, and did some research of my own, that really put the whole cloud vision in perspective – and I think it’s a perspective worth sharing.

One of our developers lives in New Zealand and loves it. When he wasn’t the butt end of one of our jokes (“Hey, who’s that guy with the Australian accent over there?”), he and I had some awesome conversations about food, life, and, of course, technology.

Somehow, the topic of internet access came up, and I expressed my love for the 100 Mbps down / 5 Mbps up connection that I get here in Baltimore for a decent price. My Kiwi friend’s jaw nearly dropped. His connection (and keep in mind, he’s one of our developers) is 500k down, who-knows-what up, and is capped at 5GB per month. Sure, I get it, he’s on an island, but New Zealand is far from isolated compared to some of the markets Microsoft wants to get into. If you’re relying on a cloud solution for resources, not only will your connection be spotty on a 500k link, but you’re apt to hit that 5GB bandwidth cap really quickly! I’m sure businesses in NZ have better connections than a lowly developer, but putting all your eggs in a ‘cloud’ basket seems like a rather expensive, and potentially slow or limited, solution to your IT needs. It’s especially terrifying as we see Microsoft move more of their offerings to cloud-only (Power BI, anyone?) with no on-premise solution.

Let’s head to another island, this time one that’s rather well connected: Great Britain. Another one of my co-workers, the ever-so-talented Shaun Laughton, happens to live on this very island! He joined in on the conversation and lamented his own internet situation: he can see his local telecom box from his house, and yet his internet speed is only moderate at best, expensive, and subject to a data cap. If he lived in a major city, say London (where Google happens to have a massive campus with super-fast internet), then internet access would be cheap, fast, and without a cap.

This scenario isn’t uncommon, and it puts cloud solutions in a really interesting situation. For people in very well connected areas (read: urban, first world), cloud makes some sense. Why own the infrastructure when someone else can do it for you, and you can just lease the time and resources you need? In this same instance, Microsoft’s own direction makes a lot of sense, as they are positioned to provide the *best* platform for their products in the cloud, and can really benefit from the subscription licensing model – hopefully attracting smaller customers with more reasonable monthly or annual pricing.

As soon as you leave one of those major cities, however, this plan breaks down. Who is going to depend on a cloud service when it’s going to eat at their monthly data cap and, even then, not be accessible nearly as fast as a local server or instance of a software application would be? For these customers, not only does the cloud not make sense from a cost perspective (relying on it would require multiple, redundant, unlimited internet lines, likely costing a fortune themselves), but their users are far less likely to have reasonable access to high-bandwidth connections from outside the office, thereby breaking down the ‘work anywhere’ principles that cloud relies on.

One final example that I thought was really interesting comes from my own situation. I currently reside in good ol’ Baltimore, MD. I’m therefore conveniently positioned in the densely populated ‘Northeast Corridor’ of the USA that spans from Boston, MA down to Washington, D.C. This area boasts the densest concentration of transportation infrastructure in the States, both physical and telecom. I have the luxury of fast internet at a reasonable price, and if I really wanted to, I could hop on over to a university library and get on the Internet2 bandwagon for some really insane speeds.

Recently, I’ve been having issues with my SCSM Data Warehouse staying intact on my laptop lab. I’m sure it’s a case of too many reboots and restarts, too many ups and downs, and so it’s had me looking at setting up a more permanent lab somewhere that isn’t on my laptop. I started looking at cloud solutions because, hey, why not! Everyone’s doing it, it’s got to be cost effective… right?

Azure (US East, USD):

1 × Medium (A2) Basic instance – $110/month = ~$1,300/year

I’m going to cheap out, but let’s say I go with a Medium (A2) instance for my SCSM server. On that single server, I could toss the SCSM Workflow Server and a SQL Standard instance. It would be stupid to do, since an A2 instance only gets me 2 cores and 3.5 GB RAM, but I’m trying to keep costs low. Right. So that’s just one server; let’s scale up a bit.

1 × Medium (A2) Basic instance – $110/month = ~$1,300/year

1 × Large (A3) Basic instance – $219/month = ~$2,628/year

Alright, now I’ve got a large instance for my SCSM DW server and the medium one for my Workflow server. I’m looking at almost $4,000 USD a year to run two servers. This doesn’t even start to include an SCCM server, a domain controller (though I think Azure has other services in play for that), or any client machines or servers for hosting demo web portals. Wowza.

Amazon (US East, USD):

1 × m3.large instance – $197/month = ~$2,365/year

Now this isn’t exactly apples to apples, since this m3.large instance has 7.5 GB RAM and can flex up to 6.5 ‘Elastic Compute Units’, but humor me here. Double that for two servers, like above, and we’re at about $4,700 a year to run two instances, again with no additional machines for clients or other servers. Oh yeah, and if you don’t have licenses for the MSFT software, good luck (thankfully I have MSDN, phew!).
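To put the two clouds side by side, here’s the quick math behind those annual figures (prices as quoted above; Python just playing calculator):

```python
# Rough annual cost of the two-server lab on each cloud,
# from the monthly prices quoted above (US East, USD).
azure_monthly = 110 + 219          # Medium A2 + Large A3
aws_monthly = 197 * 2              # two m3.large instances

azure_annual = azure_monthly * 12  # $3,948 -- "almost $4000"
aws_annual = aws_monthly * 12      # $4,728 -- "about $4700"

print(azure_annual, aws_annual)
```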

So, even if we imagine I’m not paying for my internet access here at home, it’s still a lot of money to use cloud services for a small guy like me running an instance 24/7. What’s my alternative? Well, it is, of course, my favorite: do it yourself!

I priced out a rough machine that would do what I wanted from a hardware perspective:

1 × Supermicro MBD-H8DCL-6F-O motherboard – $360

1 × AMD Opteron 4386, 8-core, 3.1 GHz – $350

64GB Kingston RAM – $670

1 × 1TB Samsung SSD – $470

Case, power supply, and other things – $250

Total hardware cost: $2,100

Now let’s factor in power. I tried to spec this build with pretty low power requirements, but I’m going to estimate on the high end.

600 watts × 24 hours/day = 14.4 kWh/day; at $0.12 per kWh, that’s ~$52 per month = ~$630 per year

So for a total cost of ~$2,730 in the first year, and ~$630 every year thereafter, I can have a server that can run an entire lab of VMs (I’ve got 13 running on my laptop right now with 4 cores and 32GB RAM – this server could easily double that). There’s no point in me going to the cloud, at all.
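For the curious, the full break-even math, pulling together the hardware, power, and Azure figures from above:

```python
# DIY server vs. the two-instance Azure lab priced earlier.
hardware = 360 + 350 + 670 + 470 + 250   # parts list above: $2,100
power_per_year = 0.6 * 24 * 365 * 0.12   # 600 W at $0.12/kWh: ~$631

diy_first_year = hardware + power_per_year   # ~$2,730 for year one
azure_per_year = (110 + 219) * 12            # $3,948, every single year

print(round(diy_first_year), round(power_per_year), azure_per_year)
```

The DIY box wins in its very first year, and every later year costs only the ~$630 in power, while the cloud bill repeats in full.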

The cloud may be the future, but the future isn’t now, at least not for everyone. Thankfully, Microsoft hasn’t totally killed off their on-premise solutions yet. Let’s just hope they don’t get around to doing that for a long, long time.

Authorizing Computers for Software Applications – An Example

So life has been busy lately! The consulting life is quite different than being a Systems Admin, quite different…

Anyway, it just so happens that this past Friday, an e-mail went out with a request for something whipped up in Orchestrator. Since I was totally exhausted with things that I *had* to do, I decided to take a stab at it 🙂

A customer was asking for a workflow that would take input from SCSM (System Center Service Manager) via a Request Offering, allow the end user to select an application as well as a computer object, and then add that computer object to ‘Authorized Computers.’ It’s worth noting that ‘Authorized Computers’ is a relationship that is part of Cireson Asset Management and is used for ‘authorizing’ CIs to use a software license.

I had the initial idea roughed out, and a bit of prodding from my co-workers got me to the final solution, which I present to you below!

The Orchestrator Part:


Here’s the runbook I came up with – it’s not too difficult, but let’s walk through it.

We’re taking the ‘ID’ property in from the RBA (Runbook Automation Activity) in SCSM. This property, while called ‘ID’ in SCSM, actually ends up being the GUID of the RB object in SCSM. From there, we get the relationship between that RB and the Service Request, and then get the Service Request object itself.

From there, we have to get two related items – one that’s a ‘Windows Computer,’ and one that’s a ‘Software Asset.’

For the ‘Windows Computer’, we’re going to do a ‘Get Relationship’, look for a related ‘Windows Computer’ object, and make sure any objects that we pass on are related by the ‘Is Related to Configuration Item’ relationship.


Now we’ll do the same thing, but for a ‘Software Asset.’



Lastly, once we’ve gotten everything we need, we’ll create the relationship to the Cireson Software Asset object.


Awesome! We’re halfway there 🙂 Now to build out the templates and requests on the Service Manager side.

To start, you’ll need to make sure you’ve got an Orchestrator connector set up, and your runbooks are syncing properly. I’m going to assume you know how to do that 😉

Now, we’ll need to create two templates: a runbook template and a service request template.

The runbook template is easy enough – just create it, fill in the basic fields, make sure you check the ‘Is Ready for Automation’ box, and link it to the runbook in Orchestrator that you’re targeting. When doing the property mapping, you’ll want to map the ID property to the one input field, as shown below.


On to the Service Request template! This too is pretty basic – create it and fill in the basic properties as you see fit, then head over to the Activities tab and link your RB template, as shown below.


Last but not least, we’ll create the Request Offering so we can hit it from the portal. Again, make a new RO, and then when we’re asking for user input, let’s put something like the following:


And those queries…



You’ll notice that we only allow the selection of one software asset, and multiple computer objects. This just keeps things a little bit cleaner, and prevents people from going nuts with the selection fields 🙂 I’m also not doing any filtering of the objects. In my lab environment, it’s not too busy, but feel free to scope those queries beyond the objects themselves if you’re returning more values than you need.

Once all that’s in place, publish your Request Offering under a Service Offering of your choosing, navigate to it via the portal (preferably Cireson’s beautiful new Self-Service Portal, but the out-of-box SCSM portal will work too!), and let your IT organization authorize software via a nice, easy-to-use interface!