Category Archives: Orchestrator

System Center Orchestrator 2012 SP1

AzureAutomation – Webhooks, jQuery, and Runbooks, oh my! (Part 2)

Alright! In the last post, we set up Azure Automation and tested it out on our Hybrid Worker. Now, let's create a webhook so we can trigger this remotely. What's a webhook, you might ask? Well, that's an excellent question!

Basically, it's a URL that serves as both the location of, and the token for, a resource. Think of it like a trigger on a gun – once you've got your finger on it, you can use whatever is behind it! Why is that useful in this case? Well, a webhook lets us use our Azure Automation runbooks from another source. That source could be a PowerShell workflow, another Azure Automation runbook, a C# app, or a website! Cool stuff.
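To make that concrete, here's a sketch of triggering a webhook from PowerShell (the URL and the body parameter below are placeholders, not real values; your runbook defines what parameters, if any, it expects):

```powershell
# Trigger an Azure Automation runbook via its webhook URL.
# The URL below is a placeholder - use the one Azure gives you at webhook creation.
$webhookUrl = 'https://s1events.azure-automation.net/webhooks?token=REPLACE_ME'

# Optional: pass data to the runbook as a JSON body.
# 'UserName' is a made-up example parameter - use whatever your runbook expects.
$body = @{ UserName = 'jdoe' } | ConvertTo-Json

# Webhooks are invoked with an HTTP POST
Invoke-RestMethod -Uri $webhookUrl -Method Post -Body $body
```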

Anything we do below assumes you've already got OMS and Azure set up, and have the Azure Automation components configured. If you need help doing that, go back and check out part 1 of this guide!

So we're in Azure. I've headed to the 'Runbooks' area of my automation account from last post, and found the same 'Test-HybridWorker' runbook that I was using before. I'm attempting to use the scientific method and not change too much at once 😉


If I click the ‘Webhooks’ tile, we get a new page where we can add a new Webhook. You can have multiple Webhooks for a single runbook, but we just need one for now. Click that ‘Add Webhook’ button!


Let’s fill out some info…


Be sure to save the webhook URL someplace safe! You only see it once, and it’s super important. It acts as both authentication and a pointer to running your runbook, so keep it safe, but accessible, in the event that you need it again.

We'll also want to make sure that we set this webhook to run on our Hybrid Worker again. We can always come back and change this later, but we're here, so we may as well set it!


After hitting ‘OK’, then ‘Create’, you’ll be brought back to your Webhooks screen, and you’ll see the fruits of your labor! Clicking on your Webhook, you can get the details and modify parameters and settings if need be. If you can’t find your Webhook URL because you didn’t listen to me above, you’re in trouble! Time to create a new Webhook and update your code 🙂


Beautiful! We’ve got a webhook, but what the heck do we do with it? Well, it’s still just a trigger to our basic runbook that writes a file to the C drive… but let’s make sure it works how we want and triggers our runbook appropriately. We’ll need to switch gears a bit and build a simple webpage to try this out 🙂

Here’s some code you can drop in a folder in your C:\inetpub folder. Just install IIS with default settings and that should be all you need here.



You'll want to make sure IIS is installed! It doesn't take anything special to run this website, so the defaults are fine. I'll update this page with a quick PowerShell one-liner to install the proper roles and features in a bit 🙂
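If you want to skip the GUI, something along these lines (run from an elevated PowerShell session on Server 2012 R2 or later) should do it:

```powershell
# Install IIS with its default feature set, plus the management console
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
```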

Let’s unzip our files now…


And move it into our inetpub folder…


Perfect! Now let’s make the website in IIS, and point it to the files we’ve dropped locally.



You'll see I've changed the port to 8080; that's just because I have other things running on port 80. Run it wherever you like!


And now let’s navigate to our webpage and see how it looks…


Holy cow, it’s beautiful!

This gives us a nice HTML webpage, some CSS to make it pretty, and JavaScript, specifically jQuery, that can be called when the big button is pressed. Awesome!

Let’s dive into the code…

We’ve got a few files and folders in here:


  1. CSS – This holds our CSS files to theme the website. These have been borrowed from Bootstrap and saved us tons of time 🙂 CSS is just color, style, and layout; nothing you'll need to change unless you want to alter the look and feel.
  2. JS – This holds our JavaScript files, and we'll end up modifying a few things in here. This will be a good place to be 🙂
  3. index.html – This is the 'scaffolding' for the website that the CSS makes pretty. If we want to change any of the fields, forms, etc., then we'll change things in here. Not much to do here either, unless you want to really extend the functionality.

So if we actually go ahead and pull up the code in aaFunctions.js, we’ll want to change the webhook URL so it matches what we’ve got from the Azure Portal when we setup the webhook. I’m going to paste mine in, you do the same!


Alright, let’s navigate back to the page, hit refresh, and press the big button!


It says that something happened… but let’s check Azure.


Awesome! and if we check out our Hybrid Worker again…

We've got an updated last modified date! It worked 🙂

This is awesome. One thing I do want to point out is that if you've got the developer console enabled in your web browser of choice (usually hitting F12 will bring this up), it will spit out an error when you click the button to actually call the webhook URL.

As far as I can tell, this is just a 'red herring' of sorts, and while not desirable, it doesn't impact functionality here. I'm going to look into trapping this/eliminating it in a future post.

That’s all for now – Part 3 coming soon!

AzureAutomation – Webhooks, jQuery, and Runbooks, oh my! (Part 1)

So this post is a bit different than my previous ones, as this is the first to not really be related to System Center Service Manager in a long time. That’s because, well, my focus will likely be shifting off of SCSM in the next few months and more towards SCCM and Microsoft’s new Azure offerings like EMS and OMS. A career change will do that to ya 😉

Anyway, as part of my recent exploration into the Azure-sphere, I’ve found a love for OMS. I’ve been looking for a compelling replacement for Orchestrator for a long time. In order to be really useful, a new solution had to be:

  1. Lightweight (My SCORCH boxes are usually single-box, 2 Core/4GB Memory. So awesome)
  2. Simple to setup/use
  3. On-Premise
  4. Support PowerShell in all its glory!

SMA/WAP doesn't fulfill requirements 1 or 2, and in my opinion it doesn't really satisfy number 4 either. PowerShell Workflows are not exactly native PowerShell, and since I've yet to build something complex enough to *need* PowerShell workflows, the added complexity is just cumbersome.

Azure Automation sounded great when it first came out, but the lack of on-premise support was an issue. Once Hybrid Workers and native PowerShell (non-workflow) support came out, it was clear AzureAutomation was my new friend 🙂

So, now that we've got this awesome, fancy new automation platform, let's try to do something that I've never been able to do with SCORCH – kick off a runbook via a URL! The XML-structured requests in SCORCH always made web browsers unhappy, so I was loving the new REST interface we've got with everything Azure – specifically, Azure Automation webhooks.

As I’ve done a bunch of work with the Cireson Portal lately, my knowledge of jQuery/HTML/CSS is pretty solid. I wanted to make a basic HTML website, have it take in parameters, and then run a runbook from AzureAutomation once I hit a button. That runbook should run against a local lab environment, and, in this case, would actually create a new user in my local AD environment, and email some info out on user creation. Easy? Simple? Eh, kinda.

I’m going to take this in 3 parts, so it’ll be a few different posts. I’ll link to the rest at the end!

First things first, we need to set up OMS, which is the platform for Azure Automation. Let's hit up that OMS website and click that nice 'Try for free' button. Isn't Microsoft nice!!


Well, that was easy! Let’s click the big blue ‘Get Started’ tile.


While there’s tons of functionality here, we just want the Automation for now – you can see I’ve checked the ‘Automation’ box in the lower right corner.


Once that's installed, you'll want to go back to that same 'Get Started' tile and set up our data sources. Don't stop on the 'Solutions' tab this time; find the 'Connected Sources' tab and let's take a peek at the 'Attach Computers Directly' section. That's what we want to use! This will let us set up a local Hybrid Worker for automation. Download that agent (64-bit, obviously – you're not using 32-bit OSes in 2016, are you?) and save it someplace safe. Also, leave this page open; we'll need that Workspace ID and Primary Key.


When you download the agent, it’ll look like any other Microsoft Monitoring Agent. But it’s not just any agent, this is the one unique to Azure Automation! You can see I’ve given it a bit more detail in the name so I can find it later if I need to 🙂

Note that this cannot be installed on a machine that already has the ‘Microsoft Monitoring Agent’ on it – something like a box monitored by SCOM or a machine with an SCSM Agent installed (Management Server). Since they all are variations on the same ‘agent’, they must be unique on each box. I haven’t dived into SCOM monitoring of my HybridWorker, but that’ll come in a later post 🙂

Oh, and one last thing. For connectivity purposes, Microsoft just says the runbook worker needs web access to Azure! Make sure ports 80 and 443 are open to *, and you should be golden. No messy ports to deal with – I love it!

This is what I'm talking about! Let's link this to OMS.


You might have my ID, but not my key! Muwhahaha. This comes from the page we left open above. The Key is the ‘Primary Key’ from the Connected Sources tab.


Alright! That’s it. Pretty simple, right? Our hybrid worker should be setup and connected to OMS. If you head back to the OMS portal, it should show a connected Data Source now on that pretty blue tile:

Yess!! Now, in a lot of ways, that's the easy part. OMS is just a *part* of the equation. We now need to link that OMS workspace to our actual Azure subscription, so we can manage Azure Automation from our Azure Portal. Got it?

OMS + Azure = Azure Automation!

I’m assuming you already have an Azure subscription, and if not, well, it’s easy and there’s tons of posts on it 🙂 We’re going to want to login to our Azure Portal (The new Azure Portal aka ARM), and search for the Automation content pane.


I hope you clicked on that little ‘star’ icon above so it got pinned to your left hand navigation. We’re going to be using this a lot 🙂 Now, let’s open the pane and hit the ‘Add’ button, then click ‘Sign Up’ on the right hand side.


This is going to do some interesting things if you don’t have an existing Azure subscription linked to this account, but ultimately you’ll get dropped back to the automation screen if you need to do anything here. Don’t panic! You’re on track 🙂


Phew! Billing is sorted; back to Automation. Let's create a unique name and resource group for this bad boy. Think of resource groups as ways to keep resources distinguished between multiple customers. Azure is multi-tenant, so you'll see a *lot* of separation built in. Since we're just doing an example here, we need not worry too much; we just need one resource group to assign resources to.


Here’s me making a resource group! Pretty easy 🙂



Awesome! We’ve now got our Automation Account setup. This configuration thus far has been *all* on the Azure side. Don’t you remember that equation from above? OMS + Azure = Azure Automation! We’ve got Azure all setup, and OMS all setup, now let’s link them so we get access to that Hybrid Worker.


You can see we’ve clicked on my Automation Account, clicked on the ‘Hybrid Worker Groups’ tile, and have clicked on the little ‘Configure’ icon at the top. It gives us awesome instructions on how to do this, but again, since we’re dealing with both Azure and OMS, it’s still a bit confusing. Basically, we did all the hard stuff before, this is just going to establish the linkage between our Azure workspace and the OMS workspace we setup earlier. They don’t *have* to be linked, which is why they exist separately, but for Azure Automation, we need that linkage.

In the above screenshot, see that ‘Step 1’ section? Make sure you’ve clicked on the second bullet in there where it says ‘Login to OMS and deploy the Azure Automation solution’. It’ll bring us…


Deja vu! Let’s sign back in…


Oh! Cool! We’re linking our OMS subscription to our Azure one. We want this.


You can see that there’s a new tile here ‘Solution requires additional configuration’ for Automation. Let’s click that.


It wants to link our Automation Account to Azure! Yes, yes, we want this. Save it and don’t look back!


Bow chica wow wow! You can see our Automation tile now shows the little Azure Automation icon with our Azure Automation account name at the top. It also shows a runbook now, which is cool. I like runbooks.

Now, if you haven’t taken a break at this point, don’t do it now! We’re so close to success I can taste it. We’ve got Azure setup, we’ve got OMS set up, and we’ve got our Hybrid Worker setup. The last bit is to add this Hybrid Worker to a Hybrid Worker Group so we can use it. I know, I sound crazy, but think of it kinda like a resource within a resource group. It exists, it’s functional, but it needs to be assigned somewhere before we can use it.

Microsoft has a great post on adding a runbook worker to a Hybrid Worker Group. I’ve screenshotted the good stuff below :
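For reference, the registration itself boils down to a couple of PowerShell lines on the worker. This is a sketch from memory, so treat the details as assumptions: the module path varies by agent version, and older agents call the group parameter '-Name' instead of '-GroupName'. Double-check against Microsoft's post before running it.

```powershell
# Run in an elevated PowerShell session on the would-be Hybrid Worker.
# The <version> folder in this path will differ in your install.
cd 'C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\<version>\HybridRegistration'
Import-Module .\HybridRegistration.psd1

# GroupName: whatever you want the new Hybrid Worker Group to be called.
# EndPoint and Token: the URL and Primary Access Key from your Automation Account.
Add-HybridRunbookWorker -GroupName 'MyHybridGroup' -EndPoint '<automation-account-url>' -Token '<primary-access-key>'
```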


Luckily it doesn’t show my entire key in this screenshot 🙂

Here's me adding things!


Boom! The command completed successfully and if I go back to the Azure portal and refresh things…


Hybrid workers for days! You'll see I'm using my Orchestrator box for my new Hybrid Runbook Worker. It works perfectly! It's a Server 2012 R2 box with 1 core and 2GB of memory. Insane, right?!

Now, this looks good, this looks fine, but the proof is in the pudding. We need to do a quick test to make sure this is all working. I’ll write a quick runbook to write a file locally on the Hybrid Worker and make sure that comes through!

Making a new runbook is easy enough; we just click the 'Add a runbook' button at the top there. You'll see it opens up the 'Add Runbook' pane, where we can select 'Quick Create.'

Let's fill in a few things…

Note: PowerShell is not the same as PowerShell Workflow! If you don't know the difference, select 'PowerShell.' If you do, then select whichever one you need 🙂


We’ve got a blank space!!! Wasn’t that a song? Right, let’s fill it with some basic stuff to just write to a local file on the Hybrid Worker.
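Something as simple as this will do the trick (the file path is just my example):

```powershell
# Minimal test runbook: write a timestamped file locally on the Hybrid Worker
"Hello from $env:COMPUTERNAME at $(Get-Date)" | Out-File -FilePath 'C:\HybridWorkerTest.txt'
```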

That wasn’t too hard now, was it 🙂


You’ll need to hit the ‘Save’ button first. Once you do that, you’ll see it greyed out, and then you’ll need to ‘Publish’ the runbook to actually use it. It’s functionality that is pretty similar to what Orchestrator used to do actually…


Done and published! Don’t mind me, I’ve made a few other runbooks here too… those come later 😉


Let’s select the runbook, and hit that ‘Start’ button. Once we hit it, we’ll get the option to input any input parameters (there aren’t any in our case) but more importantly, specify if we want to run this just in Azure, or on a Hybrid Worker. Let’s pick the Hybrid Worker!


Hit 'OK' and you'll be returned to the Job Summary page, where we can wait for it to finish. Don't blink! It happens quick.


Yes, I know it’s a different Job number. The runbook ran too fast and I had to start a new job to take another screenshot 🙂


Beautiful! We’re in business! You’ve got Azure Automation on a Hybrid Worker in your environment now.

These Hybrid Workers are awesome in that they work:

  1. Faster than Orchestrator from trigger to execution
  2. Better than Orchestrator in their fault tolerance (Hybrid Worker Groups) and logging

A few last minute notes:

  1. These Hybrid Worker 'groups' are exactly that – they can be groups of machines, and the request can be passed around to the first available one to load-balance. In our case, we only have one, but it works just fine with a single worker in a group.
  2. If you want to use any cmdlets locally on the Hybrid Worker, make sure they are installed by you! Azure Automation won't do any of that part for you, but other tools in the MSFT toolkit will! (Think DSC 🙂 )

That’s all for now, check back in just a bit for the next two posts on making the real magic happen!

Update: Part 2 is now live!

Monitoring an SCSM Connector – Better than guessing!

First post in quite a while, but really I’ve been amassing tons of content to post. Hopefully this gets busier in the near future 🙂

I’ve been doing a *lot* of work with PowerShell automation and Service Manager. So much I daresay it’s become a second language. It’s even crept into my dreams! Seriously. I woke up the other morning with an idea for a new script and I spent the morning hours from 2AM until 9AM writing it.

As part of a user-onboarding script I've been working on lately, there was a need to ensure that the user object was synced back into SCSM before proceeding any further with the onboarding. You would *think* it would be easy to monitor for such things, but alas! The SMLets deities were not so kind.

So, without status information, I was stuck. I was able to sleep the script for 10 minutes and wait for the connector to finish, but as the customer environment grows, that setting might no longer work. Not to mention at the moment, it only takes about 5 minutes for the connector to run, so we’re wasting time!

Then I had the genius idea of monitoring event logs for the final status message in a connector workflow and just waiting for that! As it turns out, this method isn't *totally* reliable, and Get-WinEvent was being a pain, so I never actually got this to work how I had hoped. Not to mention, this is a bit of a roundabout way to monitor for things; I don't love watching event logs, and usually I consider that a last-ditch effort.

Enter the SCSM SDK! Now, I’ve spent *all* of my automation life using SMLets and the Service Manager Commandlets, so the SDK was foreign territory for me. Even worse, any examples I could find were SDK + C#, not SDK + Powershell. I tried variations of code on and off for about 3 days, and kept getting lost. By day 2, my hair was thinning and wrinkles were forming on my forehead. I thought that I had been bested.

Enter my coworkers! Cireson happens to have some amazing people on board, several of whom know the SCSM SDK intimately. Allen Anderson, a friend of mine and coworker on the Consulting team, has been doing some work with the SDK and PowerShell, and offered to help me out. Without him, this script would have not been possible – thanks Allen! This is the same guy responsible for a few of our apps, as well as some amazing custom solutions for customers. He’s a consultant and a developer? It’s like the best of both worlds.

Anyway, Allen gave me a bunch of help, and with some of his code, and some of my code, I came up with this script. It watches a connector, waits for it to sync, and then tests for an object at the end of the loop to make sure it exists in SCSM. In my case, I’m waiting for a user to sync to SCSM via AD, so you’ll see me test for a user object.

The code is ugly, and there’s lots of room for improvement, but it works! Hopefully it helps someone else who is onboarding users with SCSM 🙂
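To give you the flavor of it, here's a simplified sketch of just the 'wait for the object' half using SMLets (the username is a made-up example; the full script also watches the connector status itself via the SDK):

```powershell
# Poll SCSM until the AD connector has synced our new user in.
# 'jdoe' is a made-up example sAMAccountName.
Import-Module SMLets

$userName    = 'jdoe'
$adUserClass = Get-SCSMClass -Name 'Microsoft.AD.User$'

do {
    Start-Sleep -Seconds 30
    $user = Get-SCSMObject -Class $adUserClass -Filter "UserName -eq $userName"
} until ($user)

"Found $($user.DisplayName) in SCSM - safe to continue onboarding!"
```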

Note: you’ll see some slightly weird spacing where I commented out a separate powershell invoke. See the update below.


PS. Here’s a link to the second write-up of this for my company blog. Just a little different flair 😉

Update: Here's the code, wrapped in a PowerShell 'shell.' If you're dropping this into Orchestrator, you can use it as a literal 'drop-in' inside of a .NET activity 🙂


Find Users on Multiple Domains

It’s been too long! I miss blogging. Alas, sometimes life gets busy and customers get demanding!

It’s been a crazy few months, and I’ve got more stuff to blog than I ever thought was remotely possible, but here’s a really cool one I wanted to share!

I've been helping my coworker, Seth, design a user-onboarding script for a customer. They have 4 domains with trust relationships between them, so I can search from anywhere. That said, I couldn't cascade my search from the top down; honestly, I'm not sure why. Apparently Get-ADUser isn't that smart!

That said, I made a script to search for a user object in an array of domains, and then drop out of the loop once it finds the username. It's pretty nifty, and you can do creative bits to read in multiple domains (statically coded, CSV, XML, text, etc.).

I’ll let the code do the talking 🙂
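Here's a minimal sketch of the idea (the domain names and username are made up; the real script can read its domain list in however you like):

```powershell
# Search an array of trusted domains for a user, dropping out at the first hit.
Import-Module ActiveDirectory

$domains  = @('corp.contoso.com', 'emea.contoso.com', 'apac.contoso.com')
$userName = 'jdoe'

foreach ($domain in $domains) {
    # -Server points Get-ADUser at a DC in that specific domain
    $found = Get-ADUser -Filter "SamAccountName -eq '$userName'" -Server $domain -ErrorAction SilentlyContinue
    if ($found) {
        "Found $userName in $domain"
        break
    }
}
```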



Max script block size when passing to powershell.exe or invoke-command

This. This may be the most important piece of information I ever contribute to anyone automating Service Manager (SCSM).

If you’ve ever worked with Orchestrator, and specifically Orchestrator and SMLets, you know what a pain in the butt it is. No, seriously. Orchestrator (Also known as SCORCH or ORCH) is great in a lot of ways but miserable in others. For me, right now, it has a big flaw – working with PowerShell.

If you use the built-in “.Net Activity” to run a PowerShell script, it only runs in PowerShell version 2. No problem you say! Several people have written tutorials on how to drop from PowerShell 2 into PowerShell 3. Those examples are numerous, and are an excellent resource (See here, here, here, here, and many others).
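The pattern those tutorials describe boils down to something like this (a simplified sketch):

```powershell
# Inside a SCORCH .NET Script activity (which runs PowerShell 2), hand the real
# work to a fresh powershell.exe, which launches the newest version on the box.
$result = powershell.exe -Command {
    # Everything in here runs in PowerShell 3+ land,
    # so v3-only cmdlets and syntax are fair game.
    $PSVersionTable.PSVersion.Major
}
$result
```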

There’s even a separate Orchestrator Integration Pack that allows you to run PS scripts easier in SCORCH. Cool beans.

Here's where it gets ugly. Let's say I'm developing a script in the ISE because it's awesome. That development is happening on a box that has PowerShell 3. PowerShell 3 has several cool functions that I happen to rely on (I know in many cases there are workarounds using PS2, but that's not always the case!), and so when I paste the entire script into SCORCH and try to run it (like all those examples above tell us to do), I get errors galore.

It’s not that I’m doing anything different from the links above. Nope, not a thing. I’m just passing the entire script to a powershell.exe scriptblock (Or script block, I’ve seen it spelled both ways) or to invoke-command. That will, in theory, drop the entire script into PowerShell 3 land and the script can then proceed on its merry way.

Specifically, I get lovely errors like:

“System.Management.Automation.ApplicationFailedException: Program ‘powershell.exe’ failed to execute: The filename or extension is too long”


Such errors have been rectified by splitting up script blocks into smaller parts that can be passed on, and ensuring that only the absolutely necessary items are passed to powershell.exe or invoke-command. This then gets more complex, as you're making multiple calls and passing multiple variables in and out. This takes more time, and good luck passing any objects in or out; they get deserialized in that process and lose all access to any methods that the original object had. So now we're talking about passing strings between sessions, then calling for objects inside the new PS3 session just to recreate the object we had in our existing PS2 session, so we can run a method on it. This adds complexity, time, and lots of other not-fun things to our automation.

It turns out that both powershell.exe and invoke-command have a limit on the size of the script block. It's something that my co-workers and some of our implementation partners have run up against again and again.

The size limit we kept hitting was one that we had only guessed at, one that was unknown, one that we tackled blindly… UNTIL TODAY.

There is a 12190 byte MAX limit of any script block passed to powershell.exe or invoke-command. 

The easiest way to see how big your block is: just copy it into Notepad++ and look at the 'length' value at the bottom of the editor window. That tells you the byte size of the resulting file (and therefore text), and as long as your script is less than 12190 bytes, it will be passed along with no errors to whatever you like.
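If you'd rather stay in PowerShell than count characters in Notepad++, here's a quick sketch (the script path is just an example):

```powershell
# Check whether a script is small enough to hand to powershell.exe / invoke-command
$scriptText = Get-Content -Path 'C:\Scripts\MyRunbook.ps1' -Raw
$byteCount  = [System.Text.Encoding]::UTF8.GetBytes($scriptText).Count

if ($byteCount -le 12190) {
    "Safe to pass: $byteCount bytes"
} else {
    "Too big at $byteCount bytes - time to split it up!"
}
```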

Go over that limit and you’ll be cast into a world of uncertainty and doubt where your ability to code succinctly is called into question!

And now you know 🙂

Authorizing Computers for Software Applications – An Example

So life has been busy lately! The consulting life is quite different than being a Systems Admin, quite different…

Anyway, it just so happens that this past Friday, an e-mail went out with a request for something whipped up in Orchestrator. Since I was totally exhausted with things that I *had* to do, I decided to take a stab at it 🙂

A customer was asking for a workflow that would take input from SCSM (System Center Service Manager) via a Request Offering and allow the end user to select an application, as well as a computer object, and then add that computer object to ‘Authorized Computers.’ It’s worth noting that ‘Authorized Computers’ is a relationship that is part of Cireson Asset Management and is used for ‘authorizing’ CI’s to use a software license.

I had the initial idea roughed out, and a bit of prodding from my co-workers got me to the final solution, which I present to you below!

The Orchestrator Part:


Here’s the runbook I came up with – it’s not too difficult, but let’s walk through it.

We're taking the 'ID' property in from the RBA (Runbook Activity) in SCSM. This property, while called 'ID' in SCSM, actually ends up being the GUID of the RB object in SCSM. From there, we get the relationship between that RB and the Service Request, and then get the Service Request object itself.

From there, we have to get two related items – one that’s a ‘Windows Computer,’ and one that’s a ‘Software Asset.’

For the ‘Windows Computer’ we’re going to do a ‘Get Relationship’, look for a ‘Windows Computer’ related object and make sure any objects that we pass on are related by the ‘Is related to configuration item.’


Now we’ll do the same thing, but for a ‘Software Asset.’



Lastly, once we've gotten all we need, we'll create the relationship to the Cireson Software Asset object.


Awesome! We’re halfway there 🙂 Now to build out the templates and requests on the Service Manager side.

To start, you’ll need to make sure you’ve got an Orchestrator connector set up, and your runbooks are syncing properly. I’m going to assume you know how to do that 😉

Now, we’ll need to create two templates, a runbook template, and a service request template.

The runbook template is easy enough – just create it, fill in the basic fields, make sure you check the ‘Is Ready for Automation’ box, and link it to the Runbook in Orchestrator that you’re targeting. When we’re doing property mapping, you’re going to want to map the ID property to the one input field, as shown below.


Onto the Service Request template! This too is pretty basic – create it and fill in the basic properties as you see fit, then head over to the activities tab and link your RB template, as shown below.


Last but not least, we’ll create the Request Offering so we can hit it from the portal. Again, make a new RO, and then when we’re asking for user input, let’s put something like the following:


And those queries…



You’ll notice that we only allow the selection of one software asset, and multiple computer objects. This just keeps things a little bit cleaner, and prevents people from going nuts with the selection fields 🙂 I’m also not doing any filtering of the objects. In my lab environment, it’s not too busy, but feel free to scope those queries beyond the objects themselves if you’re returning more values than you need.

Once all that’s in place, publish your Request Offering under a Service Offering of your choosing, navigate to it via the portal (Preferably Cireson’s new beautiful Self Service Portal, but the Out Of Box SCSM Portal will work too!) and let your IT organization authorize software via a nice, easy to use interface!




Installing Orchestrator integration packs without Deployment Manager

Another day in the life of a systems engineer with limited access! While I own the SCCM and SCSM servers that I’ve been blogging about, the Orchestrator server is owned by a different division of our Technology Services group. Now, it’s not usually a problem, and honestly he does a great job with it, but today I ran into an issue.

The Orchestrator admin was taking a day off, he has no backup, and I needed to add the Runbook Designer to a new workstation (my VDI session that I mentioned in an earlier post). Cool, no problem, just install the Designer with the script I set up before. Easy. Right.

I opened the console today to actually use it, and, oh no! All my runbooks had funny looking question marks where there should have been pretty green cubes!


I looked around and noticed that I didn’t have the SCSM integration pack installed. No problem, I’ve just got to find them and install the ones I need! Oh look, they’re right here!

Except – the install process involves making sure it’s deployed via the Orchestrator Runbook Server… that only the admin has access to.

Now is when I had to get creative. I had the integration packs extracted, so I had a bunch of .oip files, but attempting to use the console to 'import' them didn't work. I tried dragging them onto the console (just in case) – nope. Tried using the 'import' function (which is usually used for runbooks) – nope. Left with no other choice, I busted out my trusty 7-Zip utility and tried to extract the .oip file and see what was inside.

Lo and behold! Extracting a .oip file gives you a few configuration-type files (a .ini, .cap, and .eula) as well as a .msi! Whoa.


Sure enough, running that .msi as an admin, on my local machine with the Runbook Designer installed on it, installed the integration packs I needed!
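For the command-line inclined, the whole dance looks roughly like this (the file names and paths are made-up examples; 7-Zip must already be installed, and the msiexec call wants an elevated prompt):

```powershell
# Extract the integration pack - the MSI is hiding inside the .oip
& 'C:\Program Files\7-Zip\7z.exe' x '.\SC2012_IP_SCSM.oip' -o'.\Extracted'

# Silently install the MSI that falls out
msiexec /i '.\Extracted\IntegrationPack.msi' /qn
```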


Awesome! I can now do what I needed to do.

Now – a few things to keep in mind:

-This is not approved by Microsoft in any way. Do this at your own risk! (That said, I don’t think it’s too risky.)

-This won’t do anything unless the same integration packs have been deployed to your Runbook Server as well! Since I’m just adding a second Runbook Designer on a new machine, pointing to the same server, we’re fine.

-You will feel way cooler having done this yourself instead of pestering your Orchestrator admin!



Silent or unattended install of Orchestrator 2012 Runbook Authoring Tool

So the other day I found myself setting up a new VDI instance for myself in order to do some work on vacation. I know, I know, why am I working on vacation? That's beside the point.

I really just needed a few more installs of the Orchestrator Runbook Authoring Tool: one for me, maybe one for a coworker, just in case. The simple way would be to just run the install from the splash screen's 'setup.exe', but I wanted to try making a silent install via the command line! Oh, if only I'd known what I was getting into.

First, Microsoft’s official documentation is here: IGNORE IT.

The 'SetupOrchestrator.exe' doesn't do anything (this is, of course, widely documented on the interwebs, even when the official documentation says otherwise), so we've got to use setup.exe instead. Look for it in the 'Setup' folder inside your Orchestrator install ISO or folder. You know… '<EXTRACTED FOLDER>/Setup/setup.exe'

I found plenty of command line options for the other parts of Orchestrator (Ok, so the official documentation is useful for this part…), but not enough to install *only* the Runbook Authoring Tool. I really wanted the minimum number of arguments to pass to ‘setup.exe’ while still setting everything I needed. No one seemed to have that information. An hour passes, countless pages of Google are searched, and I really didn’t want to brute force guess this…

Thankfully, with only a bit of trial and error, along with some help from a French website, I came up with the following:
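As best as my notes survive, it looked like this. Treat the exact switch spellings as an assumption reconstructed from memory, and verify them against your own install media:

```powershell
# Reconstructed from memory - verify these switches against your install media!
# Run from the extracted Orchestrator media's Setup folder, elevated.
.\Setup.exe /Silent /Components:RunbookDesigner /EnableErrorReporting:never /SendCEIPReports:0 /UseMicrosoftUpdate:0
```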

Notice a few things:

-No key necessary! We’re just authoring here so Microsoft is nice and doesn’t make us include a key on the command line.

-The argument to specify which components is a bit different from the rest of the System Center suite.

-It wants 3 arguments to answer the major setup questions; otherwise it happily runs along with default values. And yes, there's no 'AcceptEula:YES' necessary either!

Tada! And that left me with a wonderful silent install of only the Authoring Tool. All was right with the world.