On ‘the cloud’ – Why sometimes it just doesn’t add up

I got back from TechEd last week and had an absolute blast! The Cireson team and I set up an amazing booth, had some excellent presentations from some of our partners, and we, as a company, got to do some team building for the first time ever! Nothing like being able to shoot your coworkers with paintballs to build team spirit 🙂 I think I only cursed out my boss, oh, 100 times or so. Thankfully, I’m still employed.

Of course, Microsoft’s big theme this year was ‘Cloud.’ It was Azure this, or Office 365 that, or “look what you can do with our cloud” again and again and again. Microsoft’s ‘hard’ tools, including a good chunk of the System Center suite, were left to play second fiddle to whatever was being offered via a cloud subscription model.

Now, I’m not outright opposed to the cloud at all. I think there are some awesome uses for it, but I had some really interesting conversations with a few people, and did some research of my own, that really put the whole cloud vision in perspective – and I think it’s perspective worth sharing.

One of our developers lives in New Zealand and loves it. When he wasn’t being the butt of one of our jokes (“Hey, who’s that guy with the Australian accent over there!”), he and I had some awesome conversations about food, life, and of course, technology.

Somehow, the topic of internet access came up, and I expressed my love for the 100 Mbps down / 5 Mbps up connection that I get here in Baltimore for a decent price. My Kiwi friend’s jaw nearly dropped. His connection – and keep in mind he’s one of our developers – is 500 Kbps down, who-knows-what up, and is capped at 5GB per month. Sure, I get it, he’s on an island, but New Zealand is far from isolated compared to some of the markets Microsoft wants to get into. If you’re using a cloud solution for resources, not only will your connection be spotty on a 500 Kbps link, but you’re apt to hit that 5GB bandwidth cap really quickly! I’m sure businesses in NZ have better connections than a lowly developer, but putting all your eggs in a ‘cloud’ basket seems like a rather expensive, and potentially slow or limited, solution to your IT needs. It’s especially terrifying as we see Microsoft move more of their offerings to cloud only (Power BI, anyone?) with no on-premises solution.

Let’s head to another island – this time, one that’s rather well connected: Great Britain. Another one of my co-workers, the ever-so-talented Shaun Laughton, happens to live on this very island! He joined in on this conversation and lamented his own internet situation – he can see his local telecom box from his house, and yet his connection is moderate at best, expensive, and saddled with a data cap. If he lived in a major city, say London (where Google happens to have a massive campus with super-fast internet), then internet access would be cheap, fast, and without a cap.

This scenario isn’t uncommon, and it puts cloud solutions in a really interesting situation. For people in very well connected areas (read: urban, first world), cloud makes some sense. Why own the infrastructure when someone else can do it for you, and you just lease the time and resources you need? In this same instance, Microsoft’s own direction makes a lot of sense, as they are positioned to provide the *best* platform for their products in the cloud and can really benefit from the subscription licensing model – hopefully attracting smaller customers with more reasonable monthly or annual pricing.

As soon as you leave one of those major cities, however, this plan breaks down. Who is going to depend on a cloud service when it’s going to eat at their monthly data cap and, even then, not be nearly as fast as a local server or a local instance of an application would be? For these customers, not only does the cloud not make sense from a cost perspective (relying on it would require multiple redundant, unlimited internet lines, likely costing a fortune themselves), but their users are far less likely to have reasonable access to high-bandwidth connections from outside the office, thereby breaking down the ‘work anywhere’ principle that cloud relies on.

One final example that I thought was really interesting comes from my own situation. I currently reside in good ol’ Baltimore, MD, conveniently positioned in the densely populated ‘Northeast Corridor’ of the USA that spans from Boston, MA down to Washington, D.C. This area boasts the densest concentration of transportation infrastructure in the States, both physical and telecom. I have the luxury of fast internet at a reasonable price, and if I really wanted to, I could hop on over to a university library and get on the Internet2 bandwagon for some really insane speeds.

Recently, I’ve been having issues with my Data Warehouse for SCSM remaining intact on my laptop lab. I’m sure it’s a case of too many reboots and restarts, too many ups, then downs, and so it’s had me looking at setting up a more permanent lab somewhere that isn’t on my laptop. I started looking at cloud solutions because hey, why not! Everyone’s doing it, it’s got to be cost effective… right?

Azure (US East, USD):

1 × Medium (A2) Basic instance: $110/month ≈ $1,320/year

I’m going to cheap out, but let’s say I go with a Medium (A2) instance for my SCSM server. On that single server, I could toss the SCSM Workflow Server and a SQL Standard instance. It would be stupid to do, since an A2 instance only gets me 2 cores and 3.5 GB of RAM, but I’m trying to keep costs low. Right. So that’s just one server – let’s scale up a bit.

1 × Medium (A2) Basic instance: $110/month ≈ $1,320/year

1 × Large (A3) Basic instance: $219/month ≈ $2,628/year

Alright, now I’ve got a large instance for my SCSM DW server and the medium one for my Workflow server. I’m looking at almost $4,000 USD a year to run two servers. This doesn’t even start to include an SCCM server, a domain controller (though I think Azure has other services in play for that), or any client machines or servers for hosting demo web portals. Wowza.

Amazon (US East, USD):

1 × m3.large instance: $197/month ≈ $2,364/year

Now this isn’t exactly apples to apples, since this m3.large instance has 7.5 GB of RAM and can flex up to 6.5 ECUs (EC2 Compute Units), but humor me here. Double that for two servers, like above, and we’re at about $4,700 a year to run two instances – again, with no additional machines for clients or other servers. Oh yeah, and if you don’t have licenses for the Microsoft software, good luck (thankfully I have MSDN, phew!).

So, imagine I’m not even paying for my internet access here at home, it’s still a lot of money to use cloud services for a small guy like me, running an instance 24/7. What’s my alternative? Well, it is, of course, my favorite – do it yourself!

I priced out a rough machine that would do what I wanted from a hardware perspective:

1 × Supermicro MBD-H8DCL-6F-O motherboard – $360

1 × AMD Opteron 4386, 8 cores @ 3.1 GHz – $350

64GB Kingston RAM – $670

1 × 1TB Samsung SSD – $470

Case, power supply, and other things – $250

Total hardware cost: $2,100

Now let’s factor in power. I tried to spec this build with pretty low power requirements, but I’m going to estimate on the high end.

600 watts × 24 hours/day × 30 days ≈ 432 kWh/month; 432 kWh × $0.12/kWh ≈ $52/month ≈ $630/year

So for a total cost of $2,730 in the first year, and $630 every year thereafter, I can have a server that can run an entire lab of VMs (I’ve got 13 running on my laptop right now with 4 cores and 32GB of RAM – this server could double that easily). There’s no point in me going to the cloud at all.

The cloud may be the future, but the future isn’t now – at least, not for everyone. Thankfully, Microsoft hasn’t totally killed off their on-premises solutions yet. Let’s just hope they don’t get around to doing that for a long, long time.

Authorizing Computers for Software Applications – An Example

So life has been busy lately! The consulting life is quite different from being a Systems Admin – quite different…

Anyway, it just so happens that this past Friday, an e-mail went out with a request for something whipped up in Orchestrator. Since I was totally exhausted with things that I *had* to do, I decided to take a stab at it 🙂

A customer was asking for a workflow that would take input from SCSM (System Center Service Manager) via a Request Offering, allow the end user to select an application as well as a computer object, and then add that computer object to ‘Authorized Computers.’ It’s worth noting that ‘Authorized Computers’ is a relationship that is part of Cireson Asset Management and is used for ‘authorizing’ CIs to use a software license.

I had the initial idea roughed out, and a bit of prodding from my co-workers got me to the final solution, which I present to you below!

The Orchestrator Part:


Here’s the runbook I came up with – it’s not too difficult, but let’s walk through it.

We’re taking the ‘ID’ property in from the runbook automation activity (RBA) in SCSM. This property, while called ‘ID’ in SCSM, actually ends up being the GUID of the runbook activity object in SCSM. From there, we’re getting the relationship between that runbook activity and the Service Request, and then getting the Service Request object itself.

From there, we have to get two related items – one that’s a ‘Windows Computer,’ and one that’s a ‘Software Asset.’

For the ‘Windows Computer’ we’re going to do a ‘Get Relationship’, look for a ‘Windows Computer’ related object and make sure any objects that we pass on are related by the ‘Is related to configuration item.’


Now we’ll do the same thing, but for a ‘Software Asset.’



Then, lastly, once we’ve gotten all we need, then we’ll create the relationship to the Cireson Software Asset object.


Awesome! We’re halfway there 🙂 Now to build out the templates and requests on the Service Manager side.

To start, you’ll need to make sure you’ve got an Orchestrator connector set up, and your runbooks are syncing properly. I’m going to assume you know how to do that 😉

Now, we’ll need to create two templates, a runbook template, and a service request template.

The runbook template is easy enough – just create it, fill in the basic fields, make sure you check the ‘Is Ready for Automation’ box, and link it to the Runbook in Orchestrator that you’re targeting. When we’re doing property mapping, you’re going to want to map the ID property to the one input field, as shown below.


Onto the Service Request template! This too is pretty basic – create it and fill in the basic properties as you see fit, then head over to the activities tab and link your RB template, as shown below.


Last but not least, we’ll create the Request Offering so we can hit it from the portal. Again, make a new RO, and then when we’re asking for user input, let’s put something like the following:


And those queries…



You’ll notice that we only allow the selection of one software asset, and multiple computer objects. This just keeps things a little bit cleaner, and prevents people from going nuts with the selection fields 🙂 I’m also not doing any filtering of the objects. In my lab environment, it’s not too busy, but feel free to scope those queries beyond the objects themselves if you’re returning more values than you need.

Once all that’s in place, publish your Request Offering under a Service Offering of your choosing, navigate to it via the portal (Preferably Cireson’s new beautiful Self Service Portal, but the Out Of Box SCSM Portal will work too!) and let your IT organization authorize software via a nice, easy to use interface!




MDT 2012 Error – Invalid credentials: A specified logon session does not exist. It may already have been terminated (80070520)

Mondays! I swear, all the fun stuff happens on Mondays…

Well, I’ve officially put in my two weeks notice at my current job, and am heading to become a full-time System Center consultant! I’m super excited. That said, it has created a long list of “to-do’s” before I leave my current position, which is what I am now trying to dig myself out from under.

One of the tasks I had was getting a Windows 8.1 Image up on our MDT server. No one was asking for it, but that’s often how things go in Higher Education – no one will ask for it forever, then one day decide they need it tomorrow. To save my coworkers the trouble, I figured I’d just do it now.

Anyway, after capturing the image and setting up the new deployment task sequence, I did my first trial deployment. It worked just fine from the boot CD, but not when running ‘litetouch.vbs’ from the network share. I was perplexed.

Specifically, I was getting this error:

Invalid credentials: A specified logon session does not exist. It may already have been terminated ( 80070520 )

I double-checked my connection credentials, and they were, in fact, correct. The issue ended up being that, similar to the problem I posted about earlier on this blog, the permissions on my Deployment Share had somehow gotten messed up again.

I removed, saved, and re-added all the necessary permissions on our deployment share, and all is now working again! A simple fix to a simple error.

Writing back to a file share that isn’t the distribution point – SCCM 2012

Another day another good fight to fight! Today was an epic battle between myself, my coworkers, and our new SCCM 2012 environment.

We do things a bit oddly here – we never used deployment shares in our old system, and so we’re in a ‘transition phase’ between doing things our way and doing things the right way. In the interest of getting things done quickly, we’ve got a number of scripts that deploy software in creative, but messy, ways.

For example, the following!

1. Script starts
2. Script copies files locally so a network interruption doesn’t mess with things
3. Script caches files back to the file server since they are shared by each instance of the installer that runs
4. Script cleans up and exits
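For illustration, one of those scripts might look roughly like this batch sketch (the share and application names are made up – our real scripts are messier):

```batch
:: Hypothetical example of the pattern above - \\fileserver\apps\BigApp
:: and C:\Temp\BigApp are made-up names.

:: 1-2. Copy the installer files locally so a network blip can't break the install
xcopy /e /i /y "\\fileserver\apps\BigApp" "C:\Temp\BigApp"
"C:\Temp\BigApp\setup.exe" /quiet

:: 3. Cache shared state back to the file server - this is the write-back step
::    that needs the share to be writable by whatever account SCCM runs this as
copy /y "C:\Temp\BigApp\shared-state.dat" "\\fileserver\apps\BigApp\cache\"

:: 4. Clean up and exit
rmdir /s /q "C:\Temp\BigApp"
```

Step 3 is the one that blows up under SCCM 2012, as we’ll see below.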

This works really well on our SCCM 2007 server, but has been problematic with our SCCM 2012 R2 instance. The other techs that have packages in our environment aren’t too keen on changing anything more than they have to, so the responsibility is on me to figure out how to make it work.

My first thought was the Network Access account in SCCM 2012. It’s moved around a bit, but a friendly google search can help you locate it! I used the following as a nice easy pointer to the right spot in the admin console: http://www.jamesbannanit.com/2011/04/configure-the-network-access-account-in-sccm-2012/

I added the same account as our ‘Client Push’ account, as that’s already an admin on all of our boxes, and has access to the share that we wanted to write back to. I pushed a program. I waited. I shed a few tears. No luck.

I have a simple program that just runs ‘whoami’ and prints the output to the C: drive, to see who is writing what. As it was in 2007, SCCM 2012 runs scripts spawned from SCCM as ‘NT Authority\System’. Since that is a local account (with admin rights, of course), it can’t write back to or even read from network shares. Ideally, then, our hope is that SCCM 2012 would use the Network Access account that we specified earlier!

Nope. SCCM only uses that account for machines that aren’t domain-joined (i.e., workgroup machines or machines in another domain). It does us no good.

Then, after a few hours of staring into the endless pixels of my monitors, I tried what seemed silly. I added ‘Domain Computers’ to our share and gave them ‘Read/Write’ access. Why? Why all of them? Well, it’s not so crazy…


Since ‘NT Authority\System’ can’t read or write back to the network, SCCM, by default, uses the machine object in an attempt to connect. I *thought* that it would move to the Network Access account, specified in SCCM if the machine object didn’t work, but that’s not the case for domain-joined machines. This means that you need *every* machine object added to the share that you want to write back to, which seems daunting, but is actually quite easy thanks to the existence of a ‘Domain Computers’ account by default in AD.

Now, I hear the cries of everyone, everywhere: “Security, security! It’s horribly insecure!” It’s actually not as bad as I thought. I’ve found it very difficult (read: impossible for my feeble mind, though crackers might be able to do it) to drop down to system-level authentication via any means that are easily user-accessible. This keeps any ol’ user from authenticating to the share and being able to write, while allowing the SCCM client to drop in as the domain-authenticated machine object and write to its heart’s content!

Forcing SCCM to use the Network Access account would be nicer, but I can’t figure out how to use those credentials from within a batch file. And yes, I know this all could be way easier with PowerShell by storing an AD account’s credentials and using them to run things, but I’m just trying to make our few hundred batch files run happily in the shortest time possible 🙂

Java 7u51 – System Wide Exception Site List

I received a visit from a co-worker the other morning informing me that a Java update had broken his software. He wasn’t too upset, which was nice, but we needed to figure out what went wrong.

As it turns out, Java 7u51 introduced some new security features (yay!) but unless programs using Java applets had applied security certificates to their applications, Java would flag them as potentially malicious and not run them (not yay!).

The workaround isn’t hard: go into the ‘Java’ control panel area, head over to the ‘Security’ tab, and add the websites you need exempted to the ‘Exception Site List,’ and your applications should be running once again. The bad news is that this is only a per-user setting. We needed a way to do this on a system-wide basis, and then be able to deploy it to our organization via SCCM.

As it turns out, there *is* a way to do it, it’s just a bit complex. Oracle has official documentation in a few places but it’s a bit fragmented and there’s not an easy path from these documents to an actual working solution:

Exception Site List Documentation
Java Deployment Documentation

But, to save you all the time and trouble, I’m going to post exactly what you need to do to make it all work!

First, you’re going to need to create a file called ‘deployment.config’ – add the following lines:
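The file contents didn’t survive in this copy of the post, so here’s my reconstruction based on Oracle’s deployment documentation – the path matches the folder used later in this walkthrough, and the odd-looking escaping is deliberate:

```
deployment.system.config=file\:\\C\:\\Windows\\Sun\\Java\\Deployment\\deployment.properties
deployment.system.config.mandatory=true
```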

Cool. Sweet. Progress. This is just telling Java that it *must* read the system-wide config file you’re specifying, and then giving it the path to said config file. Yes, the double slashes and the slash in front of the ‘C’ are necessary. Don’t ask me why, but it works as shown above.

Now you’re going to need to make a file called ‘deployment.properties’ – add the following to it:
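Again, the original line is missing from this copy; per Oracle’s docs, the key entry would look something like this (same folder and escaping as before – and yes, the property really is named deployment.user.security.exception.sites even in the system-wide file):

```
deployment.user.security.exception.sites=C\:\\Windows\\Sun\\Java\\Deployment\\exception.sites
```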

Same idea as above – you’re telling Java the path to the security exception site list. I’m putting all of this in the same folder because we want it to be system-readable but not writable. That makes it so users can’t change the site list.

Last, but not least, you’ll need to create the ‘exception.sites’ file. Once you do so, just add whatever site(s) you need, one per line. For example:
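The example list didn’t make it into this copy of the post, but it’s just plain URLs, one per line – something like this (hypothetical sites):

```
http://legacyapp.example.edu
https://timesheet.example.edu/portal
```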

Now, dump all that in the “%systemroot%\Sun\Java\Deployment\” folder (You may have to create this folder, it doesn’t exist by default) and head back to that Java control panel area. Head over to the ‘Security’ tab and you’ll see that your site or sites that you listed show up! It’s like magic! In case you were wondering, Java reads that config file every time it loads. This includes in the browser or via the Control Panel, so there’s no need to reboot or do anything crazy, as long as you’re not trying to adjust an already spawned Java session.

All you’ve got to do now is write up a little batch file to make that folder and dump those files in the right place on each machine (SCCM!), and you’re all set! Remember, if you allow users to write to your exception.sites file via Windows permissions, then they can edit the list; otherwise it’s read-only (we went the read-only route to give us complete control). Equilibrium has now been restored to your Java-tainted environment 🙂
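That little batch file might look something like this sketch (it assumes the three files sit next to the script):

```batch
:: Create the system-wide Java deployment folder and drop the files in.
:: Assumes deployment.config, deployment.properties, and exception.sites
:: sit in the same directory as this script.
if not exist "%systemroot%\Sun\Java\Deployment" mkdir "%systemroot%\Sun\Java\Deployment"
copy /y "%~dp0deployment.config" "%systemroot%\Sun\Java\Deployment\"
copy /y "%~dp0deployment.properties" "%systemroot%\Sun\Java\Deployment\"
copy /y "%~dp0exception.sites" "%systemroot%\Sun\Java\Deployment\"
```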


If you’re wondering about the Mac side of things, it looks like someone else beat me to it! Head on over there and check it out!



Lync 2013 Silent or Unattended Install

Ah Mondays. The best day of the week! It’s the day that everyone comes to you with some new task or thing to do that you don’t ever, ever have time for. And yes, I know that I’m posting this on a Tuesday. Remember that whole thing about not having time?

So a co-worker came to me asking for my scripting skills in creating a nice silent install package for Lync 2013. The official Lync 2013 package itself was pretty straightforward. Just like any other Office product, the steps were as follows:

– Download the Office product

– Get that thing extracted so you’ve got a nice directory structure like the following, underneath the folder referring to your platform architecture of choice. For example, the following resides underneath the “x86” directory once I’ve extracted the Lync client installer properly.

– Now, open up a command line, navigate to the folder, and run setup.exe /admin. This opens up a customization window where you can tell the installer all kinds of options. The ones we’re interested in are the following.

–  Install Location and Organization Name: I changed the Organization Name as desired

– Licensing and user interface: This is the important one! Enter a product key (if necessary) or leave the KMS option checked by default, as that’s likely what you, as a corporate user, are using. You’ll then want to check the ‘I accept the terms in the License Agreement’ box and change the display level to ‘None.’ The ‘Suppress modal’ box should then be checked, and the other two options (completion notice and no cancel) should be unchecked. In fact, you can make it look like this:

licensing– Add registry entries: This is also pretty important. When the user runs Lync for the first time they’ll get a screen that asks them to configure Windows Update for the Lync application. It’s fine and dandy when it’s your own box, but not so awesome when you’re trying to deploy this to bunches of people. To eliminate the issue, add the following registry keys and rest easy (Note: These are REG_DWORD values)!

It’s worth noting that that isn’t a typo above – the key really is named ‘ShownFirstRunOptin.’ Who comes up with these things, I’ll never know.

If you don’t put these in, your users get the following popup. Not good.


Boom. Save the admin file as a .msp, add it to the ‘Updates’ folder in your Lync install files directory, and run the following: setup.exe /adminfile "updates\<adminfile>"

Example: setup.exe /adminfile "updates\Lync2013-Rev1.msp"

So after a bit of waiting, you’ve got Lync 2013 installed silently. Or, wait… what was that popup that showed up for just about a second when the installer started to run? It loudly exclaimed “Please wait while Setup prepares the necessary files.” You know, this screen:

It is, as far as I can tell, impossible to turn off. If you have a user manually run your script, this thing will display. End of story. Ugh. The only good news? It doesn’t show up when you deploy via SCCM! Happy days.

So, now we’re all done! Lync 2013 for everyone! I thought my co-worker would be super excited now that I got it all done. Right? Wrong! I now had to package the Lync 2013 VDI plugin. Goodie.

If you’re unfamiliar with what this thing does, you can check out a nifty PDF that details it right here. So when I was provided the installation files for this bad boy I got the following:

Oh. No. There are few things I hate more in life than someone giving me a random installer, especially a .exe, and telling me to package it. There could be anything inside! Anything! I started by trying to pass it the parameters that it seemed to ask for: ‘/silent /passive /norestart’. This, of course, did absolutely nothing.

As it turns out, the VDI installer is similar to the Lync Basic installer. It’s an executable that has the install files inside, which actually work like a normal Office install. Why they decided to distribute them in a .exe, I have no idea.

To extract, run lyncvdi32.exe /extract:<PATH TO EXTRACT> and watch the magic happen (I usually use lyncvdi32.exe /extract:.\Lync2013\ to extract to a folder in the current directory)! It will extract the files, and you’ll see a file tree magically appear, just like what you saw with the original Lync 2013 setup files. If you happen to try to extract the .exe with another utility like 7Zip, the files won’t come out right and the admin wizard won’t run properly. You’ve been warned!

Then, just like before, run setup.exe /admin and configure the admin file like above. You’ll only need to touch the Organization Name and Licensing screens – no registry editing needed – and then apply that admin file to the Lync 2013 VDI setup.exe the same way.

Voila! Now you’ve got silent installs for the Lync 2013 full client and the Lync 2013 VDI plugin. Enjoy!

Installing Orchestrator integration packs without Deployment Manager

Another day in the life of a systems engineer with limited access! While I own the SCCM and SCSM servers that I’ve been blogging about, the Orchestrator server is owned by a different division of our Technology Services group. Now, that’s not usually a problem – honestly, the admin does a great job with it – but today I ran into an issue.

The Orchestrator admin was taking a day off, he has no backup, and I needed to add the Runbook Designer to a new workstation (my VDI session that I mentioned in an earlier post). Cool, no problem, just install the Designer with the script I set up before. Easy. Right.

I opened the console today to actually use it, and, oh no! All my runbooks had funny looking question marks where there should have been pretty green cubes!


I looked around and noticed that I didn’t have the SCSM integration pack installed. No problem, I’ve just got to find them and install the ones I need! Oh look, they’re right here!


Except – the install process involves making sure it’s deployed via the Orchestrator Runbook Server… that only the admin has access to.

Now is when I had to get creative. I had the integration packs extracted, so I had a bunch of .oip files, but attempting to use the console to ‘import’ them didn’t work. I tried dragging them onto the console (just in case) – nope. Tried using the ‘import’ function (which is usually used for runbooks) – nope. Left with no other choice, I busted out my trusty 7Zip utility to extract one of the .oip files and see what was inside.

Lo and behold! Extracting a .oip file gives you a few configuration-type files (a .ini, a .cap, and a .eula) as well as a .msi! Woah.


Sure enough, running that .msi as an admin, on my local machine with the Runbook Designer installed on it, installed the integration packs I needed!


Awesome! I can now do what I needed to do.

Now – a few things to keep in mind:

-This is not approved by Microsoft in any way. Do this at your own risk! (That said, I don’t think it’s too risky.)

-This won’t do anything unless the same integration packs have been deployed to your Runbook Server as well! Since I’m just adding a second Runbook Designer on a new machine, pointing to the same server, we’re fine.

-You will feel way cooler that you were able to do this and not pester your Orchestrator admin!



MDT 2012 Error – FAILURE: 8000

So we’ve got a few deployment servers here at the office, all of which I admin. Our production one is a bit old, so I set up a new MDT 2012 server to test things out on. It just so happens I do all the cool stuff on the test server, so sometimes our test becomes our production; welcome to IT in Higher Education!

Anyway, my co-workers were imaging from my test server one day when I got the complaint that imaging was no longer working. I was confused. I hadn’t touched anything lately, but I went to take a look.

The deployment would all of a sudden just stop. No errors (visibly), no screens, I was just left with the background of an MDT Windows PE session. Sigh.

I opened up ztigather.log with CMTrace (if you haven’t added this to your WinPE boot disk, you’re missing out!) and found the following error: FAILURE: 8000: Running wscript.exe "X:\Deploy\Scripts\ZTIGather.wsf" /nolocalonly1

I wasn’t quite sure what in the world this meant, so I googled. Not much luck. I found a few notes about trying to run the ztigather.wsf script manually by copying the unattend.xml from the deployment share to the locally created ramdrive, so I attempted to do so.

Looking through the logs I could see that it mounted my deployment share properly at ‘Z:\’ so I didn’t suspect anything wrong at first, but when I tried to path out to actually get my unattend.xml file… I couldn’t!

As it turns out, the permissions on my Deployment Share had changed, and the task sequence could no longer access the files on that share. That’s why nothing was showing up! It kept waiting to get files from the deployment share and never got anything back.

I remoted into my MDT server, went to my deployment share, made sure that permissions were set back to where they needed to be (Honestly, since it was easier and this is just my test server, I set it to allow ‘Everyone’ read permissions for a short time.) and all was well! We could once again deploy like MDT intended.

Running the Configuration Manager Control Panel applet from the command line

…or ‘How I learned to stop worrying when Configuration Manager didn’t show up in the Control Panel!’

So I’ve been playing with Windows Thin PC lately at the office. It’s kinda awesome.

It’s a 32-bit-only OS, but that’s just fine! It’s meant to be an ultra-thin base for ‘kiosk’-type deployments. It’s really not meant to have much installed on top of it either, so it’s missing plenty of libraries and supporting pieces of the OS in an effort to remain small. The installed footprint is something close to 1GB, and RAM usage is beautifully small. That said, sometimes things don’t work quite right due to the missing libs.

I’ve been using Thin PC because my office refuses to use Citrix for some reason (Past history – I swear, working in Higher Education people have such vivid memories it’s like 10 years ago was yesterday…) and there’s no one who is willing or able to try an App-V kinda thing. I’d love to try it in my free time, but I lost that about a year ago. Oh well.

In lieu of that, since our SCCM setup works so darn well, and I’m also the master of Group Policies, I end up making Thin PC based kiosks with single applications installed on them. Any additional updates or patching are handled via WSUS (since I don’t install anything 3rd party on these) but I also install SCCM just in case we need to further manage them down the line, and so we get good reporting on them.

The issue I was having is that after running the SCCM client install, I wasn’t seeing the ‘Configuration Manager’ icon show up in the Control Panel. I saw ‘CCMExec.exe’ running in Task Manager, so I was pretty confident all was well, but I really, really wanted to see that applet.

Thankfully, you can launch it from the ‘Run’ prompt!
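The command itself appears to have been lost from this copy of the post; the one I know of for launching the applet directly (and my best guess at what was shown here) calls the applet’s .cpl by name:

```batch
:: Launches the Configuration Manager Properties applet directly -
:: handy when the icon doesn't appear in the Control Panel
control smscfgrc
```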

Woah! Check that out! It launches the normal Configuration Manager Properties page! It’s worth noting that this appears to work on multiple versions of SCCM and Windows (SCCM 2007 and 2012, Windows XP and higher), so it may be of use for various configurations beyond this one.


Silent or unattended install of Orchestrator 2012 Runbook Authoring Tool

So the other day I found myself setting up a new VDI instance in order to do some work on vacation. I know, I know – why am I working on vacation? That’s beside the point.

I really just needed a few more installs of the Orchestrator Runbook Authoring Tool, one for me, maybe one for a coworker, just in case. The simple way would be to just run the install from the splash screen at ‘setup.exe’ but I wanted to try to make a silent install via the command line! Oh, if only I knew what I was getting into.

First, Microsoft’s official documentation is here: http://technet.microsoft.com/en-us/library/hh674378.aspx IGNORE IT.

The ‘SetupOrchestrator.exe’ doesn’t do anything (this is, of course, widely documented on the interwebs, even though the official documentation says otherwise), so we’ve got to use setup.exe instead. Look for it in the ‘Setup’ folder inside your Orchestrator install ISO or folder. You know… ‘<EXTRACTED FOLDER>/Setup/setup.exe’

I found plenty of command line options for the other parts of Orchestrator (Ok, so the official documentation is useful for this part…), but not enough to install *only* the Runbook Authoring Tool. I really wanted the minimum number of arguments to pass to ‘setup.exe’ while still setting everything I needed. No one seemed to have that information. An hour passes, countless pages of Google are searched, and I really didn’t want to brute force guess this…

Thankfully, with only a bit of trial and error, along with some help from a French website ( http://blogdeployment.fr/sccm-2012/sccm-2012-distribuer-le-runbook-designer-pour-system-center-2012-orchestrator ), I came up with the following:

Notice a few things:

-No key necessary! We’re just authoring here so Microsoft is nice and doesn’t make us include a key on the command line.

-The argument to specify which components is a bit different than the rest of the System Center suite.

-It wants 3 arguments to answer the major setup questions; otherwise it happily runs along with default values. And yes, there’s no ‘AcceptEula:YES’ necessary either!

Tada! And that left me with a wonderful silent install of only the Authoring Tool. All was right with the world.