I just got the news that I have the privilege (and that is genuinely how I see it) to speak once again at System Center Universe in Dallas on the 19th of January 2016.
I consider this a huge privilege as I have a special relationship with this particular event. This is in fact where my wild journey through the System Center universe as a speaker started. Two years ago SCU held a fierce battle to determine who would become the new SCU_Jedi and win a session at the event… I was lucky enough to pull it off, and suddenly I was presenting among (what I consider) the big shots of the System Center world…
Most of them are still presenting today, and if you look at the list of speakers it is quite impressive.
A first (but not complete) list: http://www.systemcenteruniverse.com/presenters.htm
As you can see, all the usual suspects are there!
For the full agenda please check here: http://www.systemcenteruniverse.com/agenda.htm
This year there is again a two-track approach, so you have the ability to cross over and see a session outside your comfort zone and learn some really cool new stuff!
My session will be about the vast power of OMS and how it can create new insights into your environment. A session not to be missed, if you ask me.
Too bad… You are missing out…
Not really! Because SCU is (I think) the only event that offers free streaming over the web. There are even a lot of viewing parties organized around the world, so you can easily follow the event from a location near you!
Well, that's very simple as well! If you have the ability to fly in, you get the chance to mingle with peers and talk to the speakers. There are no designated areas for speakers whatsoever, so everyone is really accessible for a chat or to answer your questions…
A full day of training on different subjects for only $150: that's a bargain if you ask me!
This is one of the events that really embraces social media (Twitter, Facebook,…) to reach out to attendees onsite but also across the world, and to engage with them during and after the event.
Make sure you follow @scu2016 and #scu2016 on Twitter for the latest updates and feeds!
This is one thing I really like about the new strategy of Microsoft: all platforms (I know it's not the official statement, but still).
The OMS app was already available on the Windows Phone platform (in preview), and quite frankly it makes sense to develop for your own native platform first.
But today Microsoft announced the availability of the OMS app across all the different platforms (iOS, Android and Windows Phone).
The install is crystal clear, just as you are used to from the store.
More info here: http://blogs.technet.com/b/momteam/archive/2015/10/21/log-analytics-on-the-move.aspx
Direct link: http://www.microsoft.com/en-us/server-cloud/operations-management-suite/mobile-apps.aspx
NOTE: Fellow MVP Cameron Fuller has a blog post about the experience on an iPad here: http://blogs.catapultsystems.com/cfuller/archive/2015/10/21/the-microsoft-oms-experience-on-an-ipad/
A couple of screenshots of the possibilities and look and feel on iPhone:
First start of the app (really like the look and feel):
Login screen looks very familiar:
Auto switch between corporate or personal
Signed in, and it detected my workspaces; it's indeed possible to switch between the different workspaces:
You have 3 options:
Starts into your dashboard:
Overview:
Also possible in landscape 🙂
Searches:
The Settings tab can be reached by tapping the 3 red dots at the top of the screen.
The app is intended as an extension/dashboard for your OMS workspace. It's not possible to add or delete servers from your workspace, nor to add solutions. This is not a drawback in my opinion, as on the go you only want to see what is happening in your workspace. This is a first version of course, but I had no issues installing and connecting it. I will keep an eye on the data usage on my cell phone plan though, just to see how the app affects my mobile data consumption and, of course, my battery life.
Let's face it: a good program is like a car. You need to maintain it properly to keep it in running condition. Well, this is also the case with SCOM. I visit a lot of clients, and one of the main questions I get is how to make sure SCOM stays healthy and running.
Well, there are some indicators in SCOM itself suggesting that there are issues with the installation, but unfortunately they are easily missed or overlooked.
So this is where the awesome SCOMunity steps in!
This post should become your one-stop location for some of the leading community management packs you'll need to keep your SCOM environment going, or at least to very easily pinpoint where there are (potential) issues.
These are management packs I actually install at almost every client I visit:
Tao has been an active member of the SCOMunity for quite some time now, and his Self Maintenance management pack is already at version 2.4.0. This management pack features a lot of tasks and checks that every SCOM admin should perform, and it's always cool to have a management pack doing them for you. Before I used Tao's management pack I had a standard PowerShell toolkit to automate some of these tasks, but now, if the customer approves (remember it's still an unsealed MP, so sometimes you need the customer's approval), I load up the management pack and configure it. Tao really went all in and even included a PDF to assist you in installing and configuring the MP.
Image (Tao Yang)
Some of the tasks I like the most (this is not a full list but just to highlight the things I personally find handy in there):
This is an invaluable management pack for every SCOM admin out there, whether you are visiting a lot of clients and need a clear view of the health of each management group, or you manage only one environment. It will free up a lot of your time and also reduce the chance of problems because there are early warning systems built in. More info here:
http://blog.tyang.org/2014/06/30/opsmgr-2012-self-maintenance-management-pack-2-4-0-0/
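By the way, if you are curious what that old PowerShell toolkit of mine looked like: most of it boiled down to very small routines like the one below. This is just a minimal sketch (it assumes the OperationsManager module, and the management server name is a placeholder of my own choosing), not part of Tao's MP, but it gives you an idea of the housekeeping the MP now handles for you.

[powershell]
# Minimal housekeeping sketch - assumes the OperationsManager module; 'scom-ms01' is a placeholder
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'scom-ms01'

# Clean up class instances left behind after their discovery was disabled through an override
Remove-SCOMDisabledClassInstance
[/powershell]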
Another hard thing to do is give a small report to the SCOM admin or supervisor showing how SCOM is actually doing and whether all is well in your SCOM environment.
Just recently Oskar Landman and Pete Zerger updated their SCOM Health Check reports to give you a proper status at a glance.
This set of reports will give you an even more in-depth view of how your environment is doing and which key points to work on to further enhance it. One of the key benefits is that you can check in detail whether everything flowing into your databases is valid and not excessive. This is really helpful when you start noise cancelling to focus on the big consumers of space and CPU time in your SQL databases.
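If you want a quick first impression of that noise while the reports are being set up, a rough sketch like the one below already tells you a lot. It is just the standard SCOM cmdlets (nothing to do with the report pack itself), and on a big environment it can take a while to run.

[powershell]
# Rough sketch: top 10 alert-producing rules/monitors of the last 7 days (OperationsManager module assumed)
# Note: Get-SCOMAlert without filters can be slow in large environments
Import-Module OperationsManager
Get-SCOMAlert | Where-Object { $_.TimeRaised -ge (Get-Date).AddDays(-7) } |
    Group-Object Name |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name
[/powershell]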
Make sure you read the manual thoroughly before proceeding, as you need to take additional steps prior to installation.
More info here:
Image (SystemCenterCentral)
Check the article here: http://www.systemcentercentral.com/scom-health-check-reports-v3/
Download here: https://gallery.technet.microsoft.com/SCOM-Health-Check-Reports-c32e8f93
Let's crank up that download count, because this is definitely something you need in your SCOM environment!
This one is clean and simple: all the different things you should check on your data warehouse (but probably never did), combined in one PowerShell script.
All the different aspects of what you need to know about your data warehouse are gathered and reported on an HTML page. This is one of the things you actually need to do at every customer site you visit, to get an instant view of how the data warehouse, and more importantly the SCOM environment, is set up and performing.
More info can be found here: http://blog.tyang.org/2015/06/11/opsmgr-2012-data-warehouse-health-check-script/
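Just to illustrate the approach (this is not Tao's script, merely a bare-bones sketch with server name and output path of my own choosing): collect a handful of facts about the environment and push them to an HTML page.

[powershell]
# Bare-bones illustration: gather a few environment facts and write them to an HTML report
# Assumes the OperationsManager module; server name and output path are placeholders
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'scom-ms01'

$overview = [PSCustomObject]@{
    ManagementGroup   = (Get-SCOMManagementGroup).Name
    ManagementServers = (Get-SCOMManagementServer).Count
    Agents            = (Get-SCOMAgent).Count
    OpenAlerts        = (Get-SCOMAlert -ResolutionState 0).Count
}

$overview | ConvertTo-Html -Title 'SCOM quick overview' | Out-File 'C:\Temp\SCOMQuickCheck.html'
[/powershell]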
These are just three community-provided tools which are freely available to help you get more insight into your environment or into an environment you need to troubleshoot.
Special thanks go out to Tao Yang, Oskar Landman and Pete Zerger in particular for investing their time in making these solutions possible and available, and of course also to all the other active community members who keep developing new things for SCOM and System Center in general.
If you are just starting with SCOM: this is not an exhaustive list of all the add-ons out there. If you are looking for a one-stop place to start your journey, take a look at my SCOM link overview blog, which is currently under revision: http://scug.be/dieter/2012/12/30/scom-2012-overview-link-blog/
This blog post is part of the Microsoft Operations management Suite Quick start guide which can be found here: http://scug.be/dieter/2015/05/08/microsoft-operations-management-suite-quickstart-guide/
One of the things I noticed right away when I first opened the Microsoft Operations Management Suite (OMS) was that I had different workspaces. They were all created in OpInsights because I had added three different management groups through their respective SCOM consoles.
No sweat of course. I now have one management group in my lab environment where everything is configured, so I wanted to get rid of the other workspaces.
It turns out there are two ways to delete a workspace, and in fact this was not clear to me in the beginning.
The remove option is well hidden in the menus, probably to avoid accidental deletion, which is actually a good thing, but it's a little too well hidden in my humble opinion.
To get to the remove option follow the steps below:
Log on with your account. You will see all the different workspaces which are configured and hold data:
In this case I would like to remove the DWIT workspace as this is my ancient lab environment.
Select DWIT and open the workspace.
Select DWIT in the upper right corner and select the DWIT EUS | administrator wheel:
At this point you get the settings of your workspace, and right at the bottom there's an option to close the workspace.
NOTE: Make no mistake: your workspace will be removed and your data will be erased!
Now here is where things can go either way. There are 2 different options here:
This one is actually very simple.
As shown in the screenshot above, just click Close workspace…
OMS will present you with a nice message box explaining what is going to happen and kindly asks why you want to close the workspace.
Note: It's not required to select an option, but please do so to help Microsoft develop the product further in whatever direction you want it to go.
When your workspace was created with the Azure management portal, you will not be able to close it from the OMS interface; you will need to delete the workspace in Azure itself. You will get the message “This account can only be deleted from the Azure Management Portal”.
Open your Azure management portal and navigate in the bar on the left to Operational Insights (note: this name may have changed by the time you read this article, as Microsoft is aligning all naming toward the OMS brand):
Select the account you want to delete and press the delete button at the bottom of the page
Are you really sure?
At this point the account is deleted and within a couple of minutes it should disappear from the available workspaces.
Note: Accounts that are created outside of the Azure portal will have a GUID-like name. This name is generated when you link a workspace to your Azure account.
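If you prefer scripting over clicking through the portal, the same clean-up can be done with Azure PowerShell. A hedged sketch below: it assumes the AzureRM.OperationalInsights module, so double check the cmdlet names against your module version, and the resource group and workspace names are of course placeholders.

[powershell]
# Hedged sketch: remove an Operational Insights / OMS workspace through Azure PowerShell
# Assumes the AzureRM.OperationalInsights module; resource group and workspace names are placeholders
Login-AzureRmAccount

# List the workspaces linked to the subscription first
Get-AzureRmOperationalInsightsWorkspace | Select-Object Name, ResourceGroupName, Location

# Then remove the one you no longer need
Remove-AzureRmOperationalInsightsWorkspace -ResourceGroupName 'OI-Default-East-US' -Name 'dwit'
[/powershell]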
This blog post is part of the “Microsoft Operations Management Suite: Quickstart guide” which can be found here: http://scug.be/dieter/2015/05/08/microsoft-operations-management-suite-quickstart-guide/
After we have successfully created our workspace and installed our solutions, it's time to bring in our data to start the magic and witness the insights that OMS can bring.
Here you have 3 options:
Note: If you receive errors when connecting these servers to your environment, review this troubleshooting article to set the firewall correctly: http://blogs.technet.com/b/momteam/archive/2014/05/29/advisor-error-3000-unable-to-register-to-the-advisor-service-amp-onboarding-troubleshooting-steps.aspx
If you want to attach several servers which are not monitored by SCOM, you can easily download the agent and install it. No need to fiddle with the certificates yourself any more!
Download the agent and install it on a server:
The agent package is around 25 MB and will be downloaded to your local machine. Transfer the package to a machine which is not monitored by SCOM and install it.
Note: The same restrictions apply as when installing an agent from the console. It's not possible to onboard a server which has a SCOM component installed, such as a gateway server or management server,… This makes sense: if you have those servers in place you have a SCOM environment, and it's far easier to onboard the management group entirely instead of doing this per server.
Copy the MMASetup-AMD64 package to your server and run it as administrator.
The standard manual install dialog for the Microsoft Monitoring Agent starts.
Click through the first screens.
The next screen is interesting. Here we need to decide whether we are going to install the Microsoft Monitoring Agent exclusively for OMS or also for the on-prem SCOM. In this scenario we choose to use the agent exclusively for OMS.
Now we need to fill in the GUID keys, which are shown on the OMS page right under “Connect a server”.
The workspace ID is straightforward: it's the workspace ID noted in the OMS console.
The workspace key is in fact noted as the “private key” in the OMS console.
Note: Again, this naming will probably be aligned once the SCOM console is aligned with the new OMS system.
Click next and install
Finish, wait 5 minutes and refresh your console:
Note: if you have more than one workspace, make sure you select the correct workspace to connect the server to, as the ID is unique per workspace.
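For one or two servers the wizard is fine, but if you need to onboard a whole bunch of machines, a silent install with the workspace ID and key passed on the command line is a lot handier. A sketch below; the MSI property names are the ones I know from the agent documentation, so verify them against the version you downloaded.

[powershell]
# Hedged sketch: unattended install of the Microsoft Monitoring Agent, attached straight to an OMS workspace
# Replace the placeholders with the workspace ID and key from the OMS portal
$workspaceId  = '00000000-0000-0000-0000-000000000000'
$workspaceKey = '<workspace key from the OMS portal>'

# Extract the MSI from the downloaded package (if the /c /t: switches behave differently in your version, extract it manually once)
.\MMASetup-AMD64.exe /c /t:C:\Temp\MMA

# Install silently and connect to the workspace in one go
msiexec.exe /i C:\Temp\MMA\MOMAgent.msi /qn `
    ADD_OPINSIGHTS_WORKSPACE=1 `
    OPINSIGHTS_WORKSPACE_ID=$workspaceId `
    OPINSIGHTS_WORKSPACE_KEY=$workspaceKey `
    AcceptEndUserLicenseAgreement=1
[/powershell]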
Open your SCOM environment and navigate to Administration > Operational Insights > Operational Insights Connection
Note: These names will probably change in the next UR or management pack release.
Click configure or Re-configure Operational Insights
Select whether you are using a work or Microsoft account. I’m using a Microsoft Account:
The workspaces associated with your account are loaded and selectable.
Select your workspace and click update or create
Next, choose which groups or servers you would like to send data to your OMS workspace. Click Add a Computer/Group in the Tasks bar on the right.
Select the servers / groups you want and click Add.
So now all the servers are coming into your Operational Insights Managed view.
This management group will show up in your OMS workspace as 1 connected management group:
The name, the number of servers and the last data received are shown to give you a clear view of the status of your management groups.
A lot of solutions depend on the logs received. As this was one of the first valuable additions that OpInsights brought, it is almost mandatory to configure this in OMS as well.
Go to the last step of the “wizard” and select which logs need to be gathered on the connected servers:
When configured we’ll get a nice 100% mark and we are ready to go!
Connecting is a breeze if your servers are able to reach the OMS service on port 443. You can connect individual servers or entire management groups, where you decide which servers actually send data to the OMS service.
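A quick way to verify that outbound 443 is actually open from a server before you start troubleshooting the agent itself (the endpoint pattern below is the one documented at the time of writing, so double check it for your situation):

[powershell]
# Quick outbound connectivity check from a server you want to onboard
# The <workspace id>.ods.opinsights.azure.com pattern is the data endpoint as documented today - verify it for your workspace
$workspaceId = '00000000-0000-0000-0000-000000000000'   # your workspace ID
Test-NetConnection -ComputerName "$workspaceId.ods.opinsights.azure.com" -Port 443 |
    Select-Object ComputerName, TcpTestSucceeded
[/powershell]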
For now the agents for Linux are not available yet, but they will become available very soon.
So now you are all set to start playing with the solutions you have installed while the data is pouring in!
This blog post is part of the Microsoft Operations management Suite Quick start guide which can be found here: http://scug.be/dieter/2015/05/08/microsoft-operations-management-suite-quickstart-guide/
A workspace is basically the same as a management group in SCOM. It contains all the different solutions, connected data sources and the Azure account you need to start working. You can have several workspaces with one account, but interaction between different workspaces is not possible.
In this scenario we are going to build a new workspace. Just choose the name / email and the region and click create
Next up we need to link the Azure subscription we have associated to our Microsoft or corporate account. Note that having an Azure subscription is not a prerequisite for this step (you can just click not now) but it is highly recommended.
To make sure you are the proper owner of the email address (note that it doesn't have to be the email address associated with your account by default), Microsoft sends you a confirmation mail which you need to follow.
Click confirm now and continue.
At this point your workspace will be ready and you will have all the standard tiles, but no data is pouring in just yet.
Head over to the Settings tile, where you will be guided to connect your sources to the OMS service. In the past this involved setting up proxy servers and complicated configuration, but since the integration with SCOM this has become peanuts. OMS also uses the same entry point that OpInsights was using to get connected.
The first step is in fact to add solutions. Formerly known as intelligence packs (IPs), these solutions each have their own purpose, tailoring the way you want to use OMS. Some solutions are already installed by default, so you can click “Connect a data source” to continue.
Now that you have your workspace configured it’s time to connect your datasources to get your data in!
So Microsoft Operations Management Suite (OMS) was launched during Ignite 2015 and is awaiting your data to show its power: giving you insights into your environment and actually managing it, not limited to the boundaries of your own datacenter or your Azure environment. But before we can play with the goodies, we need to configure everything correctly.
This guide will grow over time to become your one-stop resource for getting going with, configuring and using Microsoft Operations Management Suite (OMS). Bookmark this post to get regular updates on my journey through OMS and save some time while exploring its possibilities.
Below is a list of topics that can be used to already start your journey:
This blog post is part of the “Microsoft Operations Management Suite: Quickstart guide” which can be found here: http://scug.be/dieter/2015/05/08/microsoft-operations-management-suite-quickstart-guide/
It has been a while since I was blown away by news about SCOM and monitoring in general. During the recent Ignite keynote in Chicago, however, Microsoft delivered… I was surprised by the vast number of announcements regarding System Center in general and monitoring and management tools in particular. One of the coolest things for me personally was the announcement of Microsoft Operations Management Suite (OMS).
A little bit of history is in order to show you this is not a product that was born overnight. The first sign that Microsoft was working on a service to monitor and aggregate data in the cloud emerged when System Center Advisor was launched. System Center Advisor was a small tool which gave you a quick overview of the compliance level of your environment and checked how you were doing in installing and configuring System Center. With one update a day and not a lot of adoption, the tool was not widely spread. Although it wasn't heavily used, it paved the road for the OpInsights preview. The OpInsights preview leveraged the power of Azure to give you even more control in finding out how your datacenter was doing, using several free apps to make assessments based on data you sent to the Azure cloud services. The integration was built into SCOM, making it a usable tool that was easier to configure. The service was free, so I personally encouraged a lot of customers to start exploring it. The fact that you could also connect machines directly, without having SCOM, added to the level of adoption.
Well, OMS gives even more integration with the different services you need to manage your datacenter, and it integrates even deeper into your Azure environment to become your one tool for dealing with the different aspects of running your datacenter.
The following 4 groups of tools are at this point integrated into OMS:
Log Analytics was already present in OpInsights but has been fine-tuned. You can now gather all the logs of your different tools and servers, see which events are actually the most common in your environment and take corrective action accordingly. This is, in my personal opinion, a very valuable addition if you would like to find out what the most common problems on your servers are. In SCOM you actually need to configure what to monitor; Log Analytics, however, uses the power of Azure storage to collect and keep all the events for you, so you can easily query them and find patterns.
This feature is new and will integrate automation across the different components you have in your datacenter. The Automation module will integrate with Websites, Virtual Machines, Storage, SQL Server, and other popular Azure services. The automation runbooks can easily be created through a drag-and-drop interface, basically giving you the opportunity to create automation in seconds. By tying in to all the different components, you can automate repetitive tasks across your on-prem and cloud services. This decreases the margin for human error and, like all automation, if it's done correctly you will lower downtime and improve your view of your environment.
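To give you a feel for what such a runbook looks like under the hood, here is a trivial, hand-written PowerShell Workflow example (all names are made up, and a real runbook would typically pull its credentials from an Automation asset):

[powershell]
# Trivial example of a PowerShell Workflow runbook: restart a service on a target machine
# All names are made up; real runbooks usually retrieve credentials from an Automation asset
workflow Restart-PrintSpooler
{
    param ([string]$ComputerName)

    # InlineScript runs regular PowerShell; -PSComputerName sends it to the target machine
    InlineScript {
        Restart-Service -Name Spooler -ErrorAction Stop
    } -PSComputerName $ComputerName
}
[/powershell]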
Availability is not only about keeping your applications and data online, but also about making sure they stay online or can be restored after a service disruption. The availability tools give you the power to synchronize data between different locations, facilitating the different dataflows to ensure your data stays safe. In this tab the different tools are in place to make sure you have all you need to keep your environment up and running and to restore it as quickly as possible. These apps tie in to your Azure backup services such as Azure Backup, Azure Site Recovery,…
Besides getting everything online and keeping it online, a lot of companies are also concerned about keeping everything safe. In the modern world it is a challenge to find the right balance between a workable system and a secure system. The security apps give you the insights you need to identify malware and missing system updates, collect security-related events, and perform forensic, audit and breach analysis.
If you were already using the OpInsights preview, your account is automatically transferred to a free account in OMS. This free account gives you 7-day retention and a maximum of 500 MB of uploaded data. It is solely for testing purposes to get you going. The integration remains in the SCOM management group, which uploads the data in CAB files to the OMS cloud service. Your tools will still be there in your dashboard, with the possibility to connect more data sources to the OMS service. For more detailed instructions, make sure to check out my OMS series here on my blog.
Check out the following links for more info:
Recently I was asked by a customer to set up a multi-tenant SCOM environment with different environments. There are several ways of doing this, with connected management groups and so on, but I opted to keep one management group and make the separation there, as this was the best fit for the client. I'm not saying this is the best fit everywhere, but for this particular case it was.
They have a very strict DTAP (Development – Test – Acceptance – Production) lifecycle for their software release model, so this had to be reflected in the SCOM model as well, making things a little more complicated.
So to sum up the requirements:
You could create a procedure instructing the engineer to create these override management packs as part of implementing a new management pack in the environment, but that is tedious, repetitive work which will lead to errors or will simply be forgotten.
That's why I've automated the process of creating these override management packs with PowerShell, following the naming convention which is in effect in your company.
[powershell]
###
# This PowerShell script will create override management packs for all management packs which fall into a specific
# pattern documented in $orgmanagementpackname
# Usage: CreateManagementPack.ps1
# Note: You can change the parameters below and pass them with the command if desired.
# Based on the script of: Russ Slaten
# http://blogs.msdn.com/b/rslaten/archive/2013/05/09/creating-management-packs-in-scom-2012-with-powershell.aspx
# Updated the script to create an override management pack per environment in the array $Environments
###

###
# Declaration of parameters
###
$ManagementServer = "localhost"
$orgmanagementpackname = "microsoft.windows.server.2012*"
$Environments = "P", "A", "D", "T"

###
# Load the OperationsManager module (for Get-SCOMManagementPack) and the OpsMgr SDK snap-in used further down
###
Import-Module OperationsManager
Add-PSSnapin Microsoft.EnterpriseManagement.OperationsManager.Client -ErrorAction SilentlyContinue

###
# Find the management packs which fit the filter documented in $orgmanagementpackname
###
$managementpacks = Get-SCOMManagementPack | Where-Object { $_.Name -like "*$orgmanagementpackname*" } | Select-Object Name

###
# For every management pack found, create one override management pack per environment,
# following the naming convention <Company>.<Environment>.<SourceMP>.overrides
###
Foreach ($managementpackocc in $managementpacks)
{
    $name = $managementpackocc.Name

    Foreach ($env in $Environments)
    {
        # Build the ID and display name of the override management pack
        $ManagementPackID   = "*Fill in company name here (no spaces!)*." + $env + "." + $name + ".overrides"
        $ManagementPackName = "*Fill in company name here*: " + $env + " : " + $name + " overrides"

        # Create an empty management pack and import it into the management group
        $MG      = New-Object Microsoft.EnterpriseManagement.ManagementGroup($ManagementServer)
        $MPStore = New-Object Microsoft.EnterpriseManagement.Configuration.IO.ManagementPackFileStore
        $MP      = New-Object Microsoft.EnterpriseManagement.Configuration.ManagementPack($ManagementPackID, $ManagementPackName, (New-Object Version(1, 0, 0)), $MPStore)
        $MG.ImportManagementPack($MP)

        # Re-read the imported pack and set its display name and description
        $MP = $MG.GetManagementPacks($ManagementPackID)[0]
        $MP.DisplayName = $ManagementPackName
        $MP.Description = "Auto Generated Management Pack"
        $MP.AcceptChanges()
    }
}
[/powershell]
Download the script from the TechNet gallery:
This script will find all the management packs which fit the input mask in $orgmanagementpackname and create an override management pack for each of them (one per environment), following the naming convention in $ManagementPackID and $ManagementPackName.
This results in the following structure:
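To give you an idea: with the filter above and “Contoso” filled in as the company name, a quick check in the shell would return names along these lines (illustrative only, your prefix and filter will differ):

[powershell]
# Illustrative check of what the script produced - 'Contoso' is a made-up company prefix
Get-SCOMManagementPack -Name 'Contoso.*' | Sort-Object Name | Select-Object Name, DisplayName
# Contoso.A.Microsoft.Windows.Server.2012.Monitoring.overrides
# Contoso.D.Microsoft.Windows.Server.2012.Monitoring.overrides
# Contoso.P.Microsoft.Windows.Server.2012.Monitoring.overrides
# Contoso.T.Microsoft.Windows.Server.2012.Monitoring.overrides
[/powershell]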
Note:
During a recent project a client had a small request: create a monitor that runs a command when a device is not accessible anymore. Easy, right! But (yep, there's always a but) they also wanted to run a command when the monitor returned to a healthy state, to restart a service when the device came back online… Hmmm, and all in one monitor.
So the conditions were as follows:
Monitor:
(note: always draw up this small matrix of the monitor design so you know exactly what the customer wants)
I don't have the device to simulate with, but I came up with a small example in my lab to show you how to get this working with just one monitor. The situation in my lab is very simple: I want to turn on my desk lighting when my PC is on (and I'm working) and turn it off when my PC is not online.
My conditions:
Monitor:
So first things first: we need to test the connection to see whether my PC is running. To check this I'm using this small script:
[powershell]
# Accept the IP address or host name to test as a parameter
param ([string]$target)

# Create the SCOM script API object and an empty property bag
$API = New-Object -ComObject "MOM.ScriptAPI"
$PropertyBag = $API.CreatePropertyBag()

# Test-Connection -Quiet simply returns $true or $false depending on whether the target responds
$value = Test-Connection $target -Quiet

# Store the result in the property bag and hand it back to SCOM
# (Return() also prints the DataItem XML when you run the script manually, which is handy for testing)
$PropertyBag.AddValue("status", $value)
$PropertyBag
$API.Return($PropertyBag)
[/powershell]
So I'm testing the connection and sending the response to SCOM. The PowerShell “Test-Connection $target -Quiet” command simply returns true or false depending on whether the target is accessible or not.
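Before wiring the script into a monitor, it pays off to run it once by hand on the watcher node. When MOM.ScriptAPI's Return() runs outside of SCOM it just prints the DataItem XML on screen, so you can immediately see whether the property bag contains what you expect (the file name below is simply whatever you saved the script as):

[powershell]
# Manual test on the watcher node - the script file name and IP address are just examples
.\PingStatus.ps1 -target 192.168.1.50
# The output should contain a <Property Name="status" ...> element that reads true when the target answers
[/powershell]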
The creation of this monitor consists of 2 parts:
To properly target this monitor we need to create a class in SCOM which identifies the servers that need to test the connection. In this case I've added a registry key to all servers that need to ping the desktop, so I'm using a registry target to create my class:
I fill in a server that already has the key, which makes it much easier to browse the registry instead of typing the key in by hand with an increased margin for error.
Select the Registry key you want to look for
In my case I’ve added a key under HKEY_LOCAL_MACHINE\Software\pingtestwatchernode
Select the key and press add and ok
Identify your registry target:
Identify your discovery for the target
In my case I just check whether the key is there. No check on the content.
The discovery will run once a day.
Review everything and press finish
At this point our class is ready to be targeted with our script monitor.
Next up is to create the monitor:
Browse to the PowerShell script and fill in the parameters. In this case I have one parameter, “target”, which will hold the IP of the desktop.
Define the conditions:
The healthy condition is when the status property is true (type Boolean).
The critical condition is when the status property is false.
Note: I'm using the “Boolean” type.
Configure the script and select the target you have created earlier on and the availability parent monitor
Identify your script based monitor
Specify a periodic: run every 2 minutes
No alert generation necessary.
Review all the parameters and create the script based monitor.
Load the management pack in your environment and locate the monitor:
Check the properties => Recovery Tasks and create 2 recovery tasks for the health state “critical”.
Note that the screenshot below already shows the correct healthy state after configuration of the MP.
Export the management pack, open it in an editor and locate the “Recoveries” section to find the recovery tasks we just created:
Scroll to the right, locate the “ExecuteOnState” parameter and change it from “Error” to “Success” for the recovery you want to run when the monitor goes back to healthy.
Save the management pack and reload it in your environment.
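If you would rather not do the search-and-replace by hand (or need to redo it after every console change), the same edit can be scripted against the exported XML. A hedged sketch with made-up file and recovery names:

[powershell]
# Hedged sketch: flip a recovery task from running on Error to running on Success in an exported MP
# File path and recovery ID are made up - adjust them to your own management pack
$mpPath = 'C:\Temp\Custom.PingTest.Monitoring.xml'
[xml]$mp = Get-Content -Path $mpPath

$recovery = $mp.ManagementPack.Monitoring.Recoveries.Recovery |
    Where-Object { $_.ID -like '*TurnLightOn*' }

$recovery.ExecuteOnState = 'Success'
$mp.Save($mpPath)

# Re-import the changed pack afterwards
Import-SCOMManagementPack -Fullname $mpPath
[/powershell]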
So all we need to do is test it…
My PC is on: IT-Rambo has his cool backlight:
My PC is off and the light is automatically turned off…
Final note: if you use this method, make sure you do NOT save the recovery tasks from the console anymore; otherwise the settings we just changed in the management pack will be overwritten again, as SCOM can't natively configure a recovery task for a healthy state.
You can basically use this for anything where you want to run recovery tasks for both conditions on the same monitor, or even three if you have a three-state monitor.