During my session on how to prepare yourself I showcased some tips and tricks that will make your life much easier when you upgrade your existing SCOM 2007 R2 installation to SCOM 2012.
One of the tricks I mentioned was to run the SCOM 2012 web console in its own application pool. During the upgrade of your web console the application pool will be removed and you can only choose the default application pool to install the website, which in my opinion is not a best practice.
So before installing the web console on your web server, perform the following tasks in IIS:
Open the IIS manager on your machine
Right click “Sites” and choose “Add Web Site…”
Fill in the details:
Note: At this point you need to choose a different port than the default 51908. You can change this back after the upgrade.
The site is up and running.
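If you prefer the command line over the GUI, the same site can be created with appcmd; a sketch where the site name, physical path and port 51909 are assumptions, so pick whatever suits your environment:

%windir%\system32\inetsrv\appcmd.exe add site /name:"OpsMgr2012WebConsole" /bindings:http/*:51909: /physicalPath:"C:\inetpub\OpsMgr2012WebConsole"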
During the upgrade wizard you'll arrive at the following dialog to choose your application pool. Here's why we can't reuse the “old” site:
The Operations Manager web console site will be deleted during the upgrade, so the installer falls back to the default application pool.
Select the newly created website and continue.
After the upgrade you'll notice that the old site has been removed. At this point you can edit the binding of the site back to the default port to keep things transparent for your users.
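If you want to script that last step too, here's a sketch with the same assumed site name:

%windir%\system32\inetsrv\appcmd.exe set site "OpsMgr2012WebConsole" /bindings:http/*:51908: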
More tips to follow so stay tuned!
On the 20th of September I'll be hosting a Live Meeting where we'll go over the different steps to prepare yourself and your environment for the move from SCOM 2007 to SCOM 2012. The upgrade path is said to be easier than the one from MOM 2005 to SCOM 2007 (thank goodness), but there are still some things to keep in mind and consider before moving to the new version when it's released.
So join me on the 20th of September to prepare yourself for the next version of the SCOM software family.
The abstract of the topics covered (more to come):
Link to join in: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032492077&Culture=en-US
I’ll be prepared for SCOM 2012… will you?
In SCOM 2007 it's possible to fill in custom fields with rules, just like you did in MOM 2005, as explained here: http://scug.be/blogs/dieter/archive/2011/05/13/scom-2007-custom-alert-fields.aspx
However, this is not possible with monitors because there's a fundamental change in how the alerts are created. In rules, the GenerateAlert module is used to create the alerts, and this module makes it possible to pass extra data such as the custom fields. In monitors, alert creation is slightly different: the alert is generated from parameters in the monitor itself, so it's not possible to pass extra data.
For a client I'm migrating MOM 2005 to SCOM 2007 R2 and of course I would like to take advantage of the fact that I can create monitors instead of rules. My client has a mainframe-based problem management system (it could be any other system without a connector) which uses a mail scrubber to read mails and scan for specific keywords to create tickets.
The specific keywords were passed in MOM through the custom fields. This is also possible in SCOM, but only with rules, not monitors. A solution could be to create a monitor plus a separate alert-generating rule for that monitor. This solves the issue but isn't manageable: if things change you have to update both the monitor and the rule to make sure they reflect the new situation.
Therefore I came up with another solution. Because there are only 15 possible keyword combinations at my client, I chose to use the subscription / notification channel to insert the keywords into the database before sending the alert to the problem management system. I could have just passed the parameters into the mail and sent it, but I prefer to update the database as well so the changes are also reflected in the alert.
I've based my script on one I used earlier, featured here: http://scug.be/blogs/dieter/archive/2011/05/11/scom-dump-alerts-to-text-file-and-mail.aspx
The main difference with the script above is that instead of reading the custom fields out of the database, I pass them with the notification channel. This keeps the keywords centrally manageable when they change.
As mentioned above I mostly reused the script of my previous blog post, but for the record I'll explain it here once more:
First of all: you can download the script here: http://scug.be/members/DieterWijckmans/files/create_customfields_monitors.zip.aspx
The main difference with the previous script is that we are not reading the data out of the database (the $_.customfield fields) but inserting it into the database through parameters passed to the script.
Parameters: the “param” statement needs to be on the first line of the script. In my case I'm reading three parameters: the alertID (which is mandatory for the script), the Problemtype and the Objecttype.
The last two fields will be inserted into the $_.customfield database fields and are needed by the third-party problem management solution to make the proper escalation.
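For illustration, a minimal sketch of such a param block; apart from the alertID, the parameter names match my environment and may differ in yours:

param(
    $alertID,      # mandatory: the ID of the alert, passed by the notification channel
    $Problemtype,  # keyword for the problem management system
    $Objecttype    # keyword for the problem management system
)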
RMS: read the RMS server name of your environment. If you are using a clustered RMS it's better to fill in the name of the cluster and comment out the automatic retrieval of the name to avoid problems.
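A sketch of both options; comment one of the two lines out depending on your setup (the cluster name is obviously a placeholder):

$rootMS = [System.Net.Dns]::GetHostByName((hostname)).HostName   # automatic retrieval
# $rootMS = "SCOMCLUSTER.yourdomain.local"                       # hard-coded clustered RMS name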
Resolution state: the resolution state needs to be defined here and also in the SCOM environment (for more details on how to configure this in SCOM, check here: http://scug.be/blogs/dieter/archive/2011/05/11/scom-create-custom-alert-resolution-states.aspx).
Loading the SCOM cmdlets: the script loads the Operations Manager snap-in and connects to the RMS.
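A minimal sketch of this standard SCOM 2007 loading pattern:

Add-PSSnapin "Microsoft.EnterpriseManagement.OperationsManager.Client"
Set-Location "OperationsManagerMonitoring::"
New-ManagementGroupConnection -ConnectionString:$rootMS
Set-Location $rootMS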
Culture info: to make sure the date format is correct you need to fill in the localization. In my case it's nl-BE.
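In the script this boils down to something like the following sketch (the example format string is an assumption, pick whatever your problem management system expects):

$cultureInfo = New-Object System.Globalization.CultureInfo("nl-BE")
# Used further down when converting dates to text, e.g.:
# (Get-Date).ToString("dd/MM/yyyy HH:mm:ss", $cultureInfo)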
Read in alert + fill in custom fields: the alertID passed as a parameter is read here and the alert data is retrieved from the database. The custom fields required by the problem management system are filled in and updated in the database. Technically there's no obligation to update the fields in the database, but to make sure the custom fields are filled in when you open the alert in the console, I update the alert anyway.
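A sketch of that section; which custom field holds which keyword is specific to my environment, so treat the field numbers as placeholders:

$alert = Get-Alert -Id $alertID
$alert.CustomField1 = $Problemtype
$alert.CustomField2 = $Objecttype
$alert.ResolutionState = $resolutionState
$alert.Update("Custom fields filled in by problem management script")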
Note that I needed to modify the date format to reflect the localization here. All the data is dumped to a file which is kept for future reference. The file path in yellow can be changed to reflect your location.
Mailing section:
This mails the file to the problem management system or, in case an error occurred, alerts the SCOM admin. Make sure you fill in the OK recipient, the NOK recipient and the SMTP server to send out the mail.
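A rough sketch of what this section looks like; the recipients, the SMTP server and the $filePath variable (the dump file from the previous section) are placeholders:

$smtp = New-Object Net.Mail.SmtpClient("smtpserver.yourdomain.local")
if ($error.Count -eq 0) {
    # All went fine: mail the dump file to the problem management system
    $msg = New-Object Net.Mail.MailMessage("scom@yourdomain.local", "problemmgmt@yourdomain.local")
    $msg.Subject = "New problem ticket for alert $alertID"
    $msg.Attachments.Add((New-Object Net.Mail.Attachment($filePath)))
} else {
    # Something went wrong: warn the SCOM admin instead
    $msg = New-Object Net.Mail.MailMessage("scom@yourdomain.local", "scomadmin@yourdomain.local")
    $msg.Subject = "Problem creation script failed for alert $alertID"
}
$smtp.Send($msg)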
Last but not least, we write an event to the event log stating whether the operation was successful or not. This gives us the opportunity to monitor the problem creation script from within SCOM.
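A sketch of that last step; the event source and IDs are assumptions (register the source once up front), and the $error check assumes $error was cleared at the start of the script:

if ($error.Count -eq 0) {
    # Success: information event a SCOM rule can pick up
    eventcreate /T INFORMATION /ID 100 /L APPLICATION /SO ProblemScript /D "Problem ticket created for alert $alertID" | Out-Null
} else {
    # Failure: error event to alert on
    eventcreate /T ERROR /ID 101 /L APPLICATION /SO ProblemScript /D "Problem ticket creation failed for alert $alertID" | Out-Null
}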
This solution works for me because I have a limited number of possible combinations.
A couple of things you need to configure before this script can be used in production:
The script must run on the RMS (if it's a clustered RMS, make sure the script is on both cluster nodes in the same location).
Note: if you want to use more parameters or different names you have to change the following things:
There are 10 custom fields available in the database, so you can pass up to 10 parameters into the script and thus into the custom fields.
If you have remarks or questions regarding the script, do not hesitate to drop me a line or contact me on Twitter: http://twitter.com/#!/dieterwijckmans
SCOM is a great product, but from time to time you need a custom-built tool or script to do just that one thing, or change just that bit, that isn't possible in the SCOM console.
I'm personally a huge fan of the PowerShell cmdlets supplied with SCOM. For most tasks (whether automating or extending SCOM) they do the trick quickly and easily.
From time to time a tool passes by on the web that fills a gap and makes our lives as SCOM admins easier.
Yesterday another one of these fine tools emerged: http://systemcentercentral.com/forums/tabid/60/indexid/88501/default.aspx?tag=
Note: You need to register to download the tool.
This is the first version of a nice tool to count the instances per management group, which can be helpful when troubleshooting your environment. The PowerShell script posted in the community a while back sometimes took 3 hours to complete the task, while this .NET program takes minutes…
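For comparison, the slow PowerShell approach looks roughly like this sketch (run from the Operations Manager Command Shell); enumerating every class and counting its discovered instances is exactly the kind of query that takes hours in a large environment:

Get-MonitoringClass |
    Select-Object Name, @{Name="Instances"; Expression={ (Get-MonitoringObject -MonitoringClass $_ | Measure-Object).Count }} |
    Sort-Object Instances -Descending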
You need .NET Framework 4 to run the tool.
Keep an eye on the topic because I'm sure it will progress over the next days, as the authors mentioned in the topic itself.
This blog post is part of a series on how to back up your SCOM environment.
You can find the other parts here:
The next step in our backup process is to take a backup of our unsealed management packs, to make sure we don't lose all the customization we've made to the environment.
First, a little explanation about the difference between sealed and unsealed management packs. At my clients I sometimes see misunderstandings about these two kinds of management packs.
Difference between sealed management packs and unsealed management packs.
The difference is rather simple. All the management packs you download from vendors such as Microsoft, Dell, HP,… are sealed ones. They have been developed by the vendors and sealed to prevent any further customization. These management packs often include a variety of rules, monitors, views and even reports which are installed when you import the management pack.
All the management packs you create yourself are unsealed by default. These store all your customizations, such as overrides on the sealed management packs, custom reports, custom rules, custom monitors…
Notice the word “custom”… In my book, “custom” means a lot of time and effort was spent creating them… You don't want to lose them in a disaster!
So how do we back these up? There are basically two ways: manually or automated.
If you have one unsealed management pack that you want to back up, or you want to quickly back one up while working in the console, you can use the following method:
Open the console and navigate to Administration > Management Packs > select the management pack you wish to back up, right click, and choose Export Management Pack…
Select a location for your backup:
Click OK and your management pack is exported successfully.
If you check your location you’ll see the management pack in XML format.
While the above method works like a charm for a quick backup before changing something in a management pack, it's a hassle when you want to back up several management packs. Not to forget the human factor… you have to remember to take the backups in the first place…
Therefore the preferred way is to automate the backup with a PowerShell script.
Microsoft actually provides the proper tools to do so in the PowerShell cmdlet set for SCOM.
The command to use:
# Run from the Operations Manager Command Shell (or load the SCOM snap-in
# and connect to the RMS first)
$mps = Get-ManagementPack | Where-Object {$_.Sealed -eq $false}
foreach ($mp in $mps)
{
    # Each unsealed management pack is exported as an XML file
    Export-ManagementPack -ManagementPack $mp -Path "C:\Backups"
}
There are basically two approaches to automate this: a SCOM scheduled rule, or a scheduled task on the RMS itself.
You can make your choice based on this nice discussion comparing the two approaches: http://www.systemcentercentral.com/BlogDetails/tabid/143/IndexId/56798/Default.aspx
I chose the scheduled task method to avoid the extra (albeit minimal) load on the server, and created a management pack to monitor the process.
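Creating the scheduled task itself can be done with schtasks; a sketch with an assumed script path and schedule (note that the script then has to load the SCOM snap-in itself, since it won't run inside the Command Shell):

schtasks /Create /TN "SCOM - Export unsealed MPs" /TR "powershell.exe -File C:\Scripts\Export-UnsealedMPs.ps1" /SC DAILY /ST 02:00 /RU SYSTEM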
This blog post is part of a series on how to back up your SCOM environment.
You can find the other parts here:
There are two versions of IIS supported in SCOM, and they each need a different backup approach.
IIS 6 is normally used if you are running SCOM 2007 R2 on a Windows 2003 platform. It supports the components of the web console and SQL Server Reporting Services. If you are using SQL Server Reporting Services from SQL 2008, IIS is not used anymore and it is sufficient to back up the databases.
IIS 7 is normally used if you are running SCOM 2007 R2 on a Windows 2008 platform (it can also run on a Windows 2003 server but is not installed by default). The approach to backing up IIS 7 is somewhat different, as it stores its data differently. The files you need to back up are the web.config files and the applicationhost.config file. For more info on how to back up IIS 7 you can read this nice reference: http://blogs.iis.net/bills/archive/2008/03/24/how-to-backup-restore-iis7-configuration.aspx
So let's start with the backup, shall we?
Connect to the RMS and navigate to Start > Administrative Tools > Internet Information Services (IIS) Manager.
Right click the server name and navigate to “All Tasks” > “Backup/restore Configuration…”
The configuration backup dialog box will come up.
Fill in a backup name and if you want to make a secure backup you can tick the box “Encrypt Backup using password” and supply a password for the backup.
Click OK and your backup is made…
The actual backup file is stored in “%systemroot%\system32\inetsrv\MetaBack”
I suggest you take a copy of this file elsewhere in case your RMS is unrecoverable. Otherwise the backup file, which resides on the server itself, will be gone as well, and what's the point in backing up then?
Open an elevated prompt on your RMS
Navigate to %windir%\system32\inetsrv
Run the command: appcmd add backup “name of your backup set”. If you don't specify a name, one will be generated with the date and time.
If you are using IIS 7, your life just became a little bit easier. On Vista SP1 or later / Windows Server 2008, backups are effectively automatic: IIS makes a history snapshot of ApplicationHost.config each time it detects a change, so you can easily restore a prior version. By default it keeps 10 prior versions and checks for changes every 2 minutes.
The files are stored in “%systemdrive%\inetpub\history”
Pretty cool feature if you ask me and a big improvement from the previous version.
To enumerate a list of backups and configuration history files, use the following command:
“%windir%\system32\inetsrv\appcmd.exe list backup”
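Restoring works the same way; a sketch, with the backup or snapshot name taken from the list output (the name here is illustrative):

%windir%\system32\inetsrv\appcmd.exe restore backup "MyIISBackup"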
This blog post is part of a series on how to back up your SCOM environment.
You can find the other parts here:
One of the key factors in a successful restore of your environment is the SCOM encryption key.
This encryption key is used to store the data in the Operations Manager database. It ensures that the data in the database remains confidential and encrypted. The RMS uses this key to read and write data to the Operations Manager database.
Pretty severe, actually: if you don't have the key you can't establish a connection from your fresh RMS to your existing Operations Manager database, and therefore you lose all your settings and customizations and have to start all over again.
Please note that it's a best practice to take this backup once after installation of the environment and after ANY change to the Run As accounts in the environment.
So how do you back this key up, in case Murphy pays you a visit?
There are actually two ways: GUI or command line.
Log on to your RMS with an account with admin privileges
Open an elevated command prompt and navigate to your Operations Manager install folder. In this case I kept it at the default: C:\Program Files\System Center Operations Manager 2007\
Note: SecureStorageBackup.exe is only installed if you have installed a console on your RMS. If not, you need to copy SecureStorageBackup.exe from the SupportTools folder on the installation media.
The Encryption Key Backup or Restore Wizard pops up:
Click Continue and select “Backup the Encryption Key”.
A dialog box will appear to save your .bin file. Best practice is not to save the file on the RMS itself. This makes perfect sense: you'll need the file exactly when there's an issue with your RMS, so there's a big chance you won't be able to reach it.
I always save it on my file server and keep an extra copy somewhere else, just to be safe. As soon as you have exported the key you can copy the .bin file and store it in two different locations.
So the location is set let’s continue.
Fill in a password to secure the backup .bin file. Make sure you'll still remember this password whenever you need it to restore the key.
It will take no more than a few seconds to back up the key, and if all goes well a nice completion message appears.
Log on to your RMS with an account with admin privileges
Open an elevated command prompt and navigate to your Operations Manager install folder. In this case I kept it at the default: C:\Program Files\System Center Operations Manager 2007\
Run securestoragebackup.exe /? to get the syntax of the command.
The command used: SecureStorageBackup.exe Backup <filename>
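For reference, a sketch of the backup call and its matching restore; the install path and the UNC target are assumptions (you'll still be prompted for the password interactively):

"C:\Program Files\System Center Operations Manager 2007\SecureStorageBackup.exe" Backup \\fileserver\scom\SCOMKeyBackup.bin
"C:\Program Files\System Center Operations Manager 2007\SecureStorageBackup.exe" Restore \\fileserver\scom\SCOMKeyBackup.bin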
You need to supply the password twice
and the second time
And the key was successfully backed up.
The downside is that you cannot automate this process without further scripting, because you need to type in a password. It would be nice if the exe accepted the password as a parameter, but maybe in another release…
This blog post is part of a series on how to back up your SCOM environment.
You can find the other parts here:
In order to keep the possibility of restoring your SCOM environment in case of a disaster, you need to make sure that you have a good backup of your databases.
Your databases keep track of all the info in your environment, so it's crucial that you don't lose these valuable assets.
A good backup strategy for your SQL databases is crucial, as you need to make sure you always have a recent copy at hand.
If you have a backup admin in your environment and are using a backup product like Data Protection Manager, it's best to meet with the admin and check how the SQL backup schedule is configured. If he's confident that he covers your backups, there's no need to take them twice…
If not you’ll need to perform the backup yourself.
First of all, a small word about the different options you have to back up a SQL database (this applies to all SQL databases and is not SCOM specific): full backups, incremental (differential) backups and transaction log backups.
A good mix of the three methods above makes a good strategy. I always take a full backup of the operations database once a week (data warehouse once a month), an incremental backup once a day (data warehouse once a week) and transaction log backups every 2 hours.
Again, you can skip the transaction log backups, but then you risk losing up to 23h59m of data.
So let’s get the backups up and running:
We're going to create the full backup schedule for the Operations database in this example:
Connect to the database engine.
Open the tree and go to the “OperationsManager” database > right click > Tasks > Back Up…
Choose Options in the left pane and set:
Verify backup when finished
When all the settings are correctly configured, click the Script button at the top of the page and choose “Script Action to Job”.
This will generate a SQL job which you can schedule in the SQL Agent so it fires the backup when needed.
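To give you an idea of what such a job fires, here's a sketch of the underlying command; the instance name and backup path are assumptions:

sqlcmd -S "SQLSERVER01" -Q "BACKUP DATABASE [OperationsManager] TO DISK = N'D:\Backups\OperationsManager_weekly.bak' WITH INIT, CHECKSUM, NAME = N'OperationsManager weekly full'"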
Name the job: In this case “Back up Database – OperationsManager_weekly”
Choose Schedules in the left pane and select New at the bottom:
Name the schedule and define the frequency + schedule. This will be scheduled in the SQL agent jobs.
When your schedule is made, click OK and the job is created.
You can check this job in the SQL Server Agent under the Jobs node…
That takes care of the weekly full backup of the Operations Manager database.
You need to complete the same steps to create the backup schedules for your other databases.
The funny thing is that most admins only think of backups just after they've had a major crash with no backups available.
Most admins think backups are a hassle; they take some, but lose interest in the long term. When disaster strikes they miss a vital piece to restore their environment, have an outdated backup, or even worse… no backup at all.
In this series of blog posts I'll go over the different aspects of backing up your SCOM 2007 environment, to make sure that when Murphy comes choosing you, you're prepared…
One of my favorite cartoons to illustrate backups…
So let’s get started and get you prepared when disaster strikes.
Which components do you need for a successful restore of your environment:
This blog post is part of a series on how to back up your SCOM environment.
This series of blog posts will be divided into the categories shown above, and each part will link back to this post.
Recently I got a mail from a user stating he wasn't receiving his reports by mail anymore. They were created way back, and normally these reports are in my category “set it and forget it”…
When I checked the scheduled reports pane, I instantly noticed that all the reports were showing an error, as shown below:
The message “The Subscription contains parameter values that are not valid” shows in the status field.
During my search on the web, the most common solution was to recreate the report, which I did for one. But because there are around 20 reports, it would be a lot of work to recreate them all, with the risk that they break again without knowing when or why.
So the next step in my troubleshooting was to see whether I could fill in the missing parameters in the report, which resides in a custom management pack holding all these special reports.
When I opened the report I noticed the following: Data Aggregation and Histogram are greyed out and it's impossible to change them.
When I tried to run the report the following error message came up:
So there is an issue with the ‘Data Aggregation’ parameter. There's no way to troubleshoot any further in the SCOM environment, so we'll have to dig deeper and turn our attention to the underlying SQL Server Reporting Services (SRS) installation.
Connect to the SRS server and open SQL Server Management Studio.
Note: if you're not sure where your SRS installation resides, navigate to SCOM console > Administration > Reporting. The Reporting Server URL is filled in there, so you can retrieve the server name / alias from it.
Make sure you select “Reporting Services” as the server type and enter the server name you retrieved from your console.
Navigate to Home > “Your management pack” > Reports > Subscriptions.
In this example we’re troubleshooting the “PROD3_IOReport”.
Right click and choose view report.
The web browser opens and generates the report. However, in this case the following error shows up:
Didn't we have an issue with “Data Aggregation”? The error above shows we have an issue with our “ManagementGroupId”.
Let’s take a look at the report properties to find out.
Right click the report and choose Properties.
The familiar SQL properties page pops up.
Behind “ManagementGroupID” (the sixth item in the above screenshot) it's indicated that there are multiple values… We only have one management group, so why would there be multiple?
If you open the value you get a drop-down box with the two IDs listed:
So which one is the correct one?
I opened a newly created report in the same management pack (the one I recreated to solve the issue with the first report), and there only one ID is listed:
This report is working with all the parameters so this ID is the correct ID for our management group.
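If you'd rather not create a report just to find the right GUID, you can also look it up directly through the SDK; a sketch to run from the Operations Manager Command Shell so the SDK assemblies are loaded (replace the server name with your RMS):

$mg = New-Object Microsoft.EnterpriseManagement.ManagementGroup("RMSSERVER01")
$mg.Name   # your management group name
$mg.Id     # the GUID to keep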
The next step is deleting the “wrong” ID from my report parameters and clicking OK:
Now we go back to our SCOM console and check the report once more.
Open the report; now it's possible to set the Data Aggregation and Histogram again.
After clicking “run” the report is generated successfully.
So all we need to do is change the parameters in our scheduled report.
Navigate back to the scheduled reports list, right click the report and choose edit.
Check the parameters and fill in the correct Data Aggregation / Histogram settings (and check the other settings as well while you’re at it).
Click finish and check back at the scheduled report view.
The report has gone from error to “Ready” and will process at the next scheduled time…
In this particular case the cause apparently was that some agents were temporarily multi-homed to a test environment, and this test environment was deleted afterwards.
Although this was a mistake on our side, I posted this blog post to illustrate that the error message in SCOM was not the real problem, which was hidden in the SRS installation. This threw me off while troubleshooting because I was focusing on the wrong error, and it cost me a lot of valuable troubleshooting time.
I've posted my experience to save you some time when troubleshooting this issue.