While installing a gateway server in a high-security environment I followed my procedure carefully but bumped into an issue I had not encountered before. We all know it can be tricky to install a gateway server, with the certificate chain and such. Kudos to everyone who gets it right the first time EVERY time…
During my gateway installation process, on the targeted inside management server I used the GatewayApprovalTool.exe to allow the gateway server access. For your reference, the correct way to do this is (source: http://technet.microsoft.com/en-us/library/hh456445.aspx):

Microsoft.EnterpriseManagement.gatewayApprovalTool.exe /ManagementServerName=<managementserverFQDN> /GatewayName=<GatewayFQDN> /Action=Create

On success the tool should report: The approval of server <GatewayFQDN> completed successfully.

There is also an /Action=Delete flag to remove an approval; in my case I used the /Action=Create flag. But after I ran the command, the prompt just stayed there. Doing nothing. No error… Just waiting…
Well, I don't like waiting, so I tried it a couple of times, checked the event log, rebooted the machine,… Nothing. On to Google then! With no error message the search was hard, but I found the solution on the blog of Marnix Wolf: http://thoughtsonopsmgr.blogspot.be/2010/02/scom-gateway-approval-tool-stalls.html
Apparently the GatewayApprovalTool just writes some info into the SQL database, and it uses the account of the user logged on to the machine to run that query and insert. When this fails it simply times out and sits there. No error code.

Some suggest logging in (or using run as) with a domain admin account, but I prefer to use the SCOM SDK account because it is guaranteed to have rights on the SQL database no matter what.
After opening the command prompt as the SDK user => success!
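If you don't want to log off and back on, a quick sketch of how to get a prompt under the SDK account from PowerShell (the account name below is just an example, swap in your own SCOM SDK service account):

```powershell
# Hypothetical account name - replace with your own SCOM SDK service account.
# Prompts for the password and opens a new command prompt running as that
# account; run the approval tool from there.
Start-Process cmd.exe -Credential (Get-Credential "CONTOSO\svc-scomsdk")
```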
Another little bump in the road flattened on the way to a perfect SCOM world …
System Center Operations Manager (SCOM) has proven to be a great product to monitor your environment from end to end. It has grown version after version and has, in my opinion, outgrown its status of monitoring only Microsoft Windows quite a while ago. A lot of management packs are out there, some good, some less good (let's keep it diplomatic). As a SCOM admin you mostly come across the same management packs from the same vendors. From time to time it's nice to see a new contender step into the arena with a management pack for a technology which can already be monitored by other management packs.
Recently Opslogix, a Dutch company founded in 2009, released one of those management packs, which I took for a test drive: the VMware® Intelligent Management Pack.

I requested a trial to find out how well it stacks up against the competition. Opslogix states that this management pack is a far better choice based on cost versus monitoring, so I took it for a spin through my environment. I'm a SCOM expert but not a VMware expert, so for me the ease of installing and comprehending the management pack, and the overview it gives of the environment, were very valid points as well.
When you install a management pack from the online catalog it is normally a straightforward process: the management pack is downloaded in the background and automatically added to your environment. That's it. But with non-Microsoft management packs this can be another story. Luckily the Opslogix management pack comes with a very detailed install manual and the installation was a breeze. The clear manual stated exactly which steps needed to be taken to install the management pack.
The management pack consists of 4 different management pack files (base, licensing UI, VMware and VMware reports). The import is straightforward, like you would import any other management pack. Once imported there are still some extra steps to take before you can connect and monitor your environment. One thing to keep in mind is that you need to register your license in the SCOM console as well, otherwise the discoveries will not kick off. This is well documented in the management pack guide, but I can fully understand that in your enthusiasm to get things going you might forget to do it.
Next up is defining which management server is used to monitor your VMware environment. As everyone using SCOM 2012 probably already knows, resource pools let you divide monitoring between different management servers. In this case this is very handy and well thought out by Opslogix, because it could well be that you have only 1 management server facing the internet or your VMware environment. In that case you can populate your resource pool with that server and you are good to go:
The next thing you need to do is add your VMware environment to your SCOM environment. Again, no hassle with certificates or whatsoever. The Opslogix management pack comes with a straightforward GUI to connect your environment. Just check out the Monitoring pane and open up the VMware IMP Configuration Dashboard. Fill in your data, click connect, and when the connection is successful you now have a connection to your environment. BUT no monitoring yet.

The last thing to configure is adding the license you received from Opslogix to your environment. The license GUI lets you connect the environment we just added to your license. This is needed to calculate the cores which were included in your license.
So that’s it… All ready to go.
One of the things I always do when a management pack is installed is browse through the views to see what can be visually shown to me, but more importantly to the potential users of the console / environment.

Nothing surprising here actually. Nice views to check out the alerts, the status of my devices, and some performance views. Nothing more, nothing less. Just the things I like to see to get a quick overview. Maybe a full dashboard for a quick overview would have been nice, but hey, we can build that ourselves, right?
Next I always browse through the rules and monitors. I'm not going over them one by one, but there are enough rules and monitors in the management pack to get you notified when things are wrong. Again, I'm not a VMware expert, but after connecting to the demo environment I instantly got alerts (probably generated on purpose) that were clear to understand even for me. The discovery process was definitely kicking in because all the different servers were discovered.

If you look through the included monitors and rules you'll notice that all the different aspects of your VMware environment are covered. All the rules and monitors are there to generate that desired view of your environment and warn you when things are about to go wrong. Nothing more, nothing less: just the way I like it. No drilling through a lot of rules and monitors to disable or enable them. Less config time is more monitoring time…
One thing I really like about a management pack, and especially a third-party management pack, is when it comes with reports. As a SCOM admin I have a pretty good knowledge of what's going on in my management group and with the servers, but to a user or manager it is very hard to explain that there are issues or that all is going well. Especially managers and application owners love reports they get in their mailbox on a regular basis to keep track of things. Although it is not that hard to create your own reports, it's always nice to have some out of the box to get you going and save you some time in the process.

I looked around in the reports and noticed that there were indeed many reports targeted at an overall view of your environment. Most of the time that is exactly what you are looking for, because normally the servers running on your virtual environment are also monitored by your rules for monitoring the OS. This is, in a nutshell, what this management pack is intended to do: monitor your infrastructure and give you a helicopter view of everything living in your virtual VMware environment.
The Opslogix management pack proved to be a sound experience in terms of install, config and connection. The installation was easy and not a lot of extra steps had to be taken to get it up and running. After the install it just started the discoveries and just started to… work. I have witnessed other management packs which had to be overridden and fine-tuned before they even produced any events or discovered any new servers.
An added bonus that a lot of people overlook is the fact that a management pack discovers a lot of classes. This Opslogix management pack is no exception. You can use the discovered classes in your distributed applications and dashboards to extend your view of your environment even further. Without a management pack this process can be tedious, because you need to discover all the different types you want to add to your dashboard yourself.
If you are looking for a VMware management pack that’s easy to install, has more than enough monitors and rules on board to give you an essential clear view on your environment AND has the necessary reports: make sure to drop Opslogix a line for a demo license.
It's all up to you what exactly you want to monitor in your VMware environment. If you just want to monitor the health of your environment and like the ability to proactively react to issues, this could be a great consideration!
For more info check out the Opslogix website: http://www.opslogix.com/products/vmware-intelligent-management-pack
When I started to review a SCOM 2012 R2 environment recently I came across an interesting issue I hadn't witnessed before… Time to blog the solution!

The System Center Data Access Service started successfully but stopped within a minute. After investigating I found at least 2 events logged at the time the service crashed that could give us a clue about what was going on.
Event 26380: The System Center Data Access Service failed due to an unhandled exception… Cannot be added to the container…
Event 33333: Data access layer rejected: An entity of type service cannot be owned by a role, a group, or by principals mapped to certificates or asymmetric keys.
Strange… This worked the day before. What was going on?

After a search on the web I found an article by Travis Wright, who had a similar problem with SCSM (which shares the same code base, so it was a nice entry point to start my troubleshooting).

By now I could pinpoint that there was an issue on the SQL side.

After heading over to the SQL admin with the article we continued troubleshooting together. It turned out that the issue was not exactly what Travis had experienced. In fact, the SQL admin had reviewed the SA accounts and removed the SA role from the SCOM SDK user. No problem so far… But the SDK user was not defined in SQL as a SQL user, just as a member of a group.
It turned out that the SQL user had no rights to create the Service Broker service when executing the stored procedure [p_TypeSpaceSetupBrokerService].

Original:
SET @Query = N'CREATE SERVICE [' + @ServiceName + N'] ON QUEUE [' + @QueueName + N'] ([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);';
The stored procedure was changed as follows to authorize dbo, and after that the issue was resolved:
SET @Query = N'CREATE SERVICE [' + @ServiceName + N'] AUTHORIZATION [dbo] ON QUEUE [' + @QueueName + N'] ([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);';
Hopefully when you have stumbled on this page it has saved you some extra troubleshooting…
The new year 2014 is not even a couple of weeks old and the first System Center events are already announced or planned. Don't you just love it when the community is buzzing again, with new and exciting events just around the corner?
My first appointment will be the System Center Night organized by us, System Center User Group Belgium. For the first time in a long while (heck, I can't even remember us ever doing this) we are organizing a 2-track evening, with 2 sessions on CDM and 2 sessions on ECM.
There are still a couple of seats left but they are limited, so if you're not signed up yet make sure to do so. More info on the SCUGbe events page: http://scug.be/events
The second appointment of this year is a week later… and one I'm really looking forward to. As a fan from day one, I'm thrilled to be able to speak at the SystemCenterUniverse (SCU) event in Houston on the 30th of January.

For this I had to battle my way into the SCU_JEDI position to get the slot. Those who voted for me… Thank you, I won't let you down!
The cool thing about this event is not only the out-of-this-world list of speakers and the agenda (check it out here), but also the fact that it has been broadcast across the globe in HD from the very beginning. This gives everyone the opportunity to tune in for free and witness the event live from their own living room, business or even with their own local user group. That's right, user groups around the world are organizing simulcast parties. If you want to join them, check whether there's one near your location and jump in: http://www.systemcenteruniverse.com/venue.htm

But another cool thing is that you can really interact with the event… Right from the start Rod Trent has provided great coverage on social media during and after the event. You can really engage with the event in Houston and ask questions to the panel. This is in my view a huge plus for all the people viewing from abroad, and an extra channel through which you can experience the event and get all the inside info…
Follow @rodtrent or check the official hashtag #scu2014 for more info on the event.
My session will be about monitoring your cloud with System Center Operations Manager:
What is that strange Interstellar cloud floating through space holding all your servers, services, data, etc.? Make this not a huge unknown in your universe but send out your probes to get the data back to your mother ship and start monitoring it. Use the force of this massive cloud to even monitor your servers at the mother ship. The possibilities are out there… Just grab them, combine the forces and become a true master of your universe.
It's scheduled for 2:35pm – 3:20pm Texas time (approx. 10PM Brussels time), so tune in to the simulcast if you want to check out my session.
The closest Simulcast party for Belgium and Netherlands is held by ScugNL in Hilversum. More info here: http://www.scug.nl/events/system-center-universe-2014-american-texas-pizza-sessie/
So I hope to see you all live or virtually at System Center Universe 2014.
If you want to get in touch: connect and drop me a line on Twitter: @DieterWijckmans
System Center Universe is back in full force on the 30th of January to bring you top-notch System Center content for the third year in a row. This event is held in Houston, Texas, but spread through the entire galaxy via a high-quality live stream reaching all the System Center astronauts throughout the world.

To give someone a chance to share his System Center force, the SCU_Jedi contest was held. This epic journey to find the true SCU_Jedi is now in its final stage…
After the first stage the SCU_Jedi council selected a top 3 of the applications to enter the final round.

I was selected with my "Combine the force of the cloud and SCOM" session. As the only non-American contestant I'm up against 2 other great candidates. In this final round, however, it is no longer in our hands, so I call upon you…

Whoever has the most votes by 15/12 on his YouTube video posted in the SystemCenterUniverse channel wins the right to participate in person in this awesome event…
Therefore I would be so grateful if you could like my video on YouTube to get me there one like at a time:
http://www.youtube.com/watch?v=L81cv1bbogo
Please give me the opportunity to share my knowledge by giving this session. Every like counts, so spread the word and get more SCOM content on this great event which is growing every year!
It would be a privilege to participate…
and remember…
Keep monitoring the force!
In the constant quest to keep your environment running, disk space is one of the things that needs to be available to satisfy your organization's continuously growing hunger for storage.

The price of storage has dropped significantly over the last years, but unfortunately the demand for more storage has grown as well, as files are getting bigger and more and more data is kept.

SCOM has had different processes over the years to make sure you are properly alerted when disk space is running low. In this post I will show you my method of keeping an eye on all the available disk space. This is, however, my point of view and open for discussion as usual.
I started this blog post because of a case I received from one of my customers:
My initial response was: great, let's get Orchestrator in here to handle the better part of the logic. The answer was, as predicted => no.
Ok so let’s break this up in the different categories:
Note: I already created a management pack for this scenario, but I'm explaining the scenario thoroughly so you can use this guide for other monitoring scenarios as well.
Download the mp from the gallery:
We are in luck, because SCOM already has the ability to monitor on both conditions mentioned above (free MB left AND % free space). This was the case in the logical disk monitor and it is still present today, BUT (yep, there will be a lot of BUTs in this post) it is not the case in the Cluster and Cluster Shared Volumes (CSV) monitors. Those use the new kind of disk space monitoring, where the previous single monitor with double thresholds is divided into 2 separate monitors with a rollup monitor on top. In my opinion a good decision.

So at this point we can use the same method for all the different kinds of disks: 2 monitors with 1 rollup monitor on top. GREAT.
So let’s start configuring them! Fill in all the different thresholds and you are good to go right?
In theory yes… but in this case not quite. One of the big hurdles is the fact that a monitor can only fire off one notification as long as it is not reset to healthy. As we need a notification on both warning and error, we have an issue here. The notification process is by design built so that you will only receive an alert once, for either warning or error, on the monitor.

Because we need both a warning AND an error, we need to create additional monitors to cope with this requirement.
This is in fact how I tackled this issue.
To make sure we can have the ability to act on both thresholds we will need to create 3 monitors: Rollup monitor, Free Space Monitor (%) and Free Space Monitor (MB) like the one which ships out of the box.
So let’s get at it:
Note: I'm using the console to quickly create the management pack, to show you how to solve this issue with a minimum of authoring knowledge. However, I advise you to dig deeper into the different authoring solutions for SCOM.
Note: All the necessary monitors are already in the management pack included in this post. I merely describe the process here so you can use this method to do the same thing in another scenario.
A rollup monitor does not check a condition itself but reacts to the state of the monitors beneath it. Therefore we have to create it first. To make sure it shows up right under the other monitors we keep the same naming but add the word WARNING at the end.
Open the monitor tab and choose to create a monitor => Aggregate Rollup Monitor…
Fill in the name of the monitor
In this case we want the best state of any member to roll up, because we want both MB free AND % free to be true, and thus in warning state, before we want to be alerted:

We would like to have an alert when there's a warning on both monitors underneath this monitor, so we change the severity to Warning.

To make sure our new rollup monitor is correctly influenced by the monitors underneath, we now need to create the monitors with the conditions MB free and % free.
These are included in the management pack as well. Note that when you create each monitor, you need to select the appropriate rollup monitor it should reside under, like shown below:
For the performance counter in this case I used these parameters:
object: $Target/Property[Type="Windows5!Microsoft.Windows.Server.ClusterDisksMonitoring.ClusterDisk"]/ClusterName$
Counter: % Free Space
Instance: $Target/Property[Type="Windows5!Microsoft.Windows.Server.ClusterDisksMonitoring.ClusterDisk"]/ClusterDiskName$$Target/Property[Type="Windows5!Microsoft.Windows.Server.ClusterDisksMonitoring.ClusterDisk"]/ClusterResourceName$
NOTE: Make sure to turn off the alerting of these monitors, as we do not want to receive individual alerts but just the alert of the rollup monitor.
If you have created the monitors correctly it should look like this:
As you can see the monitors are now shown right beneath the actual monitors.
You can use this scenario for basically all cases where you would otherwise get double tickets for the same issue caused by the same 3-state monitor.
Now that we have the warning condition set with the appropriate thresholds, we need to do the same for the out-of-the-box monitor, so it only shows us an alert when both critical conditions are met.

Therefore we need to override it with the proper thresholds and configuration:

For the rollup monitor we want it to generate an alert when both critical conditions are met, therefore we set the following overrides to true:

For the alerting part we only want to be alerted on the critical state, because otherwise the 2 sets of monitors will interfere with each other. Therefore we set Alert on State to "critical health state", and last but not least the rollup algorithm needs to be "best health state of any member", because again we only want to be notified when both conditions are met.

The 2 monitors under the aggregate rollup monitor also need to be updated with the correct thresholds + set to not generate alerts, because otherwise we will have useless alerts; we only want to be alerted when both conditions are met.
After we have created the monitors we need to make a clear distinction between the critical servers and the non-critical servers. This gives us the opportunity to set different thresholds and create different levels of tickets per category of server.

You could create a group of servers with explicit members and go from there. From a manageability standpoint, however, this is not a good idea, as it requires the discipline to add a server to the group when it changes category or is installed. This leaves way too much room for error.

Therefore we are going to create groups based on an attribute which is detectable on the servers. In this case I set a regkey on the servers identifying whether a server is critical or not. This can easily be done by running a script through SCCM or during the build of the server.
Note: Do this in a separate management pack from the one you use for your monitors, as this management pack, if sealed, can be reused throughout your entire environment.
To create the attribute go to the authoring pane and under management pack objects select the attributes
Create new attribute
In this case I name it Critical server.
In the discovery method we need to tell SCOM how the attribute will be detected. In this case I chose to use a regkey.

As the target you select Windows Server, and the target will automatically be set to Windows Server_Extended.
The management pack should be the same management pack as your groups will reside in because we need to operate within the same unsealed management pack.
So after we filled in all the parameters it should look like this:
The last thing to do is identify the key which is monitored by SCOM.

In my case it's HKEY_LOCAL_MACHINE\Category\critical
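As a sketch of how such a key could be pushed out (for instance via SCCM or during server build; run elevated, and note the path and value are the ones used above):

```powershell
# Create the key and flag this server as critical;
# the dynamic group rule later checks for the value "true"
New-Item -Path 'HKLM:\Category' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\Category' -Name 'critical' -Value 'true' -PropertyType String -Force | Out-Null
```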
Next up is creating both our groups: critical and non-critical servers.

Create a new group for the critical servers:

Check out the Dynamic Members rules.

Select the Windows Server_Extended class and check whether the property Critical server Equals True.

The group will now be populated with all servers where this key has the value "true".

The only thing left to do is the opposite: a group containing only the servers that do not have this key set to true.
Because we now have all the building blocks to divide warning and error across both groups of servers, the only thing left to do is create the notification channels with the desired actions configured.

I ended up with 3 scenarios, with notifications to match the requirements:
Notification 1:
I want to be alerted for a critical alert on the Critical servers and create a high priority ticket through my notification channels.
Notification 2:
I want to be alerted for a critical alert on the non-critical servers and create a normal priority ticket through my notification channels.
Notification 3:
I want to be alerted for a warning alert on both the critical servers and the non critical servers and send out a mail through my notification channels.
The next steps, getting the tickets out of SCOM and into your organization, should be configured specifically for your environment, but at this point the different scenarios are covered.

The last thing on the list was to reset the monitors on a daily basis, so we are sure we keep getting alerts as long as the condition is not resolved. This is accomplished by using my reset-monitors-of-specific-type script, which I documented in this blog post: http://scug.be/dieter/2013/10/23/scom-batch-reset-monitors-through-powershell/

This blog post covers all the different questions in this scenario + we did not have to build any complex logic outside of SCOM, but used the technology within SCOM to accomplish our goal.
The last thing I would recommend is to seal the management pack used for the group creation. That way you can reuse this in other unsealed management packs as well to make a difference between critical and non critical servers.
Again you can use this approach for all different monitors.
Monitors have been a very useful addition to SCOM since SCOM 2007 came out back in the day. However, for a lot of fresh SCOM administrators the alerts generated by monitors can sometimes create headaches.

An alert is raised when a state changes and closed when the state changes back to the healthy condition. This is the really short version…

If you speak to advanced SCOM admins they will all agree that managing monitor-generated alerts can be tricky from time to time if you work with operators.

If at some point an operator closes an alert in the console which was generated by a monitor, while the condition for that monitor has not changed, the monitor will remain in an unhealthy state until a forced reset is done on the monitor itself.

We all know how many monitors are floating around in our environments, so it's just a disaster waiting to happen. Therefore it is wise to reset the unhealthy monitors for your core business services regularly, until everybody is aware of the fact that they cannot close alerts from a monitor…

However, I also use this setup for another annoying thing that can have a great impact on your environment. Again, this is a scenario to rule out human error.

Therefore I created this small PowerShell script in combination with a bat file. It just resets the health of the unhealthy monitors of a specific type you specify. The only thing left to do is create a scheduled task for the bat file and you are good to go.
The script can be downloaded at the Gallery together with the bat file.
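As a rough sketch of what the script does under the hood (cmdlet and method names are from the OperationsManager module; the Gallery version adds parameter handling and logging, so treat this as an illustration only):

```powershell
Import-Module OperationsManager
New-SCManagementGroupConnection -ComputerName $env:computername

# The monitor display name as copied from the monitor properties
$monitor = Get-SCOMMonitor -DisplayName "Logical Disk Fragmentation Level"

# Reset every unhealthy instance of the monitor's target class;
# resetting the monitor also auto-closes its associated alerts
Get-SCOMClass -Id $monitor.Target.Id |
    Get-SCOMClassInstance |
    Where-Object { $_.HealthState -eq 'Warning' -or $_.HealthState -eq 'Error' } |
    ForEach-Object { $_.ResetMonitoringState($monitor) }
```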
Example: Fragmentation level is high and we want to be alerted everyday again as long as the condition remains:
Check the monitor properties to retrieve the monitor display name:
In this case it's "Logical Disk Fragmentation Level". Copy/paste the name.
Fill in the name in the batch file and run it.
The unhealthy monitors will be reset and their alerts are automatically closed in the console.
If we check the monitor again, it is now forced to reset its state, and it will fire again the next time it checks the unhealthy condition, if it is still true.

This way you will receive a new alert every time this script runs. You could also schedule it during the helpdesk shift change, so they start with a clean sheet and get a clear view of the current situation in your environment.
On the 11th of June I gave a LiveMeeting on how to get started quickly with SCOM 2012.

I started right after a fresh install of SCOM and went through the routine of getting you started quickly by:
The webcast is available on technet.
Get it here: http://technet.microsoft.com/en-us/video/so-you-ve-successfully-installed-scom-now-what
This livemeeting was the first of many more to come in this series to get you up and running fast.
So you are quietly working in your office on a cloudy morning when all of a sudden the IT manager walks in and drops a bomb:
“Hey we are going to use System Center Operations Manager to monitor our systems from now on so throw away all your little monitoring tools and get at it…”
This actually happened to me once, and I would have prayed for a livemeeting like this. The installation is in some cases already a hassle, but then you are looking at a pristine console, ready to start monitoring the systems and save the day… Now what?

This livemeeting starts right after you have survived the installation procedure and provides you with a roadmap to get up to speed quickly. Expect a demo-packed session spiced with a lot of tips and tricks from my personal experience installing SCOM in different environments.

Let's get you from SCOM zero to SCOM hero…
Register here (cape not included ): https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032549855&Culture=en-us&community=0
When setting up a new SCOM environment with a lot of clusters, Exchange servers and DCs involved, alerts that agent proxying is not enabled will quickly pop up. This is in fact one of the most common alerts you get when starting to roll out agents and management packs.

This setting is set at agent level and allows the agent to forward data to the management server on behalf of another entity. This basically means the agent can send info about another entity. Common scenarios are a DC sending info on behalf of the domain, or a cluster node sending info about the cluster resources.

In various management pack guides the agent proxy setting is documented as mandatory for the initial discovery (the cluster management pack, for example), so if you did not read the guide and forgot this setting, the discovery will just not work.

This setting is disabled by default. SCOM will detect when data is sent by an agent that did not originate from its own entity and will alert you about it. But that's it. No further action is taken.
You can manage this manually by browsing to the Administration pane => Agent Managed, opening the properties of the agent and checking the "Allow this agent to act as a proxy and discover managed objects on other computers" tick box.
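The same tick box can also be flipped for a single agent from the Operations Manager Shell; a quick sketch (the hostname is of course just an example):

```powershell
# Enable agent proxying for one specific agent
Get-SCOMAgent -DNSHostName "VSERVER001.contoso.com" | Enable-SCOMAgentProxy
```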
But this can be a hassle especially in a new management group.
There are various scripts out there to enable the agent proxying option on all agents. This however could pose a security risk if malicious data comes into your management group and floods your management server.

Therefore I'm in favor of a more selective approach.
So this is my short solution to automate this process.
First take a look at the alert. One of the most common misunderstandings is that it is not the alert source which needs to have the agent proxying option enabled (in this case VSERVER03), but the server in the alert description (in this case VSERVER001).
This alert is generated by the operations management packs which are installed by default so no tweaking required here.
My solution to automate this process is to use a PowerShell script in combination with a notification channel to react on the alert shown above.
#=====================================================================================================
# AUTHOR: Dieter Wijckmans
# DATE: 10/05/2013
# Name: set_proxy_enabled.PS1
# Version: 1.0
# COMMENT: Automatically activate agent proxy through notification channel
#
# Usage: .\set_proxy_enabled.ps1 <AlertID>
#
#=====================================================================================================
Param ([String]$sAlertID)

### Prepare environment for run ###
## Read out the management server name
$inputScomMS = $env:computername

# Initialize the OpsMgr 2012 PowerShell provider
Import-Module -Name "OperationsManager"
New-SCManagementGroupConnection -ComputerName $inputScomMS

# Get the alert details and flag it as handled in CustomField1
$oAlert = Get-SCOMAlert | Where-Object { $_.Id -eq $sAlertID }
$oAlert.CustomField1 = "agent proxy enabled"
$oAlert.Update("")

# Get the FQDN of the agent to set the proxy for
# (it is between brackets in the alert description)
$description = ($oAlert.Description).ToString()
$agentname = $description.Split('()')[1].Trim()

# Set the agent proxy setting
Get-SCOMAgent -DNSHostName $agentname | Enable-SCOMAgentProxy -PassThru
exit
download the script here:
In a nutshell the following steps will be performed:
Note: I'm also updating CustomField1 here to make sure the script ran correctly.
So on to the configuration of our notification:
Navigate to Administration => Notifications => channels
Right click and choose new notification channel:
Name your command notification channel:
Fill in the following (update with your respective paths of course):
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
"c:\scripts\set_proxy_enabled.ps1" '$Data/Context/DataItem/AlertId$'
C:\Windows\System32\WindowsPowerShell\v1.0
Move on to the Subscribers:
Click add
Fill in a name:
Configure the subscriber with the channel we just created:
Click Finish twice.
Set up the subscription:
Create a new subscription:
Choose the criteria. In this case we want to trigger this subscription when the "Agent proxy not enabled" rule logs an alert.

Select the addresses (I chose to send a mail to myself as well, as a backup option).
Select the channels
And save
Now wait for an alert, check the alert details for our update of custom field 1, and verify that the tick box is now enabled.

If you have any questions, make sure to drop me a line in the comments or ask your question via Twitter (better monitored than the comments).