Creating Service Manager Incidents

Hello, just wanted to share a quick post on creating incidents in Service Manager 2012 R2.

I created a custom incident class, SevOne.PAS.Workitem.Incident, for my new monitoring device. It extends the existing incident class, but I was completely unable to create new incidents with my extended properties.

I kept getting an error saying that the “simple property x” didn’t exist.  After way too much troubleshooting I found out that the property names are case sensitive.  The moral of the story: even if you’re working in PowerShell, where most things are case insensitive, it pays to pay attention to case when typing.  It’s a mistake I hopefully won’t soon repeat.
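For anyone hitting the same error, here’s a rough sketch of the pattern I was using via the SMLets module. The class name matches my custom class above, but the property names and values are purely illustrative; check your own class definition for the exact (case-sensitive) names.

Import-Module SMLets

# Grab the extended incident class; the '$' anchors the name match
$class = Get-SCSMClass -Name 'SevOne.PAS.Workitem.Incident$'

# Property names must match the class definition exactly, including case.
# 'Title' works where 'title' throws the "simple property does not exist" error.
New-SCSMObject -Class $class -PropertyHashtable @{
    Title       = 'SevOne alert - device unreachable'
    Description = 'Generated by the SevOne monitor'
    Impact      = 'Medium'
    Urgency     = 'Medium'
    Status      = 'Active'
} -PassThru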


Pester: Unit Testing for PowerShell

Super handy intro to Pester.

Johan Leino

Pester is a BDD inspired testing framework for PowerShell just like Jasmine is on the JavaScript side and RSpec is on the Ruby side for example (I do believe that JavaScript is one of the best comparisons for PowerShell though, it being a scripting language and all).
This blog post is meant as a quick introduction to what I think are the key take-aways of Pester and how I use it.


So why on earth should you test your PowerShell scripts, are they not meant to be ad-hoc?
I do believe, much like in any other language, that sometimes you don’t have the need to test your code (say what!?). Sometimes there’s definitely a place for ad-hoc code that just executes once or twice to accomplish something for the time being.

However when you’re constructing something that will survive over time and that others may use and improve, testing…

View original post 1,662 more words


When to write a module and why you should write tests.

Today I’m talking about something a little broader than usual.  I’ve started trying to get comfortable with Pester, which is a framework for writing tests in PowerShell.  The first thing I’ve figured out with Pester is that I’m really not writing well-formed scripts.  They’re very hard to test because they aren’t broken out into distinct elements.
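To give a feel for what those tests look like, here’s a minimal sketch of a Pester test file. The function and file names are hypothetical, and the Should syntax below is the dashed form used by Pester 4 and later (older versions wrote Should Be without the dash).

# Hypothetical test file, e.g. LogTools.Tests.ps1, sitting next to the module it tests
Import-Module "$PSScriptRoot\LogTools.psm1" -Force

Describe 'Get-LogEntry' {

    It 'returns nothing for an empty log file' {
        # TestDrive: is a temporary PSDrive that Pester cleans up for you
        New-Item -Path 'TestDrive:\empty.log' -ItemType File | Out-Null
        Get-LogEntry -Path 'TestDrive:\empty.log' | Should -BeNullOrEmpty
    }

    It 'throws when the file does not exist' {
        { Get-LogEntry -Path 'TestDrive:\missing.log' } | Should -Throw
    }
}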

This prompted me to refactor just about all of my core scripts, an unpleasant but important process.  One of the things that occurred to me is that, up until now, I have been using modules as a bit of a last resort.  I try not to write them if I don’t have to, and I often just load functions into an existing catch-all tools module.  I just didn’t like putting in the effort to write a module: you need to remember where you stored it and remember to import it in your script*.

So now, looking at the way I bundle my scripts, I’ve decided that I like module files.  I can store variables and functions that are used by my script but never exposed to it.  I can keep my functions bundled together, and I can nest other modules.  It’s pretty handy.  It also keeps my actual scripts really short and makes it easier to add functionality: I can add a new function to a module or just change the way I use my existing functions.
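As a concrete illustration, a script-specific module file looks roughly like this; the variable and function names are invented for the example.

# MonitorTools.psm1 (hypothetical) - loaded by the monitor script with Import-Module

# Module-scoped variable: usable by the functions below, never exposed to the caller
$script:LogRoot = 'D:\Logs\App'

function Get-AppLogPath
{
    [CmdletBinding()]
    param ([string]$Name)

    Join-Path -Path $script:LogRoot -ChildPath "$Name.log"
}

# Export only what the monitor script actually needs
Export-ModuleMember -Function Get-AppLogPath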

My job is largely writing new monitors for large enterprise applications in PowerShell.  I’ve now taken to having every single monitor script call its own module.  A monitor gets stored in its own folder with at least a monitor script and a module file.  I also add other scripts that support the monitor in there as well, like my tests file, so I can validate my monitor after I change it, and the configuration data for the scheduled job that runs it.  It took me a bit to get used to, but I’ve now started writing a lot of module files and using them in a lot of my scripts.  Anything that can be used by multiple scripts goes into a core module; everything else goes into a script-specific module.
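The folder for a single monitor ends up looking something like this (all of the names here are invented for the example):

SevOneMonitor\
    Invoke-SevOneMonitor.ps1     # the monitor script itself
    SevOneMonitor.psm1           # script-specific module the monitor imports
    SevOneMonitor.Tests.ps1      # Pester tests to validate the monitor after changes
    ScheduledJobOptions.psd1     # configuration data for the scheduled job that runs it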

I was always hesitant to add external dependencies to my scripts; I worried about breaking something unintentionally.  Well, it turns out that if you aren’t writing tests for your scripts, you are constantly at risk of having them break.  The worst part is that if you aren’t testing on a regular basis, your only way of knowing a script is broken is when you’re confronted with the results of a failure.  I’ve added external dependencies, but I also added tests to give me confidence that my production scripts are doing what they’re supposed to do.  I feel a lot more confident about my monitors and scripts now than I did before, because when one of them stops doing what it’s supposed to do I know right away.  Production environments can change on the fly, and it’s pretty easy to have a script stop working because someone changed the way an app works.  With good tests you’ll notice something isn’t right well before anyone else does, and that feels really nice.  And with well-modularized scripts it’s really easy to adapt to changing circumstances.

OK, so that’s all I have for today. I’d really love feedback on this: How do you feel about modules?  Do you already test your scripts?  Is this idea interesting to you, or does it sound odd in some way?

Thanks a lot for taking the time to read this!

*This only applies if the module isn’t in your PSModulePath and/or you’re on PowerShell v2 or below.


Denver PowerShell User Group

Hey guys, 

The Denver PowerShell user group will be meeting again on Sept 4th.  If you’re interested in coming, please sign up on Meetup:

Denver PowerShellers


Strings and Scriptblocks

Hello again,

This will be another short article. I ran into an issue this week where I was trying to cast a string to a scriptblock and it absolutely wouldn’t work.  It looks like this is covered in a bunch of places; I’ve got one example here.

So without a lot of talking here’s how you do it:

You use the [scriptblock] type accelerator and its static Create() method to convert a string to a scriptblock.

Just a quick tip: don’t actually include the curly braces, {}, in the string you pass to Create().


[scriptblock]::Create(" Some arbitrary code, I can even add a $Variable ")
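For example, here’s a quick sketch of building a scriptblock from a string and invoking it; the command in the string is just an illustration. Note that with double quotes the variable expands when the string is built, before the scriptblock is created.

$path = 'C:\Windows'

# $path expands here, so the scriptblock contains the literal path
$sb = [scriptblock]::Create("Get-ChildItem -Path $path -File | Measure-Object")

# Invoke it with the call operator
& $sb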

I hope that’s useful to some people. Go ahead and ping me here or on twitter @rjasonmorgan if you have any questions.

A handy article on Scriptblocks in general courtesy of Rob Campbell, @Mjolinor

http://mjolinor.wordpress.com/2012/03/03/some-observations-about-powershell-script-blocks/


Create a Credential Object

Hello again,

Today I’m writing about something fairly simple that is covered in a lot of places.  I find myself searching for this every month or so, so I figured I’d put it on my blog so I remember where to look.

How do you create a credential object in PowerShell without using Get-Credential?  Every once in a while I find I need a script to load alternate credentials internally, so here is how we do it:

 

New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList ('Domain\Username', (ConvertTo-SecureString -String 'password' -AsPlainText -Force))
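As a quick example of putting it to use (the server name here is made up), the resulting object can be passed to anything that accepts a -Credential parameter:

$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList ('Domain\Username', (ConvertTo-SecureString -String 'password' -AsPlainText -Force))

# Use it anywhere a cmdlet accepts -Credential
Invoke-Command -ComputerName 'Server01' -Credential $cred -ScriptBlock { whoami }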

 

And that’s it for today.

Thanks for reading!


Where and why to use Write-Debug

Hello again!

I’ve wanted to write an article on Write-Debug for a while, so here it is.

Just to start, you can get more info on the cmdlet here: http://technet.microsoft.com/en-us/library/hh849969.aspx or by typing
get-help write-debug -full

When you use Write-Debug in a script or function, it doesn’t do anything by default. Debug messages just sit there until you either modify your $DebugPreference or add the -Debug switch when calling the script/function.
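In other words (the script name here is just a placeholder), either of these will surface the debug messages:

# Per call, using the common parameter:
.\Get-Something.ps1 -Debug

# Or for the whole session, by changing the preference variable:
$DebugPreference = 'Inquire'   # prompt at every Write-Debug, like -Debug does
.\Get-Something.ps1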

Quick side note: if you don’t know what common parameters are, check out the about_CommonParameters help topic or view the link here: http://technet.microsoft.com/en-us/library/hh847884.aspx

Debug messages create breakpoints that are activated when you call a script/function with the -Debug switch or with $DebugPreference set to ‘Inquire’. They give you a handy way to drop into debug mode and can provide some valuable information while you’re at it. Generally I try to stick Write-Debug after I’ve defined one or more variables, right before an operation I’m worried may have issues, and right after significant operations. Here’s a simple example:


function Get-FilewithDebug
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory)]
        [string]$path
    )
    Write-Verbose "Starting script"
    Write-Debug "`$path is: $path"
    $return = Get-ChildItem -Path $path -Filter *.exe -Recurse -Force
    Write-Debug "`$return has $($return.count) items"
    $return
}

When I run that function with -debug I get the following results:


[C:\git] > Get-FilewithDebug -path C:\Users\jmorg_000\ -Debug
DEBUG: $path is: C:\Users\jmorg_000\

Confirm
Continue with this operation?
[Y] Yes [A] Yes to All [H] Halt Command [S] Suspend [?] Help (default is "Y"):

From here I can drop into debug mode and look at the $path variable, or any other variable or object state in the context of the function at that moment. It’s fairly straightforward in this example, but it can be invaluable when you’re dealing with more complex scripts or functions. Assuming I’m happy with what I see in the $path variable, I’ll continue on to the next breakpoint; pressing Y allows the operation to continue.

At the second break I’ll hop into debug mode.

DEBUG: $return has 226 items

Confirm
Continue with this operation?
[Y] Yes [A] Yes to All [H] Halt Command [S] Suspend [?] Help (default is "Y"): s
[C:\git] >> $return.count
226
[C:\git] >> exit

In the second example the debug output tells me I have 226 .exe files in $return. I go ahead and check the count on $return and it really is 226. Perfect. I can do anything else I like at that point: I can look at some of the folders I didn’t have access to, I can check all the extensions in $return, I can look at my execution context, whatever. The point is that adding Write-Debug to your scripts gives you a handy way to break into debug mode.

Well, I hope that was useful. I certainly had fun writing it, and I definitely think adding Write-Debug to your scripts and functions is well worth the effort.


Set-EnvVariable

Hey guys, this is just a quick post to cover a new function I posted up on TechNet:

http://gallery.technet.microsoft.com/Set-Environment-Variable-0e7492a3

There are a couple of neat features in this function. Not in the functionality per se; I’m sure setting an environment variable isn’t new to anyone. This function is cool because of some of the surrounding elements. First off, there is the parameter validation:


# Enter the name of the Environment variable you want modified
[Parameter(Mandatory,
           ValueFromPipelineByPropertyName,
           ParameterSetName='Default')]
[Parameter(ParameterSetName='Concat')]
[ValidateScript({$_ -in ((Get-ChildItem -Path env:\).Name)})]
[string]$Name

So, because it’s a Set-* type function, I only wanted to modify existing variables. That’s where [ValidateScript()] comes in. It’s not as good as [ValidateSet()], but it’s more dynamic: it checks everything that’s in the env: drive when it runs. I didn’t want a static set because I want it to adapt to anyone’s environment. I lose tab completion, but I still get the validation I wanted.

The other thing you might notice is that there are two parameter sets. I did that because I wanted to support two different behaviors.

The function exists to modify existing environment variables, but it needs to act in one of two ways: either it overwrites the existing value, or it concatenates the new value onto the existing value, as when adding a directory to the Path. So Set-EnvVariable has two parameter sets: Default, which is for the overwrite behavior, and Concat, which is for the appending behavior.


# Enter separator character, defaults to ';'
[Parameter(ParameterSetName='Concat')]
[ValidateLength(0,1)]
[string]$Separator = ';',

# Set to append to current value
[Parameter(ParameterSetName='Concat')]
[switch]$Concatenate
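Usage ends up looking roughly like this; note that -Value is my guess at the remaining parameter name, so check the gallery download for the actual signature:

# Overwrite an existing variable (Default parameter set)
Set-EnvVariable -Name TEMP -Value 'D:\Temp'

# Append a directory to Path (Concat parameter set)
Set-EnvVariable -Name Path -Value 'C:\Tools' -Concatenate -Separator ';'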

You’ll also notice, if you use it, that the function always prompts for confirmation. It has a big impact: by default it wipes out the value of an existing environment variable, which is not something to do lightly.


[CmdletBinding(SupportsShouldProcess=$true,
               ConfirmImpact='High',
               DefaultParameterSetName='Default')]

Anyhow, that’s about all that’s novel in this function. It’s cool and I like it a lot; please take the time to download it and give it a rating.


Setting Up a NuGet Feed For Use with OneGet

Really cool article. I think it’s likely a bunch of us will be hosting private NuGet repositories in place of SCCM servers before too long.

Learn Powershell | Achieve More

I blogged recently about using OneGet to install packages from an available NuGet feed. By default you can access the chocolatey provider, but you can actually build out your own local repo to host packages for your internal organization. One of my examples of adding the package source and installing a package from the source was done using a local repo that I had built.

Building the local repo didn’t take a lot of time and, for the most part, it didn’t really involve a lot of work to get it up and running. There is a well-written blog post about it, but it only covers the Visual Studio piece and nothing about the IIS installation (it will actually use IIS Express by default) or about downloading and installing the NuGet.Server package (both of which I will be doing using none other than PowerShell).

Another reason for…

View original post 1,102 more words


Reading log files, Quickly! – Part 1

Hello again everyone!

Today I wanted to talk about something I’ve been struggling with for a couple of days (although more accurately weeks and months): how to read log files effectively and efficiently. I work for a managed services organization, which means we provide IT management services for clients.  These are typically large clients with large, often application-specific, environments. I often find myself setting up custom monitors that need to read through log files for specific entries.

The problem I was facing is that I was reading gigabytes of logs every few minutes, and my queries were having trouble keeping up with the volume of logs being generated. To compound that, I had multiple scripts reading the same log files at different times, which means the same logs were being accessed and read multiple times while looking for slightly different entries.

So today I want to address the first issue I saw around reading log files: What is the quickest, and most memory efficient, way to read log files?

I performed the following tests on a 50 MB log file:

 

Measure-Command -Expression {Get-Content .\logs\somelog.log}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 14
Milliseconds      : 695
Ticks             : 146956836
TotalDays         : 0.000170088930555556
TotalHours        : 0.00408213433333333
TotalMinutes      : 0.24492806
TotalSeconds      : 14.6956836
TotalMilliseconds : 14695.6836

 

Not too bad, but not super fast.  This command also drops the file into the pipeline line by line, and it manages to be the least memory efficient option. I’m not entirely sure why, but it balloons the amount of memory required to multiple times the size of the file being read.  That wouldn’t be a problem on its own, but when you’re trying to read 50-100 files of 50 MB each, you can run out of memory quickly.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 100}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 1
Milliseconds      : 282
Ticks             : 12829064
TotalDays         : 1.48484537037037E-05
TotalHours        : 0.000356362888888889
TotalMinutes      : 0.0213817733333333
TotalSeconds      : 1.2829064
TotalMilliseconds : 1282.9064

 

We’re starting to see a significant improvement here. This sends 100-line arrays into the pipeline.  Definitely better, but not ideal for my purposes.
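One thing worth seeing is what that does to anything downstream: each pipeline object is now an array of lines rather than a single line. The ERROR pattern here is just an illustration.

Get-Content .\logs\somelog.log -ReadCount 100 |
    ForEach-Object {
        # $_ is a 100-line string array; -match against an array
        # returns only the elements that match
        $_ -match 'ERROR'
    }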

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 1000}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 550
Ticks             : 5508710
TotalDays         : 6.37582175925926E-06
TotalHours        : 0.000153019722222222
TotalMinutes      : 0.00918118333333333
TotalSeconds      : 0.550871
TotalMilliseconds : 550.871

 

Pretty rocking performance-wise.  1000-line arrays are being created here.  I’ve heard from a number of people that the best-performing value varies with the log file, but generally it’s smart to start looking between 1000 and 3000 for your ReadCount.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 0}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 493
Ticks             : 4939466
TotalDays         : 5.71697453703704E-06
TotalHours        : 0.000137207388888889
TotalMinutes      : 0.00823244333333333
TotalSeconds      : 0.4939466
TotalMilliseconds : 493.9466

 

This is definitely the right area; on my log files this is usually the fastest, but it’s still creating a string array (the whole file comes out as one array of lines).  That’s not necessarily a bad thing, but it ended up being important for me when I was actually scanning for entries.  I’ll get to that in the next post.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -raw}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 410
Ticks             : 4106764
TotalDays         : 4.75319907407407E-06
TotalHours        : 0.000114076777777778
TotalMinutes      : 0.00684460666666667
TotalSeconds      : 0.4106764
TotalMilliseconds : 410.6764

 

Before I get to the performance, I just want to talk about the -Raw parameter. It switches Get-Content’s behavior so that it reads the entire file as a single string. That may or may not be what you want in your environment, but it’s crucial if you’re doing a multi-line regex match. More on that later.
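Just to make that concrete before moving on, here’s a tiny sketch of the kind of match -Raw enables; the pattern is purely illustrative.

# The whole file is one string, so a single pattern can span lines
$text = Get-Content .\logs\somelog.log -Raw

# (?sm): '.' matches newlines and '^' matches line starts; grab everything
# from an ERROR line up to the next blank line
[regex]::Matches($text, '(?sm)^ERROR.*?(?=\r?\n\r?\n)') |
    Select-Object -First 5 -ExpandProperty Value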

I know the -Raw run looks faster than the -ReadCount 0 example, but -ReadCount 0 seemed to beat it more often than not.  Regardless, both commands were extremely close performance-wise, usually a 50-100 millisecond difference.  For me, the choice of one over the other had more to do with my next operation than with the read time. What these tests showed me is that when you’re using Get-Content it is really important not to accept the default ReadCount value. Or, more accurately, it’s important if the files are large and performance matters, like when you’re doing a log scrape for monitoring.

Next week I’ll be looking at the performance around actually scanning the log files for relevant data.  Thanks for reading!

 
