Strings and Scriptblocks

Hello again,

This will be another short article. I ran into an issue this week where I was trying to cast a string to a scriptblock and it absolutely wouldn't work. This is covered in a bunch of other places; here's one example.

So without a lot of talking, here's how you do it:

You use the [scriptblock] type accelerator and its static Create method to convert a string to a scriptblock.

Just a quick tip: don't include the curly braces, {}, in the string you pass in.


[scriptblock]::Create("Some arbitrary code, I can even add a $Variable")
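Here's a slightly fuller sketch of how that plays out (the command text and variable are placeholders I made up):

$Variable = 'C:\Temp'
$sb = [scriptblock]::Create("Get-ChildItem -Path $Variable")   # double quotes expand $Variable now, before the scriptblock exists
& $sb                                                          # invoke it like any other scriptblock

Note that if you single-quote the string instead, $Variable stays in the scriptblock as literal text and only gets resolved when the scriptblock actually runs.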

I hope that's useful to some people. Go ahead and ping me here or on Twitter @rjasonmorgan if you have any questions.

A handy article on Scriptblocks in general courtesy of Rob Campbell, @Mjolinor

http://mjolinor.wordpress.com/2012/03/03/some-observations-about-powershell-script-blocks/


Create a Credential Object

Hello again,

Today I'm writing about something fairly simple that is covered in a lot of places. I find myself searching for this every month or so, so I figured I'd put it on my blog so I remember where to look.

How do you create a credential object in PowerShell without using Get-Credential? Every once in a while I find I need a script to load alternate credentials internally, so here is how we do it:

 

New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList ('Domain\Username', (ConvertTo-SecureString -String 'password' -AsPlainText -Force))
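As a quick, hypothetical usage sketch (the server name and remote command are made up), you'd typically capture the object and hand it to anything with a -Credential parameter:

$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList ('Domain\Username', (ConvertTo-SecureString -String 'password' -AsPlainText -Force))
Invoke-Command -ComputerName Server01 -Credential $cred -ScriptBlock { Get-Service }   # Server01 is a placeholder

Keep in mind the plain-text password ends up sitting in your script, so this approach is best reserved for lab work or tightly controlled scenarios.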

 

And that’s it for today.

Thanks for reading!


Where and why to use Write-Debug

Hello again!

I've wanted to write an article on Write-Debug for a while, so here it is.

Just to start, you can get more info on the cmdlet here: http://technet.microsoft.com/en-us/library/hh849969.aspx or by typing
get-help write-debug -full

When you use Write-Debug in a script or function, it doesn't do anything by default. Debug messages just sit there until you either modify $DebugPreference or add the -Debug switch when calling the script or function.
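To make that concrete, here's a minimal sketch of both approaches (Get-FilewithDebug is the example function defined later in this post, and C:\Temp is just a placeholder path):

Get-FilewithDebug -path C:\Temp -Debug      # per-call: the common -Debug switch surfaces the messages

$DebugPreference = 'Inquire'                # session-wide: every Write-Debug now prompts
Get-FilewithDebug -path C:\Temp
$DebugPreference = 'SilentlyContinue'       # back to the default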

Quick side note: if you don't know what common parameters are, check out the about_CommonParameters help topic or view the link here: http://technet.microsoft.com/en-us/library/hh847884.aspx

Debug messages act like breakpoints that are activated when you call a script or function with the -Debug switch, or with $DebugPreference set to 'Inquire'. That gives you a handy way to drop into debug mode and pick up some valuable information while you're at it. Generally I try to stick a Write-Debug after I've defined one or more variables, right before an operation I'm worried may have issues, and right after significant operations. Here's a simple example:


function Get-FilewithDebug
{
    [cmdletbinding()]
    Param
    (
        [parameter(Mandatory)]
        [string]$path
    )
    Write-Verbose "Starting script"
    Write-Debug "`$path is: $path"
    $return = Get-ChildItem -Path $path -Filter *.exe -Recurse -Force
    Write-Debug "`$return has $($return.count) items"
    $return
}

When I run that function with -debug I get the following results:


[C:\git] > Get-FilewithDebug -path C:\Users\jmorg_000\ -Debug
DEBUG: $path is: C:\Users\jmorg_000\

Confirm
Continue with this operation?
[Y] Yes [A] Yes to All [H] Halt Command [S] Suspend [?] Help (default is "Y"):

From here I can drop into debug mode and look at the $path variable, or any other variable or object state in the context of the function at that moment. It's fairly straightforward in this example, but it can be invaluable when you're dealing with more complex scripts or functions. Assuming I'm happy with what I see in the $path variable, I'll press Y to let the operation continue on to the next breakpoint.

At the second break I’ll hop into debug mode.

DEBUG: $return has 226 items

Confirm
Continue with this operation?
[Y] Yes [A] Yes to All [H] Halt Command [S] Suspend [?] Help (default is "Y"): s
[C:\git] >> $return.count
226
[C:\git] >> exit

In the second example the debug output tells me I have 226 .exe files in $return; I check the count on $return and it really is 226. Perfect. I can do anything else I like at that point: look at some of the folders I didn't have access to, check all the extensions in $return, inspect my execution context, whatever. The point is that adding Write-Debug to your scripts gives you a handy way to break into debug mode.

Well, I hope that was useful. I certainly had fun writing it, and I definitely think adding Write-Debug to your scripts and functions is well worth the effort.


Set-EnvVariable

Hey guys, this is just a quick post to cover a new function I posted on TechNet:

http://gallery.technet.microsoft.com/Set-Environment-Variable-0e7492a3

There are a couple of neat features in this function. Not in the functionality per se; I'm sure setting an environment variable isn't new to anyone. This function is cool because of some of the surrounding elements. First off, there is the parameter validation:


# Enter the name of the Environment variable you want modified
[Parameter(Mandatory,
    ValueFromPipelineByPropertyName,
    ParameterSetName='Default')]
[Parameter(ParameterSetName='Concat')]
[ValidateScript({$_ -in ((Get-ChildItem -Path env:\).Name)})]
[string]$Name

Because it's a Set-* type function, I only wanted to modify existing variables. That's where [ValidateScript()] comes in. It's not as friendly as [ValidateSet()], but it's more dynamic: it checks everything in the Env: drive at the time it runs. I don't want a static set because I want the function to adapt to anyone's environment. I lose tab completion, but I still get the validation I wanted.

The other thing you might notice is that there are two parameter sets. I did that because I wanted the function to support two different behaviors.

The function exists to modify existing environment variables, but it needs to act in one of two ways: either overwrite the existing value, or concatenate the new value onto the existing value, like when adding a directory to the Path. So Set-EnvVariable has two parameter sets: Default, which is for the overwrite behavior, and Concat, which is for the appending behavior.


# Enter separator character, defaults to ';'
[Parameter(ParameterSetName='Concat')]
[ValidateLength(0,1)]
[string]$Separator = ';',

# Set to append to current value
[Parameter(ParameterSetName='Concat')]
[switch]$Concatenate
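Just to illustrate how the two sets are meant to be used, here's a hypothetical call for each behavior (I'm assuming a -Value parameter, which isn't shown in the snippets above):

Set-EnvVariable -Name TEMP -Value 'D:\Temp'                  # Default set: overwrite the existing value
Set-EnvVariable -Name Path -Value 'C:\Tools' -Concatenate    # Concat set: append ';C:\Tools' to the current Path using -Separator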

You'll also notice, if you use it, that the function always prompts for confirmation. It has a big impact: by default it wipes out the value of an existing environment variable, which isn't something to do lightly.


[cmdletbinding(SupportsShouldProcess=$true,
    ConfirmImpact='high',
    DefaultParameterSetName='Default')]
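Because ConfirmImpact is 'High', it meets the default $ConfirmPreference of 'High' and every call prompts. If you're calling it from a script, the usual ShouldProcess common parameters apply; a hypothetical example (again assuming a -Value parameter, and that the function wraps its change in $PSCmdlet.ShouldProcess as SupportsShouldProcess implies):

Set-EnvVariable -Name TEMP -Value 'D:\Temp' -Confirm:$false   # skip the prompt for this call
Set-EnvVariable -Name TEMP -Value 'D:\Temp' -WhatIf           # preview the change without making it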

Anyhow, that's about all there is that's novel in this function. It's cool and I like it a lot; please take the time to download it and give it a rating.


Setting Up a NuGet Feed For Use with OneGet

rjasonmorgan:

Really cool article. I think it's likely a bunch of us will be hosting private NuGet repositories in place of SCCM servers before too long.

Originally posted on Learn Powershell | Achieve More:

I blogged recently about using OneGet to install packages from an available NuGet feed. By default you can access the chocolatey provider, but you can actually build out your own local repo to host packages on for your internal organization. One of my examples of adding the package source and installing a package from the source were done using a local repo that I had built.

Building the local repo didn't take a lot of time and for the most part, it didn't really involve a lot of work to get it up and running. There is a well written blog about it, but it only covers the Visual Studio piece and nothing about the IIS installation (it will actually use IIS Express by default) as well as downloading and installing the NuGet.server package (both of which I will be doing using none other than PowerShell).

Another reason for…



Reading log files, Quickly! – Part 1

Hello again everyone!

Today I wanted to talk about something I've been struggling with for a couple of days (or, more accurately, weeks and months): how to read log files effectively and efficiently. I work for a Managed Services Organization, which means we provide IT management services for clients. These are typically large clients with large, often application-specific, environments. I often find myself setting up custom monitors that need to read through log files for specific entries.

The problem I was facing is that I was reading gigs of logs every few minutes and my queries were having trouble keeping up with the volume being generated. To compound that, I had multiple scripts reading the same log files at different times, which means the same logs were being accessed and read multiple times while looking for slightly different entries.

So today I want to address the first issue I saw around reading log files: What is the quickest, and most memory efficient, way to read log files?

I performed the following tests on a 50 MB log file:

 

Measure-Command -Expression {Get-Content .\logs\somelog.log}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 14
Milliseconds      : 695
Ticks             : 146956836
TotalDays         : 0.000170088930555556
TotalHours        : 0.00408213433333333
TotalMinutes      : 0.24492806
TotalSeconds      : 14.6956836
TotalMilliseconds : 14695.6836

 

Not too bad, but not super fast. This command also drops the file into the pipeline line by line, and it manages to be the least memory-efficient option: it balloons the amount of memory required to several times the size of the file being read, likely because each line becomes its own .NET string object with PowerShell's note properties (PSPath, ReadCount, and so on) attached. That wouldn't be a problem on its own, but when you're trying to read 50-100 files of 50 MB each you can run out of memory quickly.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 100}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 1
Milliseconds      : 282
Ticks             : 12829064
TotalDays         : 1.48484537037037E-05
TotalHours        : 0.000356362888888889
TotalMinutes      : 0.0213817733333333
TotalSeconds      : 1.2829064
TotalMilliseconds : 1282.9064

 

We're starting to see a significant improvement here. This sends 100-line arrays into the pipeline. Definitely better, but not ideal for my purposes.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 1000}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 550
Ticks             : 5508710
TotalDays         : 6.37582175925926E-06
TotalHours        : 0.000153019722222222
TotalMinutes      : 0.00918118333333333
TotalSeconds      : 0.550871
TotalMilliseconds : 550.871

 

Pretty rocking performance-wise. 1000-line arrays are getting created here. I've heard from a number of people that the best-performing value varies with the log file, but generally it's smart to start looking at a -ReadCount between 1000 and 3000.
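If you want to find the sweet spot for your own logs, a quick comparison loop like this does the trick (the log path is a placeholder):

foreach ($rc in 100, 1000, 2000, 3000, 0)
{
    # Time a full read of the file at each -ReadCount setting
    $ms = (Measure-Command { Get-Content .\logs\somelog.log -ReadCount $rc }).TotalMilliseconds
    'ReadCount {0,5}: {1,10:N1} ms' -f $rc, $ms
}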

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 0}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 493
Ticks             : 4939466
TotalDays         : 5.71697453703704E-06
TotalHours        : 0.000137207388888889
TotalMinutes      : 0.00823244333333333
TotalSeconds      : 0.4939466
TotalMilliseconds : 493.9466

 

This is definitely the right area; on my log files this is usually the fastest, but it's still creating a string array. That's not necessarily a bad thing, but it ended up being important for me when I was actually scanning for entries. I'll get to that in the next post.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -raw}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 410
Ticks             : 4106764
TotalDays         : 4.75319907407407E-06
TotalHours        : 0.000114076777777778
TotalMinutes      : 0.00684460666666667
TotalSeconds      : 0.4106764
TotalMilliseconds : 410.6764

 

Before I get to the performance, I just want to talk about the -Raw parameter. It switches Get-Content's behavior so that it reads the entire file as a single string. That may or may not be what you want in your environment, but it's crucial if you're doing a multi-line regex match. More on that later.
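As a quick, hypothetical illustration of that multi-line case (the log path and pattern are made up), the single string returned by -Raw lets a pattern span line breaks:

$log = Get-Content .\logs\somelog.log -Raw
[regex]::Matches($log, '(?ms)^BEGIN TRANSACTION.*?^END TRANSACTION') |
    ForEach-Object { $_.Value }   # each match can span multiple lines of the original file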

I know this looks like it was faster than the last example, but -ReadCount 0 seemed to beat it more often than not. Regardless, both commands were extremely close performance-wise, usually a 50-100 millisecond difference. For me, the choice of one over the other had more to do with my next operation than the read time. What these tests showed me is that when you're using Get-Content it's really important not to accept the default -ReadCount value; or, more accurately, it's important if the files are large and performance matters, like when you're doing a log scrape for monitoring.

Next week I’ll be looking at the performance around actually scanning the log files for relevant data.  Thanks for reading!

 


Giving Type Names to Your Custom Objects

Hello again!

This week I wanted to write about something that I found to be really useful but that I keep forgetting.  Ideally by committing it to a blog I’ll be able to remember it going forward! 

So here's the deal: when you make a custom object, using New-Object, Select-Object, or the more modern [PSCustomObject]@{}, you can capture it and give it your own type name before releasing it back into the pipeline.

 

So that leaves me with two questions, how and why?

I’ll start with Why.

It’s worthwhile to add your own custom types for a bunch of reasons. The two top reasons are custom formatting and object filtering. 

Once you define a type name you can control how the object is displayed and handled by the host. It's really nice when you want to return a detailed object from your query but users only want to see a couple of key properties. I'm often guilty of dropping Select-Object statements at the end of my functions just to clean up my output, which really isn't a good way to handle the situation. Custom types are clean, natively supported by the shell, and they leave you with rich objects to work with when you need them.

Object filtering is the other big advantage. When you write functions or scripts it's handy to define what types of objects you accept. PSObject is obviously a pretty broad type; defining custom types for your objects allows you to limit your inputs to just those types, which is really handy when you're writing custom modules for environment-specific tasks. I've written a module that works with log entries and data for a call center management application called Genesys, and until I buckled down and added custom types I was often getting conflicts when piping those functions together.

Now we get on to the how; luckily for all of us, that's pretty easy.

$entry = [PSCustomObject]@{
    Property1 = $value1
    Property2 = $value2
}

$entry.PSObject.TypeNames.Insert(0,'JasonsObject.CustomType1')

The new type name in this case is JasonsObject.CustomType1. The name is arbitrary, but I like to structure it in a rough MajorType.MinorType.SpecificClass format. You don't need to follow my lead on this; use whatever works well in your environment.
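To show the filtering payoff, here's a minimal sketch (the Show-Entry function and its body are hypothetical) of a function that only accepts objects carrying that type name, via the [PSTypeName()] parameter attribute:

function Show-Entry
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory, ValueFromPipeline)]
        [PSTypeName('JasonsObject.CustomType1')]
        $Entry
    )
    process
    {
        $Entry.Property1   # do something with the typed object
    }
}

$entry | Show-Entry   # objects without the custom type name fail parameter binding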

 

And that’s all I have for this week.  Next up I’ll try and do an article on making the supporting formatting document.

 
