Set-EnvVariable

Hey guys, this is just a quick post to cover a new function I posted up on TechNet:

http://gallery.technet.microsoft.com/Set-Environment-Variable-0e7492a3

There are a couple of neat features in this function. Not in the functionality per se; I'm sure setting an environment variable isn't new to anyone. This function is cool because of some of the surrounding elements. First off, there is the parameter validation:


# Enter the name of the environment variable you want modified
[Parameter(Mandatory,
    ValueFromPipelineByPropertyName,
    ParameterSetName='Default')]
[Parameter(ParameterSetName='Concat')]
[ValidateScript({$_ -in (Get-ChildItem -Path Env:\).Name})]
[string]$Name

So, because it's a Set-* type function, I only wanted to modify existing variables. That's where [ValidateScript()] comes in. It's not as good as [ValidateSet()], but it's more dynamic: it checks everything that's in the Env: drive at the moment it runs. I don't want a static set because I want the function to adapt to anyone's environment. I lose tab completion, but I still get the validation I wanted.
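To see what that validation actually checks, you can run the expression from the attribute by hand. This is just the test itself, not anything from the gallery script:

# The validation block simply asks whether the name exists in the Env: drive right now
'Path' -in (Get-ChildItem -Path Env:\).Name                # True on any Windows box
'NotARealVariable' -in (Get-ChildItem -Path Env:\).Name    # False, so binding would fail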

The other thing you might notice is that there are two parameter sets. I did that because I wanted to support two different behaviors.

The function exists to modify existing environment variables, but it needs to act in one of two ways: either it overwrites the existing value, or it concatenates the new value onto the existing value, as when adding a directory to the Path. So Set-EnvVariable has two parameter sets: Default, which is for the overwrite behavior, and Concat, which is for the appending behavior.


# Enter separator character, defaults to ';'
[Parameter(ParameterSetName='Concat')]
[ValidateLength(0,1)]
[string]$Separator = ';',

# Set to append to current value
[Parameter(ParameterSetName='Concat')]
[switch]$Concatenate

You'll also notice, if you use it, that the function always prompts for confirmation. It has a big impact: by default it wipes out the value of an existing environment variable, which is not something to do lightly.


[CmdletBinding(SupportsShouldProcess=$true,
    ConfirmImpact='High',
    DefaultParameterSetName='Default')]

Anyhow, that's about all there is that's novel in this function. It's cool and I like it a lot; please take the time to download it and give it a rating.
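For reference, here is a rough sketch of how those pieces hang together in one function. This is not the published gallery version; the $Value parameter and the process block are my own assumptions about the shape of the thing, kept minimal on purpose.

function Set-EnvVariable
{
    [CmdletBinding(SupportsShouldProcess=$true,
        ConfirmImpact='High',
        DefaultParameterSetName='Default')]
    param
    (
        # Enter the name of the environment variable you want modified
        [Parameter(Mandatory,
            ValueFromPipelineByPropertyName,
            ParameterSetName='Default')]
        [Parameter(ParameterSetName='Concat')]
        [ValidateScript({$_ -in (Get-ChildItem -Path Env:\).Name})]
        [string]$Name,

        # New value, or the value to append when -Concatenate is used (assumed parameter)
        [Parameter(Mandatory)]
        [string]$Value,

        # Enter separator character, defaults to ';'
        [Parameter(ParameterSetName='Concat')]
        [ValidateLength(0,1)]
        [string]$Separator = ';',

        # Set to append to current value
        [Parameter(ParameterSetName='Concat')]
        [switch]$Concatenate
    )
    process
    {
        $current = (Get-Item -Path "Env:\$Name").Value
        $new = if ($Concatenate) { $current + $Separator + $Value } else { $Value }

        # ConfirmImpact='High' means this prompts unless the caller passes -Confirm:$false
        if ($PSCmdlet.ShouldProcess($Name, "Set value to '$new'"))
        {
            Set-Item -Path "Env:\$Name" -Value $new
        }
    }
}

Called as Set-EnvVariable -Name Path -Value 'C:\Tools' -Concatenate, a sketch like this would prompt before appending C:\Tools to the existing Path.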


Setting Up a NuGet Feed For Use with OneGet

rjasonmorgan:

Really cool article. I think it's likely that a bunch of us will be hosting private NuGet repositories in place of SCCM servers before too long.

Originally posted on Learn Powershell | Achieve More:

I blogged recently about using OneGet to install packages from an available NuGet feed. By default you can access the Chocolatey provider, but you can actually build out your own local repo to host packages for your internal organization. One of my examples of adding a package source and installing a package from that source was done using a local repo that I had built.

Building the local repo didn't take a lot of time and, for the most part, didn't really involve a lot of work to get up and running. There is a well-written blog about it, but it only covers the Visual Studio piece and nothing about the IIS installation (it will actually use IIS Express by default) or about downloading and installing the NuGet.Server package (both of which I will be doing using none other than PowerShell).

Another reason for…

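For anyone who hasn't played with OneGet yet, the kind of commands the excerpt is describing look roughly like this. The source name, URL, and package name are all placeholders, and parameter names have shifted a bit between OneGet preview builds, so treat this as a sketch rather than gospel:

# Register a local NuGet feed as a package source (name and URL are made up)
Register-PackageSource -Name LocalNuGet -Location 'http://localhost/nuget' -ProviderName NuGet -Trusted

# Browse and install from that source
Find-Package -Source LocalNuGet
Install-Package -Name SomePackage -Source LocalNuGet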


Reading log files, Quickly! – Part 1

Hello again everyone!

Today I wanted to talk about something I've been struggling with for a couple of days (or, more accurately, weeks and months): how to read log files effectively and efficiently. I work for a managed services organization, which means we provide IT management services for clients. These are typically large clients with large, often application-specific, environments, and I often find myself setting up custom monitors that need to read through log files for specific entries.

The problem I was facing is that I was reading gigs of logs every few minutes and my queries were having trouble keeping up with the volume of logs being generated. To compound that, I had multiple scripts reading the same log files at different times, which meant the same logs were being accessed and read multiple times while looking for slightly different entries.

So today I want to address the first issue I ran into: what is the quickest, and most memory-efficient, way to read log files?

I performed the following tests on a 50 MB log file:

 

Measure-Command -Expression {Get-Content .\logs\somelog.log}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 14
Milliseconds      : 695
Ticks             : 146956836
TotalDays         : 0.000170088930555556
TotalHours        : 0.00408213433333333
TotalMinutes      : 0.24492806
TotalSeconds      : 14.6956836
TotalMilliseconds : 14695.6836

 

Not too bad, but not super fast. This command also drops the file into the pipeline line by line, and it manages to be the least memory-efficient option. I'm not entirely sure why, but it balloons the amount of memory required to several times the size of the file being read. That wouldn't normally be a problem, but when you're trying to read 50-100 files of 50 MB each you can run out of memory quickly.
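If you want to see the ballooning for yourself, a rough before-and-after check of the working set does the job. The path is a placeholder and the numbers will vary wildly from machine to machine:

# Rough, unscientific look at how much memory one Get-Content call costs
[GC]::Collect()
$before = (Get-Process -Id $PID).WorkingSet64
$lines = Get-Content .\logs\somelog.log
$after = (Get-Process -Id $PID).WorkingSet64
'{0:N0} MB of additional working set' -f (($after - $before) / 1MB)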

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 100}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 1
Milliseconds      : 282
Ticks             : 12829064
TotalDays         : 1.48484537037037E-05
TotalHours        : 0.000356362888888889
TotalMinutes      : 0.0213817733333333
TotalSeconds      : 1.2829064
TotalMilliseconds : 1282.9064

 

We're starting to see a significant improvement here. This sends arrays of 100 lines into the pipeline. Definitely better, but not ideal for my purposes.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 1000}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 550
Ticks             : 5508710
TotalDays         : 6.37582175925926E-06
TotalHours        : 0.000153019722222222
TotalMinutes      : 0.00918118333333333
TotalSeconds      : 0.550871
TotalMilliseconds : 550.871

 

Pretty rocking performance-wise; arrays of 1000 lines are getting created here. I've heard from a number of people that the ideal value varies with the log file, but generally a ReadCount somewhere between 1000 and 3000 is a sensible place to start looking.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -readcount 0}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 493
Ticks             : 4939466
TotalDays         : 5.71697453703704E-06
TotalHours        : 0.000137207388888889
TotalMinutes      : 0.00823244333333333
TotalSeconds      : 0.4939466
TotalMilliseconds : 493.9466

 

This is definitely the right area; with -ReadCount 0 the whole file comes down the pipeline as a single array, and on my log files this is usually the fastest option. It's still creating a string array, though, which isn't necessarily a bad thing but ended up being important for me when I was actually scanning for entries. I'll get to that in the next post.

 

Measure-Command -Expression {Get-Content .\logs\somelog.log -raw}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 410
Ticks             : 4106764
TotalDays         : 4.75319907407407E-06
TotalHours        : 0.000114076777777778
TotalMinutes      : 0.00684460666666667
TotalSeconds      : 0.4106764
TotalMilliseconds : 410.6764

 

Before I get to the performance I just want to talk about the -Raw parameter. It switches Get-Content's behavior so that it reads the entire file as a single string. That may or may not be what you want in your environment, but it's crucial if you're doing a multi-line regex match. More on that later.
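As a quick, hedged illustration of that last point (the ERROR/END markers here are invented for the example, not taken from any real log format):

# -Raw hands back one big string, so a single pattern can span line breaks
$text = Get-Content .\logs\somelog.log -Raw

# (?s) lets '.' match newline characters
if ($text -match '(?s)ERROR.*?END OF ERROR') {
    $Matches[0]
}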

I know this looks like it was faster than the last example, but -ReadCount 0 seemed to beat it more often than not. Regardless, the two commands were extremely close performance-wise, usually within 50-100 milliseconds of each other. For me the choice of one over the other had more to do with my next operation than with the read time. What these tests showed me is that when you're using Get-Content it is really important not to accept the default ReadCount value; or, more accurately, it's important when the files are large and performance matters, like when you're doing a log scrape for monitoring.
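If you want to rerun this comparison against your own logs, a small loop keeps it honest; the path is a placeholder:

# Time several ReadCount settings against the same file
$logPath = '.\logs\somelog.log'

foreach ($readCount in 1, 100, 1000, 0) {
    $ms = (Measure-Command { Get-Content -Path $logPath -ReadCount $readCount }).TotalMilliseconds
    'ReadCount {0,4} : {1,9:N1} ms' -f $readCount, $ms
}

# -Raw has no ReadCount, so time it separately
$rawMs = (Measure-Command { Get-Content -Path $logPath -Raw }).TotalMilliseconds
'{0,14} : {1,9:N1} ms' -f '-Raw', $rawMs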

Next week I'll be looking at the performance of actually scanning the log files for relevant data. Thanks for reading!

 


Giving Type Names to Your Custom Objects

Hello again!

This week I wanted to write about something that I found to be really useful but that I keep forgetting.  Ideally by committing it to a blog I’ll be able to remember it going forward! 

So here's the deal: when you make a custom object, whether with New-Object, Select-Object, or the more modern [PSCustomObject]@{}, you can capture it and give it your own type name before releasing it back into the pipeline.

 

So that leaves me with two questions: how and why?

I’ll start with Why.

It’s worthwhile to add your own custom types for a bunch of reasons. The two top reasons are custom formatting and object filtering. 

Once you define a type you can control how it is displayed and handled by the host. That's really nice when you want your query to return a detailed object but users only want to see a couple of key properties. I'm often guilty of dropping Select-Object statements at the end of my functions just to clean up my output, and that is really not a good way to handle the situation. Custom type names are clean, natively supported by the shell, and they leave you with rich objects to work with when you need them.

Object filtering is the other big advantage. When you write functions or scripts it's handy to define what types of objects you accept, and PSObject is obviously a pretty broad type. Defining custom types for your objects allows you to limit your inputs to just those types, which is really handy when you're writing custom modules for environment-specific tasks. I've written a module that works with log entries and data for a call center management application called Genesys, and until I buckled down and added custom types I was often getting conflicts when piping those functions together.

Now we get to the how, and luckily for all of us that's pretty easy.

# $value1 and $value2 are placeholders for whatever data you're returning
$entry = [PSCustomObject]@{
    Property1 = $value1
    Property2 = $value2
}

# Insert the custom type name at the front of the object's type name list
$entry.PSObject.TypeNames.Insert(0,'JasonsObject.CustomType1')

The new type name in this case is JasonsObject.CustomType1. The name is arbitrary, but I like to structure mine in a rough MajorType.MinorType.SpecificClass format. You don't need to follow my lead on this; use whatever works well in your environment.
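To tie the type name back to the two reasons above, here is a hedged sketch of what you can do with it once it's in place. Test-JasonsObject is a made-up function name, and the default display set assumes the Property1/Property2 names from the example:

# Give the custom type a default view of just the key properties
Update-TypeData -TypeName 'JasonsObject.CustomType1' -DefaultDisplayPropertySet Property1, Property2 -Force

# Accept only objects that carry the custom type name
function Test-JasonsObject
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory, ValueFromPipeline)]
        [PSTypeName('JasonsObject.CustomType1')]
        $InputObject
    )
    process
    {
        $InputObject
    }
}

$entry | Test-JasonsObject                               # passes the type check
[PSCustomObject]@{Property1 = 1} | Test-JasonsObject     # fails parameter binding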

 

And that's all I have for this week. Next up I'll try to do an article on making the supporting formatting document.

 


Teamwork in PowerShell – ScriptingGames Follow up

Hello everyone,

Today I want to write one of the articles I promised I'd do as a follow-up to the Scripting Games.

The tasks this time around were a lot harder than I had anticipated when I signed up. Honestly I nearly withdrew from the competition when I saw the practice round task. I had no idea how I would go about approaching something of that scale. Anyway, no one is reading this to hear about my feelings! This article is going to cover what our team did differently and why I think it mattered.

The Kitton_Mittons took a less democratic approach than most teams. Instead of each person contributing equally, we started out with the idea that I would be the team lead, I would assign tasks to people as they were available, and I would ultimately be responsible for all the content we published. The purpose of our team was for me to work with some of my friends and help them learn PowerShell, so from the outset we had a defined hierarchy; I also personally believe that teams work best when there is a person in charge. Everyone had their own assignments, and I was available to mentor any team member while they were working on their functions, but before anything was submitted I would personally vet all the content and make any changes I thought were required.

As the games evolved, so did our team dynamic; some rounds were really rough, particularly round 2, and some were a lot easier. We also got really lucky: at the PowerShell Saturday 007 event, just after round 3 finished, I met @Sred13, who had actually won the Iron Scripter event and wanted to join our team. We added him and he helped out in the last round. Even after we added another experienced PowerShell professional we stuck with our original dynamic. It worked out well, and even though our scores don't reflect it I think we did our best work in round 4.

I guess the message I want to get across is that, regardless of how many experts you have on a team, the internal dynamic matters. There were lots of good teams this year, and plenty of people I know are better than me at PowerShell. We had a team with a defined hierarchy and we had a lot of good luck with our judges, and I think both things helped us win the games this year. I'm not trying to say we didn't deserve the win, but I think everyone who competed at the top level in these games is aware that, while the judging was good, the scores were heavily influenced by how much attention the judges were paying.

Alright! That's out of the way, and now you can expect any follow-up posts to focus on technical topics and concepts. If anyone is interested in reading more about the philosophical aspects of PowerShell and administration, check out this great blog from @stevenmurawski:

http://stevenmurawski.com/

Also be sure to check out an outstanding System Center blog from @Sred13:

http://foxdeploy.com/

Thank you for reading!


2014 Winter Scripting Games

Hey guys,

Just a quick note to anyone who reads my blog but doesn't pay attention to PowerShell.org, looking at you, Mom... We won! The Kitton Mittons took first place! That's great news for me personally, but it's also great news for any regular readers: I'll be blogging about some of the things I learned during the games and some of the things I thought were particularly cool about working with a team. Stay tuned!


Arrays and generic collections in PowerShell

rjasonmorgan:

Great article on working with Collections.

Originally posted on The Powershell Workbench:

As I noted in my last post, PowerShell likes to make arrays.

But it's been noted that using arrays can be horribly inefficient when it comes to adding or removing items, and collections work much better for those operations. The method you normally see for getting a collection instead of an array is to instantiate a particular collection type and then iterate through some objects, using the collection's .Add() method to populate it.

In a previous post about script blocks I noted that the Invoke() method of a script block makes the script block return a collection.


If you research that collection type, it’s basically just a generic collection of PS objects. Since it’s a generic collection, it can contain any type of objects so the methods it offers…

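(A small illustration of the idea in the excerpt, added here rather than taken from the original post: a generic collection of PSObjects grows cheaply with .Add(), and a script block's Invoke() hands you one for free.)

# Build a generic collection of PSObjects and add to it cheaply
$list = New-Object 'System.Collections.Generic.List[psobject]'
1..5 | ForEach-Object { $list.Add([pscustomobject]@{ Number = $_ }) }
$list.Count    # 5

# Invoke() on a script block also returns a collection rather than an array
$fromInvoke = { Get-ChildItem }.Invoke()
$fromInvoke.GetType().Name    # Collection`1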
