Reading the Event Log with Windows PowerShell

This post is part of the #PSBlogWeek PowerShell blogging series. #PSBlogWeek is a regular event that anyone interested in writing great content about PowerShell is welcome to volunteer for. The purpose is to pool our collective PowerShell knowledge over a 5-day period and write about a topic that anyone using PowerShell may benefit from. #PSBlogWeek is a Twitter hashtag, so feel free to stay up to date on the topic by following #PSBlogWeek on Twitter. For more information on #PSBlogWeek, or if you’d like to volunteer for future sessions, contact Adam Bertram (@adbertram) on Twitter.

Once you’re done getting schooled on everything this post has to offer, head on over to the powershell.org announcement for links to the other four past and upcoming #PSBlogWeek articles this week!


Whether it’s an error report, a warning, or just an informational log, one of the most common places for Windows to write logging information is to the event logs. There are tons of reasons to open up Event Viewer and peruse the event logs. Some of the things I’ve had to do recently include:

  • Checking why a service failed to start at boot time.
  • Finding the last time a user logged into the computer and who it was.
  • Checking for errors after an unexpected restart.

Although Event Viewer is a handy tool, given today’s IT landscape, we should always be looking for more efficient ways to consume data sources like the event logs. That’s why we’re going to take a look at the Get-WinEvent cmdlet in PowerShell. This cmdlet is a powerful and flexible way of pulling data out of the event logs, both in interactive sessions and in scripts.

Get-WinEvent: The Basics

Before we get started, don’t forget to launch your PowerShell session as Administrator, since some of the logs (like Security) won’t be accessible otherwise.

Like any good PowerShell cmdlet, Get-WinEvent has excellent help, including fourteen different examples. We’re going to review a lot of the important points, but you can always use Get-Help Get-WinEvent -Online to bring the full help up in a browser window.

Without any parameters, Get-WinEvent returns information about every single event in every single event log. To get to the specific events you want, you need to pass one or more parameters to filter the output. Here are the most common parameters of Get-WinEvent and what they do:

  • LogName – Filters events in the specified log (think Application, Security, System, etc.).
  • ProviderName – Filters events created by the specified provider (this is the Source column in Event Viewer).
  • MaxEvents – Limits the number of events returned.
  • Oldest – Sorts the events returned so that the oldest ones are first. By default, the newest events are first.

So, by using these basic parameters, we can build commands that do things like:

Get the last 10 events from the Application log.

Get the last 5 events logged by Outlook.

Get the oldest 50 events from the Application log.
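The three commands described above can be sketched like this (the Outlook provider name is an assumption; confirm it on your own system with Get-WinEvent -ListProvider *Outlook*):

```powershell
# Get the last 10 events from the Application log
Get-WinEvent -LogName Application -MaxEvents 10

# Get the last 5 events logged by Outlook (provider name may vary by version)
Get-WinEvent -ProviderName Outlook -MaxEvents 5

# Get the oldest 50 events from the Application log
Get-WinEvent -LogName Application -MaxEvents 50 -Oldest
```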

Discovering

The event logs have grown up quite a bit since the days when Application, Security, and System were the only logs to look through. Now there are folders full of logs. How are you going to know how to reference those logs when you’re using Get-WinEvent? You use the ListLog and ListProvider parameters.

For example, consider the log that PowerShell Desired State Configuration (DSC) uses. In Event Viewer, that log is located under:

Applications and Services Logs\Microsoft\Windows\Desired State Configuration\Operational

If I pass that name to the LogName parameter of Get-WinEvent, I get an error. To find the name I need to use, I run the command:

EventLog002

and see that there are a couple of options. Although there is one that mentions Desired State Configuration, it doesn’t sound like what I’m looking for. Let’s see if the name uses the abbreviation DSC.

EventLog003

That looks like a much better fit. To be sure, you can always check the properties of a log in Event Viewer and look at the file name.

EventLog004

Bingo.

You can pass * to the ListLog or ListProvider parameters to get all logs or all providers.
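As a sketch, the discovery process walked through above looks like this; the final log name is what the DSC Operational log resolves to on my systems, so confirm it on yours before relying on it:

```powershell
# Search for logs mentioning Desired State Configuration by name
Get-WinEvent -ListLog *Desired* | Select-Object LogName

# Try the abbreviation instead
Get-WinEvent -ListLog *DSC* | Select-Object LogName

# Once you've found the right name, query the log directly
Get-WinEvent -LogName 'Microsoft-Windows-DSC/Operational' -MaxEvents 5
```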

Filtering

The commonly used parameters are great for doing basic filtering, but if you’re doing anything more complicated than a basic health check, you’re going to need a more powerful search. That’s where the FilterHashtable parameter comes in.

First, a quick review of hash table syntax. A hash table can be specified on one line or on multiple lines. Both statements below produce the same hash table.
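Both forms below build the same hash table:

```powershell
# One line: key/value pairs separated by semicolons
$filter = @{ LogName = 'Application'; Level = 2 }

# Multiple lines: one key/value pair per line, no semicolons needed
$filter = @{
    LogName = 'Application'
    Level   = 2
}
```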

When passing a hash table to the FilterHashtable parameter, here are some of the keys you may find useful.

  • LogName – Same as the LogName parameter
  • ProviderName – Same as the ProviderName parameter
  • ID – Allows you to filter on the Event ID
  • Level – Allows you to filter on the severity of the entry (You have to pass a number: 4 = Informational, 3 = Warning, 2 = Error)
  • StartTime – Allows you to filter out events before a certain Date/Time
  • EndTime – Allows you to filter out events after a certain Date/Time (You can use StartTime and EndTime together)
  • UserID – Allows you to filter on the user that created the event

If this does not provide the level of granularity that you need, don’t forget that you can always pipe the results through Where-Object to further refine your results:
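For example, FilterHashtable can’t match on text inside the message, but Where-Object can; the service name here is just an illustration:

```powershell
# Narrow a FilterHashtable result down to events whose message mentions a service
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 2 } |
    Where-Object { $_.Message -like '*Spooler*' }
```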

Extracting Meaning

Running one of the commands above might produce output that looks like this:

EventLog001

but what Get-WinEvent actually returns are objects of type [System.Diagnostics.Eventing.Reader.EventLogRecord]. PowerShell is being nice and picking just four common properties to show us, but there are more properties that we can access by storing the results to a variable or using a cmdlet like Select-Object, Format-Table, Format-List, or even Out-GridView.

Here are some of the properties you might find useful:

  • Id – Event ID
  • Level – Numeric representation of the event level
  • LevelDisplayName – Event level (Information, Error, Warning, etc.)
  • LogName – Log name (Application, Security, System, etc.)
  • MachineName – Name of the computer the event is from
  • Message – Full message of the event
  • ProviderName – Name of the provider (source) that wrote the event

You can also dig into the message data attached to the event. Event log messages are defined by the Event ID, so all events with the same Event ID have the same basic message. Messages can also have placeholders that get filled in with specific values for each instance of that Event ID. These fill-in-the-blank values are called “insertion strings,” and they can be very useful.

For example, let’s say I grabbed an event out of the Application log and stored it in a variable $x. If I just examine $x, I can see the full message of the event is “The session ‘183c457c-733c-445d-b5d6-f04fc9623c8b’ was disconnected”.

EventLog005

This is where things get interesting. Since Windows Vista, event logs have been stored in XML format. If you run (Get-WinEvent -ListLog Application).LogFilePath you’ll see the .evtx extension on the file. The EventLogRecord objects that Get-WinEvent returns have a ToXml method that I can use to get to the XML underneath the object; this is where the insertion string data is stored.

By converting the event to XML and looking at the insertion strings, I can get direct access to the value that was inserted into the message without having to parse the full message and extract it. Here is the code to do that:
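A sketch of that conversion, assuming the event from the example is stored in $x:

```powershell
# $x holds a single EventLogRecord, e.g.:
# $x = Get-WinEvent -LogName Application -MaxEvents 1

# ToXml returns a string; the [xml] accelerator parses it into an XML object
$xml = [xml]$x.ToXml()

# The insertion strings live under Event/EventData/Data
$xml.Event.EventData.Data
```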

The ToXml method returns a string containing the XML, and putting the [xml] accelerator in front of it tells PowerShell to read that string and create an XML object from it. From there, I navigate the XML hierarchy to the place where the insertion string data is stored.

EventLog006

The messages for different Event IDs can have different numbers of insertion strings, and you may need to explore a little to figure out exactly how to pull the specific piece of data you’re looking for, but all events for a given Event ID will be consistent. The example above uses a simple event on purpose, but one example of how I’ve used this in my automation is when pulling logon events from the Security log. Event 4624, which records a successful logon, contains insertion strings for which user is logging on, what domain they belong to, what kind of authentication they used, and more.
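As a sketch, pulling the account name out of a 4624 logon event looks like this; the Data node names (like TargetUserName) can be confirmed by inspecting the XML of one event on your own system:

```powershell
$logon = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; ID = 4624 } -MaxEvents 1
$xml = [xml]$logon.ToXml()

# Each Data node in a 4624 event is named; filter for the one holding the user name
($xml.Event.EventData.Data | Where-Object { $_.Name -eq 'TargetUserName' }).'#text'
```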

Remote Computers

All the cool stuff we just covered above? You can do all of that on remote targets as well. There are two parameters you can use to have Get-WinEvent get values from a remote computer:

  • ComputerName – Lets you specify which computer to connect to remotely
  • Credential – Lets you specify a credential to use when connecting, in case your current session is not running under an account with the appropriate permissions.
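A sketch of a remote query (SERVER01 is a placeholder computer name):

```powershell
# Prompt for credentials, then query the System log on a remote computer
$cred = Get-Credential
Get-WinEvent -ComputerName SERVER01 -Credential $cred -LogName System -MaxEvents 10
```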

Get-WinEvent does not use PowerShell Remoting to get remote data; it talks to the Event Log service directly. This requires TCP port 135 and a dynamically assigned port above 1024 to be open in the firewall.

Putting It All Together

There is more to discover in using the Get-WinEvent cmdlet and the data that it returns but, hopefully, this introduction serves as a thorough foundation for accessing the event logs from PowerShell. These techniques for discovering, filtering, and extracting meaning from the event logs can be applied in an interactive PowerShell session or an automated script. They can also be used to read the event logs on your local machine or a remote target. It’s hard to know what data you will be searching for to meet your requirements until you are faced with them, but one thing is for sure: the event logs probably have at least some of the data you need, and now you know how to get it!

References

Internationalization with Import-LocalizedData: Part 2

When I first sat down to write this post, this one came out instead. If you’re not familiar with the Import-LocalizedData Cmdlet, you might want to start there.

I want to talk some more about Import-LocalizedData, but more specifically the way it works with the automatic variable $PSUICulture and some interesting behavior I observed there. The about_Script_Internationalization help topic details how both the Cmdlet and automatic variable were added in PowerShell version 2.0 to strengthen support for internationalization. Basically, $PSUICulture will contain the region code for whatever language Windows has been configured to display itself in, and this is the language that Import-LocalizedData will look for by default. This is very powerful because now my scripts have zero code in them that’s trying to figure out which translation of the resource file they should use.

As I mentioned in my previous post, I was working on a PowerShell project that involved internationalization, which is how I found myself using Import-LocalizedData in the first place. Another part of the project was configuring virtual machines as they were provisioned, including updating the Regional Settings based on the users’ preferences.

Regional Settings - Keyboards and Languages Tab

The client had specified which settings on the Regional Settings applet they would like to set based on the culture, and had wanted to leave some of the system level settings set to the default American English. I had to make sure that wouldn’t affect my scripts’ ability to present properly localized text, so I did some testing to make sure I knew specifically which setting would get $PSUICulture to reflect the user’s language. I found that the Display Language setting on the Keyboards and Languages tab is what did it, which was great because it is not one of the system level settings.

It’s important to note that you don’t normally see the dropdown to select a display language. It only appears once you’ve installed one or more additional languages. I was able to access the necessary language files through my MSDN subscription.

Fast forward to a couple of weeks later, and we had run into a problem. A Japanese user had provisioned a virtual machine for himself and selected Japanese as his preferred language, but my scripts were still displaying text in English. I grabbed the Japanese language files from MSDN and installed them on my test machine. Sure enough, when I set the display language to Japanese, PowerShell was still in English. If I changed everything in Regional Settings to Japanese, however, it worked properly.

It took me a number of tries to identify the exact conditions required to get the correct value of ja-JP in $PSUICulture. It turned out to be that both the Display Language and the System Locale had to be set to Japanese in order for it to work.

Regional Settings - Administrative Tab

I thought I was losing my mind; I had tested this so many times, and I was always able to get the correct results just by changing the Display Language. I started installing more languages on my test machine and testing the settings with them. Some of them worked just on Display Language, but some also required the System Locale. Finally I realized what was different about the languages that behaved differently – the glyphs. Languages like Japanese, Russian, and Thai don’t use the Latin alphabet that languages like English and Spanish use. These were the ones that required the System Locale to be set in addition to the Display Language for PowerShell to correctly detect the language.

The System Locale was one of the system level settings that the client did not want to change. Luckily the region code for the language the user had selected got stored locally on the machine. I was able to pass that into the -UICulture parameter of Import-LocalizedData and explicitly tell the Cmdlet which language to use.

PowerShell is a great and powerful tool for IT Administrators and Software Engineers alike. Its strength, flexibility, and extensibility still regularly impress me. Be that as it may, experiences like this one remind me that, just like any language, there are always gotchas and quirks to be worked around. Happy scripting!

Internationalization with Import-LocalizedData

Recently I was working on a PowerShell project that involved internationalization. When researching the best approach, I learned about the Import-LocalizedData Cmdlet and it made it incredibly easy to support internationalization in my scripts.

Before I really get into things, let me clarify a couple similar but distinct definitions (this is mostly for my own benefit, because I always get them mixed up):

  • Localization is the process of translating and adapting a product’s strings and UI for a new language.
  • Globalization is the process of preparing a product for localization. This is especially relevant for existing products that were initially built without internationalization in mind.
  • Internationalization is the parent term for both globalization and localization.

There are actually quite a few definitions for these terms, but this is how Microsoft defines them. Since we’re talking about PowerShell, a Microsoft product, these are the definitions I’m going to stick with. So you could say that when you are working with internationalization you globalize your application so that it can be localized into multiple languages.

To demonstrate this process, let’s use a simple example. I’ve got a script called PSUICultureExample.ps1 that looks like this:
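The original listing wasn’t preserved here, but a minimal version matching the description (a script that pops up a hello-world message box) would look something like this:

```powershell
# PSUICultureExample.ps1 – hard-coded strings, not yet globalized
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.MessageBox]::Show('Hello, world!', 'PSUICultureExample')
```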

…and it produces this message box:

The hello world message box in English.

Pretty straightforward, but it needs to be globalized.

Globalizing the Script

To globalize the script, we need to extract the strings that are shown to the user into an external file. We’ll create a subfolder called Localized (you can call it whatever you want) and add a file called PSUICultureExample.psd1 that looks like this:
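The key names below (MessageText, MessageTitle) are illustrative; use whatever keys match the strings in your own script:

```powershell
# Localized\PSUICultureExample.psd1 – fallback (English) strings
ConvertFrom-StringData @'
MessageText = Hello, world!
MessageTitle = PSUICultureExample
'@
```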

This file uses the ConvertFrom-StringData Cmdlet and a here-string to create a hashtable containing the strings we need as key/value pairs. You can write any code you’d like that returns a hashtable; I just happen to think this method is very clean to work with.

The next thing we have to do is load this data into our actual script and use the hashtable data instead of the hard-coded strings. To import the data, we’ll use this command:
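The import command, with the variable name Strings chosen for illustration:

```powershell
# Load the correct .psd1 from the Localized folder into $Strings
Import-LocalizedData -BindingVariable Strings -BaseDirectory (Join-Path $PSScriptRoot 'Localized')
```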

The Import-LocalizedData Cmdlet, by default, will look in the same folder as the script for a psd1 file with the same name. We already named our file PSUICultureExample.psd1, so we’re alright there, but the file isn’t in the same folder as our script, so we need to specify that path using the -BaseDirectory parameter. You can use relative paths in this parameter, but they are relative to your PowerShell session’s current working directory, not the script file’s directory. We really should use the full path to be sure we will always point to the right place. A quick Join-Path with the automatic variable $PSScriptRoot gives us the full path to the Localized folder we created.

(If for some reason you’re still in PowerShell 2.0, you won’t have $PSScriptRoot. Try this instead.)
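A common PowerShell 2.0 workaround is to derive the script directory yourself:

```powershell
# PowerShell 2.0 equivalent of $PSScriptRoot inside a script file
$ScriptRoot = Split-Path -Parent $MyInvocation.MyCommand.Path
```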

Now that we have our string data loaded, we just have to swap out the hard-coded strings and our PSUICultureExample.ps1 file looks like this:
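A sketch of the globalized script, using the illustrative key names from the .psd1 example:

```powershell
# PSUICultureExample.ps1 – strings now come from the localized data file
Import-LocalizedData -BindingVariable Strings -BaseDirectory (Join-Path $PSScriptRoot 'Localized')
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.MessageBox]::Show($Strings.MessageText, $Strings.MessageTitle)
```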

That’s all there is to globalizing our script!

Localizing the Script

The next step, localizing the script, is very easy because of the way we implemented our globalization using Import-LocalizedData. This Cmdlet uses an automatic variable called $PSUICulture (Now you see why I named the script that, right?) to determine what language it should try to display to the user, and it will attempt to locate the correct version of our psd1 file by looking in subfolders named for the appropriate locale. For example, let’s say I install the Spanish Language Pack and change all of my settings to Spanish (Spain). The $PSUICulture automatic variable will contain es-ES instead of en-US like it had previously. So what happens when I run my script now?

Yep, everything is still in English. Why? Because I haven’t properly localized the script for Spanish yet. To localize for Spanish, we need to create a subfolder in the Localized folder called es-ES and make a copy of the PSUICultureExample.psd1 file there that contains the same hashtable keys but Spanish values instead of English ones. I took Spanish in high school, but I’m going to let Google Translate handle this one:
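The Spanish copy keeps the same keys and swaps only the values:

```powershell
# Localized\es-ES\PSUICultureExample.psd1 – Spanish strings, same keys
ConvertFrom-StringData @'
MessageText = ¡Hola, mundo!
MessageTitle = PSUICultureExample
'@
```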

…and now when I run it, it looks like this:

The hello world message box in Spanish

¡Bueno! Now everything works for our users in Spain. Our two-line script will automatically display Spanish text for users that have Windows configured to display the UI in Spanish.

Summary

Example Folder Structure

This is what the folder structure of the full example looks like.
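Reconstructed from the description in these posts, the layout is roughly:

```
PSUICultureExample.ps1
Localized\
    PSUICultureExample.psd1        (English fallback)
    es-ES\
        PSUICultureExample.psd1    (Spanish)
```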

Globalization and Localization are both easy in PowerShell thanks to Import-LocalizedData. It’s a powerful Cmdlet that takes a lot of the logic of picking a language and loading the correct string data out of your hands. If you’re building a script, even if you’re not planning to localize it, I would recommend globalizing your string data. It’s a good practice to get into, and you never know if you might need to localize down the line if the script becomes popular in the community.

There are two last points I want to bring up. The first is the reason why the English strings file isn’t in an en-US folder. This would have worked just fine, but by leaving a file in the base folder, we are giving Import-LocalizedData a fallback to use if it can’t find a matching psd1 file for the UI culture specified in $PSUICulture.

The second thing I want to add is that Import-LocalizedData will not merge a language-specific file with the fallback file. If you define five values in the psd1 file in your base directory but only four of those in the es-ES version of the file, your Spanish users will not see the English string for that fifth key; it will simply be null.

I’ve posted the full example on a GitHub repository so you can easily try it yourself. This is also a great opportunity to become more familiar with using Git and GitHub. My repository only has the English and Spanish translations; I would love to receive some pull requests to add more languages!

Update: I originally started writing about Import-LocalizedData so I could share an issue I ran into while working with it. I’ve written that story in a follow-up post.

Conditional E-mails from SQL Server Reporting Services

SQL Server Reporting Services (SSRS) is a great reporting tool, especially its ability to schedule reports to be sent out via e-mail. But one thing nobody wants is useless e-mails flooding their inbox. Some of the reports I’ve built using SSRS list problem data that needs to be cleaned up. I don’t want those reports getting sent out every day, even when they’re empty. I’m going to show you the method I used to get SSRS to only send a report under the correct circumstances.

Note: This method is not officially supported by Microsoft and involves directly modifying SQL Server Agent jobs that are created by SSRS. Also, if your version of SSRS supports data-driven subscriptions there’s a better way to do this, which you can read about here.

1. Schedule the report

First, create and schedule the report. SSRS creates a SQL Server Agent job to run the scheduled report, and you are going to modify that job.

2. Find the job id

SSRS uses a GUID as the name of the SQL Server Agent job it creates for each scheduled report. You need to identify which job is the one you need.

To do this, connect to the server hosting the SSRS database in SQL Server Management Studio. The default database name for SSRS is ReportServer. The following query will return a list of all scheduled reports by name.
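A sketch of that query; the table and column names are from the default ReportServer schema and may differ between SSRS versions:

```sql
-- List scheduled reports with the GUID that names their SQL Server Agent job
SELECT c.Name AS ReportName,
       rs.ScheduleID,      -- this GUID is the Agent job name
       s.Description
FROM   dbo.ReportSchedule AS rs
JOIN   dbo.Subscriptions  AS s ON s.SubscriptionID = rs.SubscriptionID
JOIN   dbo.[Catalog]      AS c ON c.ItemID = rs.ReportID;
```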

The ScheduleID field for each row is the job name used for each scheduled report.

3. Edit the job in SQL Server Agent

Now that you know the name of the SQL Server Agent job you need to modify, find it in the tree view in Management Studio. Right-click and bring up the job’s properties, then go to the Steps section. There will be one step there named with the same GUID and “_step_1” at the end. Edit that step, and the command text will look something like this:
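The step body is a single call to the AddEvent stored procedure, roughly like this (the all-zeros GUID is a placeholder for the report’s actual SubscriptionID):

```sql
exec [ReportServer].dbo.AddEvent
     @EventType = N'TimedSubscription',
     @EventData = N'00000000-0000-0000-0000-000000000000';  -- the SubscriptionID
```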

The GUID listed for @EventData should match the SubscriptionID listed for the report in the second step.

Modify the T-SQL in this step like this to add your condition:
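The idea is to wrap the AddEvent call in a condition so the subscription is only queued when there is something to report. Database, table, and condition below are placeholders for your own:

```sql
-- Only queue the subscription when the condition finds matching rows
IF EXISTS (SELECT 1 FROM YourDatabase.dbo.YourTable WHERE /* your condition */ 1 = 1)
BEGIN
    exec [ReportServer].dbo.AddEvent
         @EventType = N'TimedSubscription',
         @EventData = N'00000000-0000-0000-0000-000000000000';  -- the SubscriptionID
END
```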

Save the changes and you’re almost done. There is only one more thing to check.

4. Check Security Settings

The SQL Server Agent runs its jobs as the NT AUTHORITY\NetworkService account, so you will need to make sure that account has rights to query your database. The easiest way to do this is to modify the security for the account at the server level and configure a user mapping to your database granting the db_datareader role. Exactly how to do this differs depending on what version of SQL Server you’re using, so it’s probably better that I don’t try to document the specifics.

At this point, you are all set. The job will still run according to its original schedule, but will only send a report if your condition evaluates to true.

Warning: One important caveat to this method is that SSRS has no idea that you’ve edited the SQL Server Agent job, so if anyone modifies the report schedule via SSRS the changes you made will be overwritten.

Example

I had a table that was populated nightly by a script that scanned servers on the network. Any time a new server showed up on the network I would need to manually assign it to a category. I created a report that would list all the servers that did not yet have a category, and set it up so that it would only send when servers like that existed. Here is what the code looked like:
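The original listing wasn’t preserved here, but based on the description it amounted to something like this (the database and table names are hypothetical stand-ins for the server inventory table):

```sql
-- Send the report only when uncategorized servers exist
IF EXISTS (SELECT 1 FROM Inventory.dbo.Servers WHERE Category IS NULL)
BEGIN
    exec [ReportServer].dbo.AddEvent
         @EventType = N'TimedSubscription',
         @EventData = N'00000000-0000-0000-0000-000000000000';  -- the SubscriptionID
END
```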

This method for conditionally sending SSRS reports lets you create automated alerts to monitor system health and stability. It’s a great trick for any DevOps engineer to have in his or her tool belt.

What PowerShell Summit Showed Me

PowerShell Summit was a blast. I learned some great new skills and connected with a ton of folks who are as passionate about technology as I am. But even though PowerShell Summit is a conference about Windows PowerShell, attending it opened my eyes to some other important lessons that apply to everyone in IT.

Focus On Your Career, Not Your Job

Whether you love your job or hate it, you should always be learning new technologies based on what is best for you, not your company. Learn the things that make you a valuable professional and you’ll be in a good position no matter what happens.

Does that mean you should ignore the skills that you need to do your actual job? Absolutely not. Those are important too as long as you intend to keep that job. But if your company’s direction and your personal growth path don’t line up, it’s up to you to invest your own time and money to make it happen.

Keep An Eye On the Direction Of the Industry

To focus on the skills that make you a valuable professional, you have to know what skills are (and will be) the most valuable to have. Not everyone has the vision to see where the industry is going long term. If that’s the case for you (like it is for me), you’ve got to find the people who have that vision and get tuned in. Stay up to date on their blog, follow them on Twitter, and look for opportunities to hear them speak at conferences or on podcasts.

Many of the speakers at PowerShell Summit talked about what is coming in the future of IT. Some of them were among the Microsoft employees building that future. Listening to them blew my mind. What is coming, especially with Nano Server and Windows Containers, will be shaping the future of our industry in big ways. I will be listening to what they have to say.

Participate In the Community, and Give Back

Microsoft has open sourced a ton of code on GitHub, and there is a thriving community contributing to those projects. IT Professionals have a growing voice that can shape the tools that we will be using for years to come. If you want to stay relevant and be involved in the future of IT, you have to get involved in that conversation.

Now I know not everyone can code, and you might feel like that means you can’t get involved, but you’re wrong. I’m a guy that can code, but I know absolutely nothing about Active Directory, for example. That’s where you can help. You can provide domain expertise to people who don’t have it. You can participate in user groups and share your knowledge. You can exercise the technical previews that Microsoft releases and provide feedback. Anyone can find a role, and all of these things have to happen for us to collectively succeed in growing the industry.

I’m so excited to see what’s going to happen in IT in the next five years. I’ll be studying hard to stay up to date, and working to find a place in the community where I can give back and help others. How about you?