EventLogClearer v1.0.1.22

Update on 11/19/2012: Check out my updated version here.

I've been playing with Visual Studio 2012 the past couple of days, and this is the first full application I've written with it. I was originally going to try to make a Windows 8 modern-style application, but that's still a little too foreign to me, so I gave up and went back to a regular desktop application.

This application is designed to clear the selected event logs on remote systems. You can add the computer names manually, or you can scan the Active Directory of your current domain and have the app auto-populate the list with all the computers it finds. There is some basic threading in the app to improve performance and GUI responsiveness. When you exit the app, your settings are saved in the Windows registry and loaded back in when you restart the application. All messages regarding success or failure are shown in the Status window.

It requires .NET 4.5. Testing was done on Windows 8 x64, but it should run on any version of Windows with .NET 4.5. Thanks to Stack Overflow for helping me figure out that bug fix.
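
For the curious, the core idea is simple enough to sketch in a few lines of PowerShell. This is just an illustration of the concept, not the app's actual code - the log names here are examples, and the app does the equivalent work with .NET classes and its own threading:

# Rough sketch: find domain computers, then clear the chosen logs on each one.
$searcher  = [adsisearcher]'(objectCategory=computer)'      # query the current domain
$computers = $searcher.FindAll() | ForEach-Object { $_.Properties['name'][0] }

foreach ($computer in $computers) {
    try {
        Clear-EventLog -ComputerName $computer -LogName 'Application','System' -ErrorAction Stop
        Write-Host "$computer : logs cleared"
    }
    catch {
        Write-Host "$computer : $($_.Exception.Message)"
    }
}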

Screenshots:

EventLogClearer v1

EventLogClearer v1

Executable: EventLogClearer.exe (127.00 kb)

VS2012 Project (Source): EventLogClearer.v.1.0.1.22.zip (479.52 kb)

Windows Hyper-V Server 2012

Windows Server 2012

As you probably know, Windows Server 2012 went to general availability yesterday. So, I took the opportunity to upgrade my virtualization host from 2008 R2 to Windows Hyper-V Server 2012. Hyper-V Server, the standalone edition first introduced in the Server 2008 timeframe, is based on Server Core, meaning it has no GUI, and it comes with the Hyper-V role preinstalled. Hyper-V Server has no licensing cost of its own - it's free - but it includes no guest VM licenses.

I'll say right off the bat that I love Server Core. I'm sad that I don't get to play with Server Core much in production environments, but I firmly believe that will be changing in the near future. The fact that there is essentially nothing you cannot do from the command line and/or PowerShell makes Server 2012 the most scriptable and automatable Windows Server ever. By far. And on top of that, the newest version of Hyper-V is bursting at the seams with improvements over the last version, such as shared-nothing live migrations, a much more flexible and extensible virtual switch that can be modified with plugins, and higher limits on every metric - vRAM, vCPUs, maximum VHD size, and so on - that can be allocated to virtual machines. Server 2012 feels super-polished and runs faster than any previous version of Windows, especially in Core.

Now that I've drooled over the new server OS, let's talk about my real-world experiences with it so far.

I downloaded the Hyper-V Server 2012 ISO and uploaded the installation WIM to my local Windows Deployment Services server. After scooting all my existing virtual machines off the box, I rebooted it into PXE mode and began installing the new 2012 image. The install was quick and painless - no issues recognizing my RAID volumes, etc. When the newly installed server booted up, it asked for a local administrator password. Standard stuff. Then I was dumped onto a desktop with two Cmd.exe shells open, one of which was running Sconfig.

Sconfig is a basic command-line program that comes with Server Core as a supplement to help administrators do some routine setup and administration tasks on their new Core servers, since they don't have a GUI to rely on. Sconfig can do basic things like rename the computer, join a domain, set up an IP address, enable remote management, etc.

sconfig

The point is that all of that stuff can be done without Sconfig; Sconfig just makes it simpler to get up and running. Enabling remote management is particularly important, because it means we will be able to connect to this server remotely using MMC snap-ins, PowerShell sessions, and the Remote Server Administration Tools (RSAT). RSAT is not out for Windows 8/Server 2012 yet. But since I'm running Windows 8 on my workstation, and Windows 8 comes with client Hyper-V capabilities, I can add the Hyper-V Management Console on my workstation via the "Turn Windows features on or off" dialog in the Control Panel. I'll then be able to connect to the Server 2012 hypervisor remotely and manage it with a GUI.
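
As a rough illustration (the computer name and domain below are made up), the tasks Sconfig walks you through map to one-liners, and the Hyper-V management tools on the Windows 8 side can be added from PowerShell too - assuming the optional feature name is what I think it is, which you can double-check with Get-WindowsOptionalFeature -Online:

# On the new Core box - the Sconfig menu items are really just these cmdlets
# (reboot once at the end rather than after each step):
Rename-Computer -NewName HYPER2012
Add-Computer -DomainName corp.example.com
Enable-PSRemoting -Force
Restart-Computer

# On the Windows 8 workstation - add the Hyper-V management console without touching Control Panel:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-Clients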

Speaking of Windows 8 - the Windows key is now your best friend. Hit Win+X on your keyboard. Now hit C. Try that a few times. Have you ever opened a command prompt that quickly before? No, you haven't.

So I did all the setup tasks there in Sconfig, and that's when I noticed that Sconfig doesn't appear to be capable of setting IPv6 addresses on my network cards:

noipv6

Bummer, I'll have to do that later. All my servers run dual-stacked, so IPv6 addresses are important to me. After joining my domain, setting up the time zone, and setting up IPv4 addresses on the two NICs in this server, I figured I'd done all the damage I could do with Sconfig and logged out.

Now that remote desktop and remote management were enabled, I could do the rest of the setup of the server from the comfort of my Win8 desktop. First, I wanted to enter a remote Powershell session on the server:

PS C:\Users\ryan> New-PSSession -ComputerName hyper2012 | Enter-PSSession
[hyper2012]: PS C:\Users\ryan\Documents>

A one-liner and I'm in. Now, let's set those IPv6 addresses. My first thought was to use netsh, but upon running it, netsh gives me a little warning that it will eventually be deprecated and that I should use PowerShell instead. Well, I don't want to use anything but the latest, so PowerShell it is!

PS C:\Users\ryan> New-NetIPAddress -AddressFamily IPv6 -InterfaceIndex 13 -IPAddress "fd58:2c98:ee9c:279b::3" -PrefixLength 64

That's it. The cmdlet will accept the IP address without the prefix length, but the prefix length is very important - if you omit it, you will wonder why it's not working. Prefix length is the same thing as a subnet mask: if this were an IPv4 address with a subnet mask of 255.255.255.0, I would use 24 as the prefix length. My IPv6 addresses use a 64-bit prefix. Since these cmdlets are brand new, they're not fully documented yet; Get-Help did not help me beyond listing the parameters of the cmdlet. But the cool thing about PowerShell is that it's simple enough that you can pretty easily guess your way through it. Plus, that's the coolest thing about using an OS that's hot off the presses: you get to run through exercises that not many people have tried yet, and you can't just Google all the answers because the information just isn't out there yet.
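
If you want to sanity-check that the prefix length actually took (interface index 13 is just the NIC from my example above), Get-NetIPAddress will show it:

PS C:\Users\ryan> Get-NetIPAddress -InterfaceIndex 13 -AddressFamily IPv6 | Format-Table IPAddress, PrefixLength, PrefixOrigin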

Now let's put some IPv6 DNS servers on these NICs to go along with our IP addresses:

PS C:\Users\ryan> Set-DnsClientServerAddress -Addresses fd58:2c98:ee9c:279b::1,fd58:2c98:ee9c:279b::2 -InterfaceIndex 13

One thing I noticed is that even though I enabled Remote Management on the server, that did not enable the firewall rules that allow me to remotely administer the Windows Firewall via MMC. So I needed to enable those firewall rules on the server:

PS C:\Users\ryan> Set-NetFirewallRule -Name RemoteFwAdmin-RPCSS-In-TCP -Enabled True
PS C:\Users\ryan> Set-NetFirewallRule -Name RemoteFwAdmin-In-TCP -Enabled True
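
A shortcut I found later: rather than enabling each rule by name, you can hit the whole rule group at once. I believe the display group for these two rules is the one shown below, but verify the exact group name with Get-NetFirewallRule on your own box first:

PS C:\Users\ryan> Enable-NetFirewallRule -DisplayGroup "Windows Firewall Remote Management"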

I am in favor of using the Windows Firewall and I like to control it across all my servers via Group Policy. Almost every organization I have worked with unilaterally disables the Windows firewall on all their devices. I don't know why... you have centralized control of it on all your computers via Active Directory and it adds another layer of security to your environment. In fact, had I not been in such a hurry and if I had let the Group Policy run on this server, those firewall rules would have been automatically enabled and I wouldn't have had to do that manual step.

So far I'm loving it. You're reading this blog post right now on a virtual machine that is hosted on Hyper-V Server 2012. I've never been as excited about a new Windows Server release as I am about 2012, and I make sure that everyone around me knows it, for better or worse. :)

GPO Application Precedence - "Just Because You Can" Edition

This one really gets back to my roots as a big fan of everything related to Active Directory and Group Policies. Someone had a question yesterday about GPO application that, I admit, gave me pause. It would have been an excellent question for an MCITP exam or a sysadmin interview.

It's also an example of a GPO strategy that might be too complicated for its own good.

The basic behavior of Group Policy application order is well-known by almost every Windows admin. Let's review:

  • Local policies are applied first.
  • Then policies linked at the site level.
  • Then policies linked at the domain level.
  • Then GPOs linked to OUs in order such that policies linked to "higher" OUs apply first, and the policies linked "closest" to the object go last.
  • If multiple GPOs are linked at the same level, they are applied from the bottom of the list up - that is, by link order, with link order 1 applied last (and therefore winning any conflicts).
  • Last writer wins, i.e., each subsequent GPO overwrites any conflicting settings defined in earlier GPOs. Settings that do not conflict are merged.
  • Enforced (formerly known as No Override), Block Inheritance, and Loopback Processing can be used at various levels of the aforementioned hierarchy in various combinations to augment the behavior of GPO application.

So that seems like a pretty simple system, but it's just flexible enough that you can get into some confusing situations with it. For instance, take the following OU structure:
 
(OU)All Servers
       |
       +--(OU)Terminal Servers

The Terminal Servers OU is a Sub-OU of the All Servers OU. Now, let's link two different policy objects to each of the OUs:

(OU)All Servers [Servers_GPO]
       |
       +--(OU)Terminal Servers [TS_GPO]

So using what we know, we assume that a computer object in the Terminal Servers OU will get all the settings from Servers_GPO, and then it will receive settings from TS_GPO, which will overwrite any conflicting settings from Servers_GPO.

Now let's put the Enforced flag on Servers_GPO:

(OU)All Servers [Servers_GPO-ENFORCED]
       |
       +--(OU)Terminal Servers [TS_GPO]

Now the settings in Servers_GPO will win, even if they conflict with settings in TS_GPO. But let's go one step further. What happens if you also Enforce TS_GPO?

(OU)All Servers [Servers_GPO-ENFORCED]
       |
       +--(OU)Terminal Servers [TS_GPO-ENFORCED]

Which GPO will win?  Had I been taking a Microsoft exam, I might have had to flip a coin. I have to admit, I had never considered this scenario. If neither policy was enforced, we know TS_GPO would win. If Servers_GPO was enforced and TS_GPO was not enforced, then we know Servers_GPO would win. But what about now?

And furthermore, why would anyone want to do that? I can't explain what goes on in some administrators' heads when they're planning these things out, but luckily I did have TechNet at my disposal:

You can specify that the settings in a GPO link should take precedence over the settings of any child object by setting that link to Enforced. GPO-links that are enforced cannot be blocked from the parent container. Without enforcement from above, the settings of the GPO links at the higher level (parent) are overwritten by settings in GPOs linked to child organizational units, if the GPOs contain conflicting settings. With enforcement, the parent GPO link always has precedence. By default, GPO links are not enforced.

So with that, we should be able to surmise that the parent GPO - Servers_GPO - will win. A little testing confirmed it - the higher-level GPO takes precedence over a lower-level GPO even when they're both enforced.
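
If you'd rather check this kind of precedence question yourself than take my word for it, the GroupPolicy module (or a plain gpresult run on the target computer) will show the effective link order after all enforcement and inheritance rules are applied. The distinguished name below is made up to match the example OUs:

PS C:\> Import-Module GroupPolicy
PS C:\> Get-GPInheritance -Target "OU=Terminal Servers,OU=All Servers,DC=corp,DC=example,DC=com"
PS C:\> gpresult /r     # run this one on the terminal server itself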

I might call this one of those "just because you can, doesn't mean you should" sort of administrative practices.

Server 2012 - Out with the Old, In with the New

I came across this Technet article that details features that are being removed or deprecated as of Windows Server 2012.  Below are a few of my inane and probably ill-informed thoughts:

"AD Federation Services - Support for using Active Directory Lightweight Directory Services (AD LDS) as an authentication store is removed"

I guess this means AD FS can only store authentication information in AD now? I know that some people use it, but I think I wouldn't mind seeing AD LDS go altogether.

"Oclist.exe has been removed. Instead, use Dism.exe."

I'm all for consolidating redundant tools and putting all the various bits of related functionality in one place.
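
For reference, the Dism.exe equivalent of the old Oclist listing looks something like this - and there's also a DISM PowerShell module on 2012 if you'd rather stay in PowerShell:

C:\> Dism.exe /Online /Get-Features /Format:Table

PS C:\> Get-WindowsOptionalFeature -Online | Sort-Object FeatureName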

  • "The Cluster Automation Server (MSClus) COM application programming interface (API) has been made an optional component called FailoverCluster-AutomationServer which is not installed by default. Cluster programmatic functionality is now provided by the Failover Cluster API and the Failover Cluster WMI provider.
  • The Cluster.exe command-line interface has been made an optional component called FailoverCluster-CmdInterface which is not installed by default. Cluster command-line functionality is provided by the Failover Cluster PowerShell cmdlets.
  • Support for 32-bit cluster resource DLLs has been deprecated. Use 64-bit versions instead."

I'm also behind the move to a united effort based on PowerShell. Knowing that you can use PowerShell to manage all the parts of your server, as opposed to a hundred separate CLI executables, is a good thing. I also like deprecating 32-bit junk... although that is going to cause some heartburn for some enterprises, as uprooting 15-year-old technology in a big enterprise can often be like pulling teeth. Actually, more like getting approval from Congress first before you commence pulling teeth.
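
To give an idea of what the cluster.exe-to-PowerShell move looks like in practice, here's the sort of thing the FailoverClusters module gives you (the cluster name here is hypothetical):

PS C:\> Import-Module FailoverClusters
PS C:\> Get-ClusterNode -Cluster CLUSTER01
PS C:\> Get-ClusterResource -Cluster CLUSTER01 | Where-Object { $_.State -ne 'Online' }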

"Support for Token Rings has been removed."

Oh no what ever will I do without my token ring network!? Oh wait that's right, 1972 called and they want their network back. Next thing you know they'll be telling me to get rid of Banyan Vines too!

"Versions of Microsoft SQL Server prior to 7.0 are no longer supported. Computers running Windows Server 2012 that connect to computers running SQL Server 6.5 (or earlier) will receive an error message."

This is another interesting one. A lot of very large companies rely on really old SQL servers... I see a lot of painstaking migrations in my near future.

  • "ODBC support for 16- and 32-bit applications and drivers is deprecated. Use 64-bit versions instead.
  • ODBC/OLEDB support for Microsoft Oracle is deprecated. Migrate to drivers and providers supplied by Oracle.
  • Jet Red RDBMS and ODBC drivers are deprecated."

Ouch again! Microsoft seems to really be emphasizing "stop using old shit, k thx."*

(* not an actual Microsoft quote)

"The Subsystem for UNIX-based Applications (SUA) is deprecated. If you use the SUA POSIX subsystem with this release, use Hyper-V to virtualize the server. If you use the tools provided by SUA, switch to Cygwin's POSIX emulation, or use either mingw-w64 (available from Sourceforge.net) or MinGW (available from MinGW.org) for doing a native port."

I for one am glad to see this go. Just make a *nix VM if you need to fork() so badly.

  • "The WMI provider for Simple Network Management Protocol (SNMP) is deprecated because the SNMP service is being deprecated.
  • The WMI provider for the Win32_ServerFeature API is deprecated.
  • The WMI provider for Active Directory is deprecated. Manage Active Directory with PowerShell cmdlets.
  • The WMI command-line tool (Wmic) is deprecated. Use PowerShell cmdlets instead.
  • The namespace for version 1.0 of WMI is deprecated. Prepare to adapt scripts for a revised namespace."

All good stuff. Dropping off the really old vestigial junk, and consolidating everything under the banner of Powershell.
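
As a quick before-and-after on the Wmic deprecation, the new CIM cmdlets cover the same ground:

C:\> wmic os get Caption,Version

PS C:\> Get-CimInstance Win32_OperatingSystem | Select-Object Caption, Version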

There are a few more bullet points in the original article, but those were the ones I cared most about. I'm a little surprised to see them cutting ties with 32-bit SQL, but I'm glad they're doing it. It's going to cause some work for people (like me) who still use large, distributed SQL systems to start migrating, but we'll all be better off in the long run.

Thread Quantum

Edit October 26, 2014: The commenter known as dirbase corrected me on some errors I made in this post, which I will now be correcting.  Thank you very much for the constructive feedback, dirbase. :) 

Still being a bit lazy about the blog -- I've been busy reading and working, both of which have longer quantums than writing for this blog, apparently.

Basically I just wanted to take a moment out of this Sunday afternoon to discuss thread quantums.  This entire post is inspired by Windows Internals, 6th Ed. by Mark Russinovich, et al.  As always, I could not recommend the book more highly.

So, we've all seen this screen, right?  Adjust processor scheduling for best performance of "Programs" or "Background services:"

Advanced System Properties

Well that seems like a very simple, straightforward choice... but who knows what it actually means?  To answer that question, we need to know about a basic mechanism that Windows uses: Quantum.

A thread quantum is the amount of time that a thread is allowed to run until Windows checks if there is another thread at the same priority waiting for its chance to run.  If there are no other threads of the same priority waiting to run, then the thread is allowed to run for another quantum.

Process priority is the attribute that you can set on a process in Task Manager or Process Explorer by right-clicking a process and choosing a priority. Even though it's threads that actually "run," not processes per se, each process can have many threads that come and go dynamically, so Windows lets you set a priority per process, and in turn each thread of that process inherits its base priority from its parent process. (Threads actually have two priority attributes, a base and a current priority. Scheduling decisions are made based on the thread's current priority.)
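
You can poke at this from PowerShell too - a quick sketch, with Notepad standing in as an example process:

PS C:\> $p = Get-Process -Name notepad | Select-Object -First 1
PS C:\> $p.PriorityClass = 'AboveNormal'                               # sets the base priority for the whole process
PS C:\> $p.Threads | Select-Object Id, BasePriority, CurrentPriority   # its threads inherit that base priority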

There are 32 process priority levels, 0-31, that are often given simplified labels such as "Normal," "Above Normal," "Realtime," etc. All of those levels still correspond to code running at levels 0-1 on the Interrupt Request Level (IRQL) scale. What this means is that even if you set a process to run at "Realtime" - the highest possible priority - the process and its threads will still not have the ability to preempt or block hardware interrupts, but they could delay and even block the execution of important system threads, not to mention all other code running at passive level. That is why you should have a very good reason for setting a process to such a high priority: doing so has the ability to affect system-wide stability.

So now, back to quantum. We now know its definition, but how long exactly is a quantum? That depends on your hardware clock resolution (not to be confused with timer expiration frequency), the speed of your processor, and how you have configured that setting pictured above to optimize performance for "Programs" or "Background services." As of Windows 7 and Windows Server 2008 R2, client operating systems are configured to let threads run for 2 clock intervals before another scheduling decision is made, while servers use 12 clock intervals. So when you change that setting on the Performance Options page, you are bouncing back and forth between those two values.
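
If I remember my Windows Internals correctly, that radio button is persisted in the registry as the Win32PrioritySeparation value, which encodes the quantum length along with whether variable-length quantums are used (more on that at the end of this post). You can peek at it like this:

PS C:\> Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\PriorityControl' | Select-Object Win32PrioritySeparation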

The reasoning for the longer default quantum on server operating systems is to minimize context switching: if a process on a server is woken up to service a request, a longer quantum gives it a better chance of completing that request and going back to sleep without being interrupted in the middle. On the other hand, shorter quantums make things feel "snappier" on your desktop, leading to a better experience on desktop OSes.

As I said before, the resolution of your system clock is a factor in determining how long a quantum really is.  In contemporary x86 and x64 processors, this is usually 15.6 milliseconds and it's set by the HAL, not the OS kernel.  You can see what it's set to for yourself by using a kernel debugger and examining KeMaximumIncrement:

KeMaximumIncrement

Reading the bytes right to left, if you convert 02, 62, 5A to decimal, you get 156250. The value is in 100-nanosecond units, so 156,250 × 100 ns = 15.625 ms - the familiar ~15.6 ms. Don't confuse this value with the timer expiration frequency/interval. The two values are related, but different.

There are a couple of different ways to obtain timer expiration frequency/interval.  One way is with clockres.exe from Sysinternals:

clockres.exe

Notice that the maximum timer interval is the familiar 15.6 milliseconds, which is also my hardware clock interval.  But my current timer interval is 1ms.  Programs running on your system can request system-wide changes to this timer, which is what has happened here.  You can use powercfg.exe -energy to run a power efficiency analysis of your system that will identify processes that have made such requests to increase the resolution of the system timer.  When timer expirations fire at a higher frequency, that causes the system to use more energy, which can be of significant concern on laptops and mobile devices that run on battery power.  In my case, it's usually Google Chrome that asks that the system timer resolution be increased from its default of 15.6ms.  But remember that even when this timer interval changes, it doesn't change the length of thread quantums, as thread quantum calculation is done using the max or base clock interval.

When Windows boots up, it uses the above KeMaximumIncrement value, in seconds, and multiplies it by the processor speed in Hertz, divides it by 3 and stores the result in KiCyclesPerClockQuantum:

KiCyclesPerClockQuantum

Converted to decimal, that is 17151040 CPU cycles.

The other factor in determining the length of a quantum is base processor frequency.  You can obtain this value in several different ways, including using the !cpuinfo command in the debugger: 

!cpuinfo

3.293 GHz is the base frequency of my processor, even though a command such as Powershell's

$(Get-WMIObject Win32_Processor).MaxClockSpeed

would report 3801 (MHz, i.e. 3.801 GHz) as the maximum frequency. This is a slightly overclocked Intel Core i5-2500K. Now that we have those two pieces of information, all that's left is some good old-fashioned arithmetic:

The CPU completes 3,293,000,000 cycles per second, and the max timer interval, as well as the hardware clock resolution, is 15.6 ms.  3293000000 * 0.0156 = 51370800 CPU cycles per clock interval.

1 Quantum Unit = 1/3 (one third) of a clock interval, therefore 1 Quantum Unit = 17123600 CPU cycles.

This is only a tiny amount off from the value stored in KiCyclesPerClockQuantum, and the difference is just rounding: using the exact 15.625 ms clock interval instead of 15.6 ms gives 3,293,000,000 × 0.015625 / 3 ≈ 17,151,042 CPU cycles, which matches the stored value almost exactly.

At a rate of 3.293 GHz, each CPU cycle takes about 304 picoseconds, which works out to about 5.2 milliseconds per quantum unit. Since my PC is configured for thread quantums of 2 clock intervals, and each clock interval is 3 quantum units, that means my PC is making a thread scheduling decision about every 31 milliseconds.
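
Here's the same back-of-the-envelope math in PowerShell, using the exact 15.625 ms clock interval - plug in your own CPU frequency:

# Reproducing the arithmetic above with numbers from my machine.
$cyclesPerSecond   = 3293000000            # base CPU frequency in Hz (from !cpuinfo)
$clockInterval     = 156250 * 100e-9       # KeMaximumIncrement in 100ns units -> 0.015625 s
$cyclesPerInterval = $cyclesPerSecond * $clockInterval
$cyclesPerQuantum  = $cyclesPerInterval / 3               # 1 clock interval = 3 quantum units
$quantumMs         = $cyclesPerQuantum / $cyclesPerSecond * 1000
$scheduleMs        = 2 * $clockInterval * 1000            # client quantum = 2 clock intervals
"{0:N0} cycles and {1:N1} ms per quantum unit; scheduling decision roughly every {2:N1} ms" -f $cyclesPerQuantum, $quantumMs, $scheduleMs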

Now there's one final complication: by using the "Programs" performance setting as opposed to the "Background services" setting, you are also enabling variable-length quantums, whereas a typically configured server uses fixed-length, 12-clock-interval quantums. I'll leave off here; if you're interested in knowing more about variable-length quantums, I'd suggest the book I mentioned at the beginning of this post.