EventLogClearer v1.0.1.22

Update on 11/19/2012: Check out my updated version here.

I've been playing with Visual Studio 2012 the past couple of days, and this is the first full application I've written with it. I originally was going to try to make a Windows 8 modern-style application, but that's still a little too foreign to me and I ended up giving up and going back to a regular desktop application.

This application is designed to clear the selected event logs on remote systems. You can add the computer names manually, or you can scan the Active Directory of your current domain and the app will auto-populate the list with all the computers it found. There is some basic threading in the app to improve performance and GUI responsiveness. When you exit the app, your settings are saved in the Windows registry, and loaded back in again when you restart the application. All messages regarding success or failure will be shown in the Status window.
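The fan-out pattern the app uses (the real thing is .NET/WinForms) can be sketched in a few lines of Python. Everything here is invented for illustration - `clear_logs`, the computer names, and the log names are hypothetical stand-ins for the actual remote event log calls:

```python
from concurrent.futures import ThreadPoolExecutor

def clear_logs(computer, log_names):
    """Hypothetical stand-in for the per-machine work the app does
    (in .NET, the EventLog class can clear a named log on a remote host)."""
    return [(computer, log, "Success") for log in log_names]

computers = ["SERVER01", "SERVER02", "SERVER03"]
logs = ["Application", "System"]

# Fan the per-computer work out across a small thread pool so one slow
# or unreachable host doesn't block the GUI or the rest of the list.
with ThreadPoolExecutor(max_workers=4) as pool:
    per_host = list(pool.map(lambda c: clear_logs(c, logs), computers))

# Flatten the per-host results into one status list for display.
status = [entry for host_results in per_host for entry in host_results]
```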

It requires .NET 4.5. Testing was done on Windows 8 x64, but it should run on any version of Windows with .NET 4.5 installed. Thanks to Stack Overflow for helping me figure out that bug fix.

Screenshots:

[Screenshots: EventLogClearer v1]

Executable: EventLogClearer.exe (127.00 kb)

VS2012 Project (Source): EventLogClearer.v.1.0.1.22.zip (479.52 kb)

A Lesser-Known Side-Effect of the Godaddy Outage

So GoDaddy.com experienced a massive denial-of-service attack and subsequent outage yesterday. GoDaddy hosts thousands of websites, email addresses, and global name servers, all of which were taken down yesterday for at least an hour or two. There are of course rumors that the "hacker group" Anonymous was somehow involved. Maybe they were, or maybe they weren't, but the fact is thousands of websites and millions of users across the globe were indiscriminately targeted. Lots of innocent small businesses with online operations were unjustly hurt by the actions of whatever jackwagon(s) were involved.

The most obvious effect of the denial-of-service attack was that all GoDaddy websites were inaccessible. Not just GoDaddy.com itself, but all customer websites hosted by them. DNS records were unavailable for huge swaths of the internet. Even the site http://www.downforeveryoneorjustme.com/ was overloaded by people wondering if a website was, in fact, down for everyone.

One less talked-about impact was that the GoDaddy certificate revocation server was down too. That meant anyone on the web, and any automated tool monitoring the availability of HTTPS websites, was unable to check for the revocation of SSL certificates issued by GoDaddy.

Some systems might return an error code 12057. The Windows WinInet API documentation defines it thusly:

#define ERROR_INTERNET_SEC_CERT_REV_FAILED    12057 // Unable to validate the revocation of the SSL certificate because the revocation server is unavailable
#define ERROR_WINHTTP_SECURE_CERT_REV_FAILED  12057 // Same as ERROR_INTERNET_SEC_CERT_REV_FAILED
#define CRYPT_E_REVOCATION_OFFLINE       0x80092013 // Since the revocation server was offline, the called function wasn't able to complete the revocation check

I.e., you can't check for certificate revocation because GoDaddy is getting pounded at the moment.
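The decision a client faces here boils down to "soft-fail" versus "hard-fail" when the revocation server can't be reached. A minimal Python sketch of that policy choice, using the 12057 constant quoted above (the function and its behavior are my illustration, not any real API):

```python
# WinInet/WinHTTP status code quoted above: revocation server unreachable.
ERROR_INTERNET_SEC_CERT_REV_FAILED = 12057

def revocation_policy(status_code, hard_fail=False):
    """Decide what to do when the CRL/OCSP responder cannot be reached.
    Browsers typically soft-fail (proceed anyway); a hard-fail client
    treats an unverifiable certificate as untrusted and aborts."""
    if status_code == ERROR_INTERNET_SEC_CERT_REV_FAILED:
        return "abort" if hard_fail else "proceed"
    return "proceed"
```

The soft-fail default is why most users never noticed this side of the outage - their browsers quietly carried on without the revocation check.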

So the next question is, 'Should we care?'

If you absolutely just needed to clear this error, then you can go into the settings/options of your web browser, and uncheck the "Check for certificate revocation" option. Internet Explorer seems to have this enabled by default, but it can be switched off. Chrome has this unchecked by default but it can be turned on.

Personally, I think we should care about checking for certificate revocation. By not checking for cert revocations, you're losing one of the big benefits that SSL certificates provide. If a certificate's private key is compromised, allowing an attacker to impersonate the intended certificate owner over the internet, I would certainly like to know if and when that certificate is revoked.

It may be more convenient and it may rely on one less component if you disable CRL checking, but if I browse to my online banking website one day, and I get a warning about it using a revoked certificate, I'm certainly not logging in!

Windows Hyper-V Server 2012

As you probably know, Windows Server 2012 went to general availability yesterday. So, I took the opportunity to upgrade my virtualization host from 2008 R2 to Windows Hyper-V Server 2012. Hyper-V Server, which has been around since the 2008 era, is based on Server Core, meaning it has no GUI, and it comes with the Hyper-V role preinstalled. Hyper-V Server has no licensing cost of its own - it's free - but it comes with no free guest VM licenses.

I'll say right off the bat that I love Server Core. I'm sad that I don't get to play with Server Core much in production environments, but I firmly believe that will be changing in the near future. The fact that there is literally nothing you cannot do from the command line and/or Powershell makes Server 2012 the most scriptable and automatable Windows Server ever. By far. And on top of that, the newest version of Hyper-V is bursting at the seams with improvements over the last version, such as shared-nothing live migrations, a much more flexible and extensible virtual switch that can be modified with plugins, and higher limits on vRAM, vCPUs, VHD size, and everything else that can be allocated to virtual machines. Server 2012 feels super-polished, and runs faster than any previous version of Windows, especially Core.

Now that I've drooled over the new server OS, let's talk about my real-world experiences with it so far.

I downloaded the Hyper-V Server 2012 ISO, and uploaded the installation wim file to my local Windows Deployment Server. After scooting all my existing virtual machines off the box, I rebooted it into PXE mode and began installing the new 2012 image. The install was quick and painless. No issues recognizing my RAID volumes, etc. When the newly created server booted up, it asked for a local administrator password. Standard stuff. Then, I was dumped into a desktop with two Cmd.exe shells open, one of which was running Sconfig.

Sconfig is a basic command-line program that comes with Server Core as a supplement to help administrators do some routine setup and administration tasks on their new Core servers, since they don't have a GUI to rely on. Sconfig can do basic things like rename the computer, join a domain, set up an IP address, enable remote management, etc.

[Screenshot: the Sconfig menu]

The point is that all of that stuff can be done without Sconfig - Sconfig just makes it simpler to get up and running. Enabling remote management is particularly important, because it means we will be able to connect to this server remotely using MMC snap-ins, Powershell sessions, and the Remote Server Administration Tools (RSAT). RSAT is not out for Windows 8/Server 2012 yet. But since I'm running Windows 8 on my workstation, and Windows 8 comes with client Hyper-V capabilities, I can add the Hyper-V Management Console on my workstation via the "Turn Windows features on or off" applet in the Control Panel. I'll then be able to connect to the Server 2012 hypervisor remotely and play around with a GUI.

Speaking of Windows 8 - the Windows key is now your best friend. Hit Win+X on your keyboard. Now hit C. Try that a few times. Have you ever opened a command prompt that quickly before? No, you haven't.

So I did all the setup tasks there in Sconfig, and that's when I noticed that it doesn't seem that Sconfig is capable of setting IPv6 addresses on my network cards:

[Screenshot: Sconfig network settings - no option to set an IPv6 address]

Bummer, I'll have to do that later. All my servers run dual-stacked, so IPv6 addresses are important to me. After joining my domain, setting up the time zone, and setting up IPv4 addresses on the two NICs in this server, I figured I'd done all the damage I could do with Sconfig and logged out.

Now that remote desktop and remote management were enabled, I could do the rest of the setup of the server from the comfort of my Win8 desktop. First, I wanted to enter a remote Powershell session on the server:

PS C:\Users\ryan> New-PSSession -ComputerName hyper2012 | Enter-PSSession
[hyper2012]: PS C:\Users\ryan\Documents>

A one-liner and I'm in. First, let's set those IPv6 addresses. At first I thought to do it with netsh. But upon executing it, netsh gives me a little warning about how it will eventually be deprecated and that I should use Powershell instead. Well I don't want to use anything but the latest, so Powershell it is!

PS C:\Users\ryan> New-NetIPAddress -AddressFamily IPv6 -InterfaceIndex 13 -IPAddress "fd58:2c98:ee9c:279b::3" -PrefixLength 64

That's it. The cmdlet will accept the IP address without the prefix length, but the prefix length is very important. If you omit it, you will wonder why it's not working. Prefix length is the same thing as a subnet mask: if this were an IPv4 address with a subnet mask of 255.255.255.0, I would use 24 as the prefix length. My IPv6 addresses use a 64-bit prefix length. Since these cmdlets are brand new, they're not fully documented yet. The Get-Help cmdlet did not help me beyond listing the parameters of the cmdlet. But the cool thing about Powershell is that it's simple enough that you can pretty easily guess your way through it. Plus, the coolest thing about using a new OS that's hot off the presses is that you get to run through exercises that not many people have yet, and you can't just Google all the answers because the info just isn't out there yet.
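The prefix length/subnet mask equivalence is easy to sanity-check with Python's standard ipaddress module - this is just an illustration of the math, not part of the server setup (the 192.0.2.0 network is a documentation example address):

```python
import ipaddress

# A /24 IPv4 prefix is the same thing as the dotted mask 255.255.255.0.
v4 = ipaddress.ip_network("192.0.2.0/24")
mask = str(v4.netmask)            # "255.255.255.0"

# The 64-bit prefix used on my IPv6 subnet leaves 64 bits for hosts.
v6 = ipaddress.ip_network("fd58:2c98:ee9c:279b::/64")
host_bits = 128 - v6.prefixlen    # 64
```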

Now let's put some IPv6 DNS servers on these NICs to go along with our IP addresses:

PS C:\Users\ryan> Set-DnsClientServerAddress -Addresses fd58:2c98:ee9c:279b::1,fd58:2c98:ee9c:279b::2 -InterfaceIndex 13

One thing I noticed is that even though I enabled Remote Management on the server, that did not enable the firewall rules that allow me to remotely administer the Windows Firewall via MMC. So I needed to enable those firewall rules on the server:

PS C:\Users\ryan> Set-NetFirewallRule -Name RemoteFwAdmin-RPCSS-In-TCP -Enabled True
PS C:\Users\ryan> Set-NetFirewallRule -Name RemoteFwAdmin-In-TCP -Enabled True

I am in favor of using the Windows Firewall, and I like to control it across all my servers via Group Policy. Almost every organization I have worked with disables the Windows Firewall outright on all their devices. I don't know why... you have centralized control of it on all your computers via Active Directory, and it adds another layer of security to your environment. In fact, had I not been in such a hurry, and if I had let Group Policy run on this server, those firewall rules would have been enabled automatically and I wouldn't have had to do that manual step.

So far I'm loving it. You're reading this blog post right now on a virtual machine that is hosted on Hyper-V Server 2012. I've never been as excited about a new Windows Server release as I am about 2012, and I make sure that everyone around me knows it, for better or worse. :)

Are Windows Administrators Less Likely to Script/Automate?

I wrote this bit as a comment on another blog this morning, and then after typing a thousand words I realized that it would make good fodder as a post on my own blog. The article that I was commenting on argued that Windows administrators are less likely to script and/or automate because Windows uses a GUI while Linux is more CLI-centric. And because Linux is more CLI-focused, it is more natural for a Linux user to get into scripting than it is for a Windows user. Without further ado, here is my comment on that article:


This article and these comments suffer from the absence of a good Microsoft engineer and/or administrator. As is common, this discussion so far has been a bunch of Linux admins complaining that Windows isn't more like Linux, but not offering much substance to the discussion from a pro-Microsoft perspective.

That said, I may be a Microsoft zealot, but I do understand and appreciate Linux. I think it’s a great, fast, modular, infinitely customizable and programmable operating system. So please don’t read this in anger, you Linux admins.

First I want to stay on track and pay respect to the original article, about scripting on Windows. Scripting has been an integral part of enterprise-grade Windows administration ever since Windows entered the enterprise ecosystem. It has evolved and gotten a lot better, especially since Powershell came on the scene, and it will continue to evolve and get better, but we've already been scripting and automating Windows in the enterprise space since the '90s. (Though maybe not as well as we would have liked back then.)

But I will make a concession. There are Windows admins out there that don't script. A lot of them. And I do view that as a problem. Here's the thing: I believe that what made Windows so popular in the first place - its accessibility and ease of use because of its GUI - is also what leads to Windows being frequently misused and misconfigured by unskilled administrators. And that, in turn, leads people to blame Windows itself for problems that could have been avoided had they hired a better engineer or admin to do the job in the first place.

GUIs and wizards are nice and have always been a mainstay of Windows, but I won’t hire an engineer or administrator without good scripting abilities. Even on Windows you should not only know how to script, but you should want to script and have the innate desire to automate. If you find yourself sitting there clicking the same button over and over again, it should be natural for you to want to find a way to make that happen automatically so you can go do other more interesting things. Now that I think about it, that’s a positive character trait that applies universally to any occupation in the world.

And yeah, it was true for a long time that there were certain things in Windows that could only be accomplished via the GUI. But that’s changing – and quickly. For instance, Exchange Server is already fully converted to the point where every action you take in the GUI management console is just executing Powershell commands in the background. There’s even a little button in the management console that will give you a preview of the Powershell commands that will be executed if you hit the ‘OK’ button to commit your changes. SQL Server 2012 will be the first version of SQL that’ll go onto Server Core. (About time.) The list goes on, but the point is that Microsoft is definitely moving in the right direction by realizing that the command line is (and always has been) the way to go for creating an automatable server OS. Microsoft is continuing to put tons of effort into that as we speak.

However, just because scripting on Windows is getting better now doesn’t mean we haven’t already been writing batch files and VB scripts for a long time that do pretty impressive things, like migrate 10,000 employee profiles for an AD domain migration.

I really love Server Core, but it's just a GUI-less configuration of the same Windows we've been using all along. Any decent Windows admin has no trouble using Core, because the command line isn't scary or foreign to them. For instance, one of the comments on this article reads:

"The root of the problem seems to be that Linux started with the command line and added GUIs later, whereas Windows did it the other way around."

I think that's false. Windows started as a shell on top of DOS - a command line-only operating system. DOS was still the underpinning of Windows for a long time, and even after Windows was re-architected and separated from DOS, the Command Prompt and command-line tools were and still are indispensable. Now I will grant you that Linux had way better shells and shell scripting capabilities than Windows did for a long time, and Microsoft did have to play catch-up in that area. Powershell and Server Core came along later and augmented the capabilities of and possibilities for Windows - but the fact remains we've been scripting and automating things using batch files and VBScript for a long time now.

There was also this comment: “Another cause for slow uptake is that Windows skills don't persist.”

Again I would say false. I can run a script I wrote in 1996 on Server 2012 just fine, with no modification. Have certain tools and functions evolved while others have been deprecated? Of course. Maybe a new version of Exchange came out with new buttons to click? Of course – that’s the evolution of technology. But your core skillset isn’t rendered irrelevant every time a new version of the software comes out. Not unless your skillset is very small and narrow.

There was also this comment:

“I also complain that PowerShell is not a "shell" in a traditional sense. It is not a means of fully interacting with the OS. There is no ecosystem of text editors, mail clients, and other tools that are needed in the daily operation and administration of servers and even clients.”

As I mentioned earlier, there are fewer and fewer things every day that cannot be done directly from Powershell or even regular command-line executables. And to the second sentence - I’m not sure if there will ever be a desire to go back to an MS-DOS Edit.exe style text editor or email clients… but I could probably write you a Powershell-based email client in an hour if you really wanted to read your emails with no formatted styles or fonts. :)


So in the end, I think the original article had a good point - there probably are, or were, more Linux admins out there with scripting abilities than Windows admins. But I also think that's changing, and Windows Server is better positioned than ever for the server market.

GPO Application Precedence - "Just Because You Can" Edition

This one really gets back to my roots as a big fan of everything related to Active Directory and Group Policies. Someone had a question yesterday about GPO application that, I admit, gave me pause. It would have been an excellent question for an MCITP exam or a sysadmin interview.

It's also an example of a GPO strategy that might be too complicated for its own good.

The basic behavior of Group Policy application order is well-known by almost every Windows admin. Let's review:

  • Local policies are applied first.
  • Then policies linked at the site level.
  • Then policies linked at the domain level.
  • Then GPOs linked to OUs in order such that policies linked to "higher" OUs apply first, and the policies linked "closest" to the object go last.
  • If multiple GPOs are linked at the same level, they apply from the bottom of the list up (i.e., by link order), so the GPO with link order 1 applies last and wins conflicts at that level.
  • Last writer wins, i.e., each subsequent GPO overwrites any conflicting settings defined in earlier GPOs. Settings that do not conflict are merged.
  • Enforced (formerly known as No Override), Block Inheritance, and Loopback Processing can be used at various levels of the aforementioned hierarchy, in various combinations, to modify the behavior of GPO application.

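The ordering rules above (ignoring Enforced and Block Inheritance for the moment) boil down to a last-writer-wins merge. A minimal Python sketch, with made-up setting names for illustration:

```python
def apply_gpos(gpos_in_order):
    """Merge GPO settings in application order (local -> site -> domain
    -> OUs, outermost OU first). Later GPOs overwrite conflicting
    settings from earlier ones; non-conflicting settings accumulate."""
    effective = {}
    for settings in gpos_in_order:
        effective.update(settings)  # last writer wins on conflicts
    return effective

domain_gpo = {"screensaver_timeout": 15, "audit_logons": True}
ou_gpo = {"screensaver_timeout": 5}

effective = apply_gpos([domain_gpo, ou_gpo])
# The OU-linked GPO applied last, so its conflicting value wins,
# while the non-conflicting domain setting is merged in:
# {"screensaver_timeout": 5, "audit_logons": True}
```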
So that seems like a pretty simple system, but it's just flexible enough that you can get into some confusing situations with it. For instance, take the following OU structure:
 
(OU)All Servers
       |
       +--(OU)Terminal Servers

The Terminal Servers OU is a Sub-OU of the All Servers OU. Now, let's link two different policy objects to each of the OUs:

(OU)All Servers [Servers_GPO]
       |
       +--(OU)Terminal Servers [TS_GPO]

So using what we know, we assume that a computer object in the Terminal Servers OU will get all the settings from Servers_GPO, and then it will receive settings from TS_GPO, which will overwrite any conflicting settings from Servers_GPO.

Now let's put the Enforced flag on Servers_GPO:

(OU)All Servers [Servers_GPO-ENFORCED]
       |
       +--(OU)Terminal Servers [TS_GPO]

Now the settings in Servers_GPO will win, even if they conflict with settings in TS_GPO. But let's go one step further. What happens if you also Enforce TS_GPO?

(OU)All Servers [Servers_GPO-ENFORCED]
       |
       +--(OU)Terminal Servers [TS_GPO-ENFORCED]

Which GPO will win?  Had I been taking a Microsoft exam, I might have had to flip a coin. I have to admit, I had never considered this scenario. If neither policy was enforced, we know TS_GPO would win. If Servers_GPO was enforced and TS_GPO was not enforced, then we know Servers_GPO would win. But what about now?

And furthermore, why would anyone want to do that? I can't explain what goes on in some administrators' heads when they're planning these things out, but luckily I had Technet at my disposal:

You can specify that the settings in a GPO link should take precedence over the settings of any child object by setting that link to Enforced. GPO-links that are enforced cannot be blocked from the parent container. Without enforcement from above, the settings of the GPO links at the higher level (parent) are overwritten by settings in GPOs linked to child organizational units, if the GPOs contain conflicting settings. With enforcement, the parent GPO link always has precedence. By default, GPO links are not enforced.

So with that, we should be able to surmise that the parent GPO - Servers_GPO - will win. A little testing confirmed it - the higher-level GPO takes precedence over a lower-level GPO even when they're both enforced.
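One way to think about the mechanics (my own model of the documented behavior, not Microsoft's actual implementation): enforced links are applied after the normal pass, in reverse order, which is exactly why the parent's enforced link wins even when both links are enforced. A sketch:

```python
def apply_gpo_links(links):
    """links: (settings, enforced) pairs in normal application order,
    parent OU before child OU."""
    effective = {}
    # Normal pass: non-enforced links, last writer wins.
    for settings, enforced in links:
        if not enforced:
            effective.update(settings)
    # Enforced links apply afterwards, in reverse order, so an enforced
    # link higher in the hierarchy overwrites an enforced one below it.
    for settings, enforced in reversed(links):
        if enforced:
            effective.update(settings)
    return effective

servers_gpo = ({"policy": "Servers_GPO"}, True)   # parent OU, enforced
ts_gpo = ({"policy": "TS_GPO"}, True)             # child OU, enforced

winner = apply_gpo_links([servers_gpo, ts_gpo])["policy"]
# winner == "Servers_GPO": the parent's enforced link takes precedence
```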

I might call this one of those "just because you can, doesn't mean you should" sort of administrative practices.