C# applications with "install" or "setup" in the name and Application Compatibility

Hello ladies and gents.

I've been busy the past few days creating a very complex (for me) .NET application in C#. I want to talk about an issue that I banged my head against for a few hours. The solution is out there and easy to find on the internet, but every person who posted the answer on a forum or in an MSDN article assumed I already knew something that I didn't, and that missing piece of knowledge kept me from joining the ranks of those who know how to solve this problem. What can I say, I'm a novice programmer.

I'm using Visual Studio 2010 on Windows 7 64-bit, and I have created a C# project whose name contains the word "install". Apparently, any time Windows Vista or 7 sees the user executing a program with the word "setup" or "install" in its name, it triggers something deep within the bowels of Windows to cough up this incisive message:

*Thank you, that's very helpful, now go away.*

I don't like the principle behind this behavior. I have read that this box will only pop up if you have the magic word in your executable name and that executable does not add an entry to the Add/Remove Programs list by the time it exits, which could indicate a failed installation. Or it could indicate that nothing went wrong at all, which in my experience is the case 100% of the time. In fact, I'd be scared to hit that "Reinstall using recommended settings" button, because I have no idea what it might do. Furthermore, I've read that some install applications do put a new entry in the Add/Remove Programs list and yet still trigger this popup.

I've been using Windows Vista and Windows 7 for a long time, and I have never once found this to be helpful in any way. But now this behavior has really gone and annoyed me by popping up every time I test during my development.

So I started researching the problem, and I was immediately led to believe that it could be easily solved by editing the manifest file in my application's project in Visual Studio. Here is what you add to your manifest file to keep it from showing that Program Compatibility popup:

<compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
   <application>
      <!--The ID below indicates application support for Windows Vista -->
      <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/>
      <!--The ID below indicates application support for Windows 7 -->
      <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
   </application>
</compatibility>

OK... I can't find my manifest file. I've looked through all the menus and tabs and buttons here in Visual Studio... hrm... maybe I just need to edit this installer.manifest file that's sitting in the same output directory as my application and then rebuild! Nope, that didn't work. The build just wipes out my changes and reverts the file to its defaults every time, because my project settings were set to "Embed manifest with default settings." So even though the default manifest was there in the output directory, any modifications I made to it got wiped out on every build.

Finally, I realized that I just need to generate a new manifest file for myself. In the Solution Explorer pane, right-click your project and choose "Add --> New Item..." and add an "Application Manifest File." It should pop up under the Resources folder in your project, whether you already had one or not. Now you can edit that manifest to your heart's content, and the changes will not get wiped out. You'll notice that the section of XML I pasted above sort of already exists in your new manifest file, so just use your head and paste in only the couple of relevant lines in the right places. ;)  You will also notice that if you go back to your Project's properties, you should see it pointing at the manifest file you just created, instead of saying "Embed manifest with default settings."
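For reference, here's roughly what the finished app.manifest ends up looking like with the supportedOS lines in place (trimmed down; the exact boilerplate Visual Studio generates around it may differ slightly):

<?xml version="1.0" encoding="utf-8"?>
<asmv1:assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1" xmlns:asmv1="urn:schemas-microsoft-com:asm.v1">
   <assemblyIdentity version="1.0.0.0" name="MyApplication.app"/>
   <!-- trustInfo section left exactly as Visual Studio generated it -->
   <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
      <application>
         <!-- Windows Vista -->
         <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/>
         <!-- Windows 7 -->
         <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
      </application>
   </compatibility>
</asmv1:assembly>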

No more "Program Compatibility Assistant" popup!  I feel like an idiot sometimes...

Now that you know about creating and editing your own application manifest file, you are well on your way to doing other things with it, such as having the application request its own administrator access from UAC. That beats, say, simply checking whether the current user is an administrator and, if not, saying "Sorry, you're not an administrator so I won't run. Please try executing the program again by right-clicking on my icon and choosing 'Run as administrator...'", which unfortunately I have seen far too many times. Follow up with this article for the details.
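If you just want a taste, the relevant piece of the manifest looks something like this; change the level from the default "asInvoker" to "requireAdministrator" and UAC will prompt for elevation before your code even runs:

<trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
   <security>
      <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
         <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
      </requestedPrivileges>
   </security>
</trustInfo>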

Monitoring with Windows Remote Management (WinRM) and Powershell Part I

Hey guys. I should have called this post "Monitoring with Windows Remote Management (WinRM), and Powershell, and maybe a Certificate Services tutorial too," but then the title would have definitely been too long. In any case, I poured many hours of effort and research into this one. Lots of trial and error. And whether it helps anyone else or not, I definitely bettered myself through the creation of this post.

I'm pretty excited about this topic. This foray into WinRM and Powershell Remoting was sparked by a conversation I had with a coworker the other day. He's a senior Unix engineer, so he obviously enjoys *nix and when presented with a problem, naturally he approaches it with the mindset of someone very familiar with and ready to use Unix/Linux tools.

I'm the opposite of that - I feel like Microsoft is the rightful king of the enterprise and usually approach problems with Windows-based solutions already in mind. But what's important is that we're both geeks and we'll both still happily delve into either realm when it presents an interesting problem that needs solving. There's a mutual respect there, even though we don't play with the same toys.

The Unix engineer wants to monitor all the systems using SNMP because it's tried and true, it's been around forever, and it doesn't require an agent or expensive third-party software. SNMP wasn't very secure or feature-rich at first, so now we're on SNMPv3. Then there's WBEM; certain vendors like HP have their own implementations of it. I guess Microsoft wasn't in love with either and decided to go its own way, as Microsoft is wont to do, which is why you won't find an out-of-the-box implementation of SNMPv3 from Microsoft.

One nice thing about SNMP though, is that it uses one static, predictable port.

In large enterprise IT infrastructures, you're likely to see dozens of sites, hundreds (if not thousands) of subnets, and sprinklings of Windows and Unix devices all commingled together... and you can't swing a dead cat without hitting a firewall, which may or may not have some draconian port restrictions on it. Furthermore, in a big enterprise you're likely to see the kind of bureaucracy and separation of internal organizations where the server infrastructure guys can't just go and reconfigure firewalls on their own, and the network guys can't make changes without running them by a "change advisory board" first. It all basically makes you want to pull your hair out while you wait... and wait, and wait some more. You just want to be able to communicate with your other systems, wherever they are.

Which brings us to WinRM and Powershell Remoting. WinRM, a component of Windows Hardware Management, is Microsoft's implementation of the multi-platform, industry-standard WS-Management protocol. (Like WMI is Microsoft's implementation of WBEM. Getting tired of the acronym soup yet? We're just getting started. You might also want to review WMI Architecture.) I used WinRM in a previous post, but only used the "quickconfig" option. Seems like most people rarely go any deeper than the quickconfig parameter.

Here's an excerpt from a Technet doc:

"WinRM is Microsoft's implementation of the WS-Management protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that enables hardware and operating systems from different vendors to interoperate. You can think of WinRM as the server side and WinRS the client side of WS-Management."

I bolded the phrase that especially made my ears perk up. You see, Windows has a long history with things like RPC and DCOM. Those protocols have been instrumental in many awesome distributed systems and tool sets throughout Microsoft's history. But it just so happens that these protocols are also probably the most complex and most firewall-unfriendly protocols around. It's extremely fortuitous, then, that Ned over at AskDS just happened to write up a magnificent explication of Microsoft RPC. (Open that link in a background tab and read it after you're done here.)

Here's the thing - what if I want to remotely monitor or interact with a machine in another country, or create a distributed system that spans continents? There are dozens of patchwork networks between the systems. Each packet between the systems traverses firewall after firewall. Suddenly, protocols such as RPC are out the window. How am I supposed to get every firewall owner from here to Timbuktu to let my RPC and/or DCOM traffic through?

That's why monitoring applications like SCOM or NetIQ AppManager require the installation of agents on the machines. They collect the data locally and then ship it to a central management server using just one or two static ports. Well, they do other more complex stuff too that requires software be installed on the machine, but that's beside the point.

Alright, enough talk. Let's get to work on gathering performance metrics remotely from a Windows server. There are a few scenarios to test here: communication within the boundaries of an Active Directory domain, and communication with an external, non-domain machine. After that, we'll explore SSL authentication and encryption.

The first thing you need to do is set up and configure the WinRM service. One important thing to remember is that just starting the WinRM service isn't enough - you still have to explicitly create a listener. In addition, like most things SSL, it requires a certificate to properly authenticate and encrypt data. Run: 

winrm get winrm/config

to see the existing default WinRM configuration.

WinRM originally used ports 80 for HTTP and 443 for HTTPS. With Win7 and 2k8R2, it has changed to use ports 5985 and 5986 respectively. But those are just defaults and you can change the listener(s) back to the old ports if you want. Or any port for that matter. Run:

winrm enumerate winrm/config/listener

to list the WinRM listeners that are running. You should get nothing, because we haven't configured any listeners yet. WinRM over SSL will not work with a self-signed certificate. It has to be legit. From support.microsoft.com:

"WinRM HTTPS requires a local computer "Server Authentication" certificate with a CN matching the hostname, that is not expired, revoked, or self-signed to be installed."

To set up a WinRM listener on your machine, you can run

winrm quickconfig

or

winrm quickconfig -transport:HTTPS

or even

winrm create winrm/config/listener?Address=*+Transport=HTTPS @{Port="443"}

Use "set" instead of "create" if you want to modify an existing listener. The @{} bit at the end is called a hash table and can be used to pass multiple values. The WinRM.cmd command line tool is actually just a wrapper for winrm.vbs, a VB script. The quickconfig stuff just runs some script that configures and starts the listener, starts and sets the WinRM service to automatic, and creates some Windows Firewall exceptions. What is more is that Powershell has many cmdlets that use WinRM, and the entire concept of Powershell Remoting uses WinRM. So now that you know the fundamentals of WinRM and what's going on in the background, let's move ahead into using Powershell. In fact, you can emulate all of the same behavior of "winrm quickconfig" by instead running 

Configure-SMRemoting.ps1

from within Powershell to set up the WinRM service. Now from another machine, fire up Powershell and try to use the WinRM service you just set up:

$dc01 = New-PSSession -ComputerName DC01
Invoke-Command -Session $dc01 -ScriptBlock { gwmi win32_computersystem }

That returns the remote machine's Win32_ComputerSystem information: name, domain, manufacturer, model, total physical memory, and so on.

You just pulled some data remotely using WinRM! The difference between using a "session" in Powershell, and simply executing cmdlets using the -ComputerName parameter, is that a session persists such that you can run multiple different sets of commands that all share the same data. If you try to run New-PSSession to connect to a computer on which you have not configured the WinRM service, you will get a nasty red error. You can also run a command on many machines simultaneously, etc. Hell, it's Powershell. You can do anything.
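For instance, here's a rough sketch of both styles side by side (the computer names are made up for illustration):

# One-off remote command; a new connection is set up and torn down each time:
Invoke-Command -ComputerName DC01 -ScriptBlock { gwmi win32_operatingsystem }

# Persistent sessions against several machines at once. Variables created in one
# Invoke-Command survive into the next because the session sticks around:
$sessions = New-PSSession -ComputerName DC01, DC02, DC03
Invoke-Command -Session $sessions -ScriptBlock { $os = gwmi win32_operatingsystem }
Invoke-Command -Session $sessions -ScriptBlock { $os.LastBootUpTime }
Remove-PSSession $sessions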

Alright so that was simple, but that's because we were operating within the safe boundaries of our Active Directory domain and all the authentication was done in the background. What about monitoring a standalone machine, such as SERVER1?

My first test machine:

  • Hostname: SERVER1 
  • IP: 192.168.1.10 
  • OS: Windows 2008 R2 SP1, fully patched, Windows Firewall is on
  • It's not a member of any domain

First things first: Launch Powershell on SERVER1. Run:

Set-ExecutionPolicy Unrestricted

Then set up your WinRM service and listener by running

Configure-SMRemoting.ps1

and following the prompts. If the WinRM server (SERVER1) is not in your forest (it's not) or otherwise can't use Kerberos, then HTTPS/SSL must be used, or the destination machine must be added to the TrustedHosts configuration setting. Let's try the latter first. On your client, add the WinRM server to the "Trusted Hosts" list:
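Something along these lines should do it (the account name is just an example for the standalone SERVER1, and treat this as a sketch rather than the exact commands):

# Add SERVER1 to the client's TrustedHosts list (you could also use its IP, 192.168.1.10):
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "SERVER1" -Force

# The winrm.cmd equivalent would be:
#    winrm set winrm/config/client @{TrustedHosts="SERVER1"}

# Now open a session with a local account on SERVER1:
$server1 = New-PSSession -ComputerName SERVER1 -Credential SERVER1\Administrator
Invoke-Command -Session $server1 -ScriptBlock { gwmi win32_computersystem }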

We just authenticated and successfully created a remote session to SERVER1 using the Negotiate protocol! Negotiate is basically "use Kerberos if possible, fall back to NTLM if not." So the credentials are passed via NTLM, which is not clear text, but it's not awesome either. You can find a description of the rest of the authentication methods here, about halfway down the page, if you need a refresher.

Edit 1/29/2012: It should be noted that even within a domain, for Kerberos authentication to work when using WinRM, an SPN for the service must be registered in AD. As an example, you can find all of the "WSMAN" SPNs currently registered in your forest with this command:

setspn -T yourForest -F -Q WSMAN/*

SPN creation for this should have been taken care of automatically, but you know something is wrong (and Kerberos will not be used) if there is no WSMAN SPN for the device that is hosting the WinRM service.
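If you do find a domain member that is missing its WSMAN SPNs, you can register them against its computer account yourself; something like this (the server and domain names here are hypothetical):

setspn -S WSMAN/server02 server02
setspn -S WSMAN/server02.yourdomain.com server02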

OK, I am pooped. Time to take a break. Next time in Part II, we're going to focus on setting up SSL certificates to implement some real security to wrap up this experiment!

BlogEngine.NET, SimpleCaptcha, and Spam

I use BlogEngine.NET for this blog. I've loved it so far. It suits me perfectly because I also love .NET and C#.

BlogEngine.NET comes with a few "extensions" out of the box, and one of those extensions is called SimpleCaptcha. You simply configure it with a question and an answer. Visitors who supply the correct answer get to post comments. This wards off most of the spammers. But from what I'm seeing, whatever spammers use to automatically crawl the web, leaving little spam-filled coprolites in their wake, seems to be able to solve simple mathematical equations like 5+5, 3+7, and even (5+2)-1. I changed my captcha challenge to that latter equation and received a spam comment not five seconds later.

Maybe this will stop them...

So I figured the next best thing to do, without annoying and frustrating my visitors too much with those really bizarre graphical captchas that you can't even read half the time, was to change my SimpleCaptcha to something that was still simple, but required slightly more human-like thinking than what I suspect most spambots are capable of. Questions such as "what is the opposite of cold" or "a shape with four equal sides." These sorts of questions have brought my comment spam to a screeching halt. But there's one last problem: SimpleCaptcha is case sensitive and there's no immediately apparent way to turn it off. I don't want a visitor to type "Square" and not get their comment posted because they needed to have typed "square" instead.

So, to remedy this problem, simply access your web server and browse to wherever you have IIS/BlogEngine.NET installed. Then drill down to where SimpleCaptcha is. For me, it's C:\inetpub\wwwroot\App_Code\Extensions\SimpleCaptcha\. Open up the file SimpleCaptchaControl.cs in a text editor (or Visual Studio if you'd rather) and find this method:

public void Validate(string simpleCaptchaChallenge)
{
   this.valid = this.skipSimpleCaptcha || this.simpleCaptchaAnswer.Equals(simpleCaptchaChallenge);
}

Simply change that one line to this:

public void Validate(string simpleCaptchaChallenge)
{
   this.valid = this.skipSimpleCaptcha || this.simpleCaptchaAnswer.Equals(simpleCaptchaChallenge, StringComparison.OrdinalIgnoreCase);
}

And you've just made your SimpleCaptcha not case-sensitive. The change takes effect as soon as you save the file; no restarts of anything are required.

DNS 101: Round Robin (Or Back When I was Young And Foolish Part II)

I learned something today. It's something that made me feel stupid for not knowing. Something that seemed elemental and trivial - yet, I did not know it. So please, allow me to relay my somewhat embarrassing learning experience in the hopes that it will save someone else from the same embarrassment.

I did know what DNS round robin was. Or at least, I would have said that I did.

Imagine you configure a DNS server, DNS1, to use round robin. Then you create 3 host (A or AAAA) records for the same host name, each with a different IP. Let's say we create the following A records on DNS1:

server01 - A 10.0.0.4
server01 - A 10.0.0.5
server01 - A 10.0.0.6
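If you'd rather script those records than click through the DNS console, dnscmd on the DNS server can add them for you (the zone name here is made up):

dnscmd DNS1 /RecordAdd yourdomain.local server01 A 10.0.0.4
dnscmd DNS1 /RecordAdd yourdomain.local server01 A 10.0.0.5
dnscmd DNS1 /RecordAdd yourdomain.local server01 A 10.0.0.6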

Then, on a workstation configured to use DNS1 as its DNS server, you ping server01. You receive 10.0.0.4 as a reply. You ping server01 again. With no hesitation, you get a reply from 10.0.0.4 again. We assume that your workstation has cached 10.0.0.4 locally and will reuse that IP for server01 until the entry either expires or you flush the DNS cache on the workstation with a command like ipconfig /flushdns.

I run ipconfig /flushdns. Then I ping server01 again.

This time I receive a response from 10.0.0.5. Now I assume DNS round robin is working perfectly. I go home for the day feeling like I know everything there is to know about DNS.

But is the DNS server really responding to each query with only the single "next" A/AAAA record it has on file, cycling through them sequentially, round-robin style, for every DNS query that it receives? That is what I assumed.

But the fact of the matter is that DNS servers, when queried for a host name, actually return a list of all the A/AAAA records associated with that host name, every single time. (To a point - the list must fit within a UDP packet, and some firewalls/filters don't let UDP packets longer than 512 bytes through. That's changing, though; our idea of how big data is and should be allowed to be is always growing.)
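You can see this for yourself with nslookup against the server01 example from earlier. The responding server's name and address below are placeholders and the output is only a rough sketch, but the shape of the answer is the point: all three A records come back in a single response, and the order rotates on subsequent queries:

C:\> nslookup server01
Server:  dns1.yourdomain.local
Address:  10.0.0.2

Name:    server01.yourdomain.local
Addresses:  10.0.0.5
            10.0.0.6
            10.0.0.4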

I assume that www.google.com, being one of the busiest websites in the world, has not only some global load balancing and other advanced load balancing techniques employed, but probably also has more than one host record associated with it. To test my theory, I fire up Wireshark and start a packet capture. I then flush my local DNS cache with ipconfig/flushdns and then ping www.google.com.

Notice how I pinged it, got one IP address in response (.148), then flushed my DNS cache, pinged it again, and got a different IP address (.144)? But despite what it may look like, that name server is not returning just one A/AAAA record each time I query it.


My workstation is ::9. My workstation's DNS server is ::1. The DNS server is configured to forward DNS requests for zones for which it is not authoritative on to yet another DNS server. So I ask for www.google.com, my DNS server doesn't know, so it forwards the request. The forwardee finally finds out and reports back to my DNS server, which in turn relays back to me a list of all the A records for www.google.com. I get a long list containing not only a mess of A records, but a CNAME thrown in there too, all from a single DNS query! (We're not worried about the subsequent query made for an AAAA record right now. Another post perhaps.)

I replicated this in a sanitary lab environment running a Windows DNS server and confirmed the same behavior, using the server01 example I mentioned earlier.

Where round robin comes in is that it rotates the order of the list given to each subsequent client who requests it. Keep in mind that while round robin-ing the A records in your DNS replies does supply a primitive form of load distribution, it's a pretty poor substitute for real load balancing, since if one of the nodes in the list goes down, the DNS server will be none the wiser and will continue handing out the list with the downed node's IP address on it.

Lastly, since we know that our client receives an entire list of A records for host names that have many IP addresses, what does it actually do with the list? Well, the ping utility doesn't do much. If the first IP address on the list is down, you get a destination-unreachable message and that's it. (Leading to a lot of people not realizing they have a whole list of IPs they could try.) Web browsers, however, have a nifty feature known as "browser retry" or "client retry," where they will keep trying the other IPs in the list until they find a working one. Then they cache the working IP address so that the user does not keep experiencing the same delay in page loading as they did the first time. Yes, there are exploits concerning this feature, and yes, it's probably a bad idea to rely on it, since browser retry is implemented differently across every browser and operating system. It's a relatively new mechanism actually, and people may not believe you if you tell them. To prove it to them, find (or create) a host name which has several bad IPs and one or two good ones, then telnet to that host name. Even telnet (a modern version from a modern operating system) will use getaddrinfo() instead of gethostbyname(), and if it fails to connect to the first IP, you can watch it keep trying the next IPs in the list.
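If you want to see the "whole list" behavior from code, here's a rough PowerShell sketch that resolves every address for a name and then tries each one in turn until a TCP connection succeeds (the host name and port are placeholders):

$hostname = "server01"
$port     = 80

# Returns every A/AAAA record for the name, not just the first one:
$addresses = [System.Net.Dns]::GetHostAddresses($hostname)

foreach ($address in $addresses)
{
    $client = New-Object System.Net.Sockets.TcpClient
    try
    {
        $client.Connect($address, $port)
        Write-Host "Connected to $address"
        $client.Close()
        break
    }
    catch
    {
        Write-Host "$address did not answer, trying the next one..."
    }
}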

More info here, here and here. That last link is an MSDN doc on getaddrinfo(). Notice that it does talk about different implementations on different operating systems, and that ppResult is "a pointer to a linked list of one or more addrinfo structures that contains response information about the host."

Replacing Task Manager with Process Explorer

Alright, everyone's back from the holidays, and I for one am ready to get my nose back to the grindstone!

In this post, I want to talk about a fairly recent discovery for me: Mark Russinovich's Process Explorer, not to be confused with his Process Monitor. Process Explorer has been around for years and is still being kept current, so the fact that I had never really used it before now is a bit of an embarrassment, but I'm a total convert and won't go without it from now on. Hopefully I'll be able to convert someone else with this post.

First, there were two videos that I watched recently that were instrumental in convincing me to keep Process Explorer permanently in my toolbox. It's a two-part series of talks given by Russinovich himself about the "Mysteries of Windows Memory Management." The videos are a little on the technical side, but they're extremely detailed and in-depth and if you're interested in hearing one of the top NT gurus in the world explicate the finer intricacies of how Windows uses physical and virtual memory, then you need to watch these videos. They're quite long, so you may want to save them for later:

Part 1
Part 2

One of the prevailing themes in the videos is that Russinovich doesn't seem to care much for the traditional Task Manager. We all know and love taskmgr and the three-fingered salute required to bring it up. (The three-fingered salute, CTRL+ALT+DEL, is officially referred to as the Secure Attention Sequence. Some free trivia for you.) He explains how some of the labels in Task Manager - especially the ones concerning memory usage - are a bit misleading and/or inaccurate. (What is memory free versus memory available?) He then shows us how he uses Process Explorer in lieu of Task Manager, which gives us a much clearer and more accurate (and cooler looking) picture of all the processes running on the machine, the memory that they're using, the ways in which they're using it, the handles, DLLs and files the processes are using, and so much more.

It's basically better than the regular Windows Task Manager in every way... and the best part? You can easily "replace" Task Manager with it such that when you hit Ctrl+Alt+Del and choose to bring up the "Task Manager," Process Explorer actually launches instead!
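For the curious, the "Replace Task Manager" option in Process Explorer's Options menu accomplishes this by setting an Image File Execution Options debugger entry for taskmgr.exe, so Windows launches Process Explorer whenever Task Manager is requested. You could set the same value by hand; the path below is just wherever you happen to keep procexp.exe:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\taskmgr.exe" /v Debugger /t REG_SZ /d "C:\Tools\procexp.exe" /f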

Awesome, right? Process Explorer provides an enormous wealth of information where the vanilla Task Manager falls short. Part of me wants to post more screen shots of this program to show you more examples of what you can see and do with Process Explorer, but those videos by Russinovich himself do a better job of showing off exactly how the program works and what all of it means than I can. In the videos, you'll learn what a Working Set is, Private Bytes, Bytes Committed, what a Hard Fault is and how it differs from a Soft Fault, etc.

And as an added bonus, you can use this tool to troubleshoot the age-old conundrum of "what process is holding this file open so that I'm unable to delete it! Waaah!"

Needless to say, if you ever hit Ctrl+Alt+Del on one of my machines and hit Start Task Manager, Process Explorer is going to show up instead.