Windows XP, RDC7, Trusted Publishers, and You

Someone asked me for some help yesterday with a problem they were having at work. At their company they use Windows XP workstations, a 2003 Active Directory infrastructure, and *.rdp files that employees use to establish remote connections to other servers. XP was pretty nice when it came out, but today it's old and just not exciting anymore. Same goes for Server 2003. I mean, Windows 7 and 2008 R2 are both several years old by now and definitely proven technologies... but still, upgrading to a modern OS seems to be at the bottom of almost every company's list. Desktop admins around the globe are still puttering about supporting employees on WinXP, and server admins all over the world are still logging on to Server 2003 (or worse!) servers.

With the release of Windows 7 came the new Remote Desktop Client 7, which adds some nice new features and supports the new and interesting Group Policies that come with a 2008+ Active Directory. One such Group Policy is "Specify SHA1 thumbprints of certificates representing trusted .rdp publishers." Enabling this setting allows you, as the administrator, to specify a list of SHA1 hashes of certificates from publishers you consider trusted. When the recipient of this policy launches an *.rdp file that's signed by a certificate whose hash is on the list, the user will not get prompted with a warning. When you locate this setting in the (2008 and above) GPO editor, it plainly states that this policy is for "Windows Vista SP1 and above." The thing is, you can install RDC7 on Windows XP.

Here's the rest of the detail on the GPO settings from Technet: http://technet.microsoft.com/en-us/library/cc771261(WS.10).aspx.

Furthermore, a signed *.rdp file will have these two lines at the end:

signscope:s:Full Address,Server
signature:s:THISISANSHA1THUMBPRINT

The problem is that the aforementioned Group Policy setting doesn't exist on 2003 Domain Controllers.

Nevertheless, the effect of the newer 2008 policy should still work, since we've installed the new RDC7 client on the Windows XP machines. In theory. We just have to figure out how to deploy it. As it turns out, you can just navigate to this registry key on the client:

HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\PublisherBypassList

In Windows XP the PublisherBypassList key might not exist. Create it! Your SHA1 hashes go in as the names of 32-bit DWORD values: no spaces, all caps. (This could be done in either HKLM or HKCU. The hashes in HKCU are just added onto the ones loaded from HKLM... just like the description of the GPO setting says.)
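If you'd rather script the change than click around in regedit, here's a rough C# sketch of what I mean. The thumbprint below is obviously fake, and the DWORD data of 1 is my assumption - as far as I can tell, it's the value name (the hash) that matters:

using Microsoft.Win32;

class TrustRdpPublisher
{
    static void Main()
    {
        // Hypothetical SHA1 thumbprint of the certificate that signed the .rdp file
        const string thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

        // Create the key if it doesn't exist yet (on XP it usually doesn't)
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
            @"Software\Microsoft\Terminal Server Client\PublisherBypassList"))
        {
            // The value name is the hash; the type is a 32-bit DWORD
            key.SetValue(thumbprint, 1, RegistryValueKind.DWord);
        }
    }
}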

So even though you don't have that GPO setting in Server 2003 like you do in 2008, you can push a generic registry modification like this out to your clients and achieve the same effect.

And it works!

Replacing Task Manager with Process Explorer

Alright, everyone's back from the holidays, and I for one am ready to get my nose back to the grindstone!

In this post, I want to talk about a fairly recent discovery for me: Mark Russinovich's Process Explorer, not to be confused with his Process Monitor. Process Explorer has been around for years and is still being kept current, so the fact that I had never really used it before now is a bit of an embarrassment, but I'm a total convert and won't go without it from now on. Hopefully I'll be able to convert someone else with this post.

First, there were two videos I watched recently that were instrumental in convincing me to keep Process Explorer permanently in my toolbox. It's a two-part series of talks given by Russinovich himself about the "Mysteries of Windows Memory Management." The videos are a little on the technical side, but they're extremely detailed and in-depth, and if you're interested in hearing one of the top NT gurus in the world explicate the finer intricacies of how Windows uses physical and virtual memory, then you need to watch them. They're quite long, so you may want to save them for later:

Part 1
Part 2

One of the prevailing themes in the videos is that Russinovich doesn't seem to care much for the traditional Task Manager. We all know and love taskmgr and the three-fingered salute required to bring it up. (The three-fingered salute, CTRL+ALT+DEL, is officially referred to as the Secure Attention Sequence. Some free trivia for you.) He explains how some of the labels in Task Manager - especially the ones concerning memory usage - are a bit misleading and/or inaccurate. (What is memory free versus memory available?) He then shows us how he uses Process Explorer in lieu of Task Manager, which gives us a much clearer and more accurate (and cooler looking) picture of all the processes running on the machine, the memory that they're using, the ways in which they're using it, the handles, DLLs and files the processes are using, and so much more.

It's basically better than the regular Windows Task Manager in every way... and the best part? You can easily "replace" Task Manager with it such that when you hit Ctrl+Alt+Del and choose to bring up the "Task Manager," Process Explorer actually launches instead!

Awesome, right? Process Explorer provides an enormous wealth of information where the vanilla Task Manager falls short. Part of me wants to post more screenshots of this program to show you more examples of what you can see and do with Process Explorer, but those videos by Russinovich do a better job of showing off exactly how the program works and what it all means than I can. In the videos, you'll learn what a Working Set is, what Private Bytes and Committed Bytes are, what a Hard Fault is and how it differs from a Soft Fault, etc.

And as an added bonus, you can use this tool to troubleshoot the age-old conundrum of "what process is holding this file open so that I can't delete it?! Waaah!" (Process Explorer's Find Handle or DLL search - Ctrl+F - is the trick for that one.)

Needless to say, if you ever hit Ctrl+Alt+Del on one of my machines and choose Start Task Manager, Process Explorer is going to show up instead.
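How does the swap actually work? Process Explorer will set it up for you (Options > Replace Task Manager), and as far as I can tell all that does is write an Image File Execution Options "Debugger" value pointing at procexp.exe, so Windows launches Process Explorer whenever taskmgr.exe is requested. Here's a little C# sketch of the same idea - the path to procexp.exe is made up, and you'll need admin rights to write to HKLM:

using Microsoft.Win32;

class ReplaceTaskMgr
{
    static void Main()
    {
        // The Image File Execution Options key for taskmgr.exe
        const string ifeoKey =
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\taskmgr.exe";

        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(ifeoKey))
        {
            // Windows launches the "Debugger" program instead of taskmgr.exe itself
            key.SetValue("Debugger", @"C:\Tools\procexp.exe", RegistryValueKind.String);
        }
    }
}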

RSSInYourFace

Apropos of my last post, my latest creation is finished and needs some beta-testing now. It's called RSSInYourFace, and it's an RSS feed reader. Yes, I realize that there are already dozens of RSS feed readers out there, but this one is mine.

I wrote it so that I could stay current on all my favorite blogs and news sites without actually having to waste time surfing the web. This is particularly useful at work where you might want to appear as though you're doing something besides surfing the web all day. The gist of the program is that you supply a list of URLs to your favorite feeds, and the program will sit quietly in your system tray until something new gets posted from one of them, at which point it will alert you with a balloon popup about the new item. Subsequently clicking the balloon as it pops up will launch that website/news item in your default browser.
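To give you an idea of the plumbing involved, here's a minimal sketch of the tray-balloon-click pattern. This isn't necessarily what RSSInYourFace does internally, and the URL and text are made up:

using System;
using System.Diagnostics;
using System.Drawing;
using System.Windows.Forms;

static class BalloonDemo
{
    [STAThread]
    static void Main()
    {
        // Pretend the feed poller just found this new item
        string newItemUrl = "http://example.com/some-new-post";

        using (NotifyIcon tray = new NotifyIcon())
        {
            tray.Icon = SystemIcons.Information;   // any icon will do for a demo
            tray.Visible = true;

            // Clicking the balloon launches the item in the default browser
            tray.BalloonTipClicked += (s, e) => Process.Start(newItemUrl);

            tray.ShowBalloonTip(5000, "New post!", "Something new just showed up on one of your feeds.", ToolTipIcon.Info);

            Application.Run();   // keep the message loop alive so the balloon and click can happen
        }
    }
}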

It was written and tested on Windows 7 64-bit, but should work on 32-bit as well. .NET 4.0 is required.

I would really appreciate some feedback if you decide to use it. Try to break it, make it crash, etc. I'd really like to squash any bugs or inefficiencies that I haven't caught yet. Also feel free to make feature requests - I can almost certainly make it happen. My main concern right now is maintaining support for the myriad different formats of feeds out there. Imagine my dismay when I realized that there wasn't just one standard type of RSS feed. There's never just a standard...
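For what it's worth, .NET 4.0 does ship with a syndication API that copes with both RSS 2.0 and Atom 1.0, which takes some of the sting out of the format zoo. I'm not promising that's how RSSInYourFace does it under the hood - this is just a sketch with a made-up feed URL:

using System;
using System.ServiceModel.Syndication;
using System.Xml;

class FeedSketch
{
    static void Main()
    {
        using (XmlReader reader = XmlReader.Create("http://example.com/feed"))
        {
            // Load() figures out whether this is RSS 2.0 or Atom 1.0 for us
            SyndicationFeed feed = SyndicationFeed.Load(reader);

            foreach (SyndicationItem item in feed.Items)
            {
                Console.WriteLine("{0:d} - {1}", item.PublishDate, item.Title.Text);
            }
        }
    }
}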

Link to version 1.03 Release Build: RSSInYourFace1.0.3.exe (87.00 kb)
Link to version 1.02 Release Build: RSSInYourFace.1.02.exe (87.00 kb)

 

.NET Programming Love, Multi-Threadedness, and CPU Usage

I do not want to ever be known as a professional software developer.

It's not because I don't enjoy programming, and certainly not because I don't appreciate the people who do love being professional developers. I do enjoy it. It's fun. I've been writing computer programs since I was drawing multi-colored circles in QuickBasic at 14 years old, and I still find it so fun that I do it as a pastime for no other reason than pure personal pleasure.

It's not because "I'm a server guy, or a hardware guy, or an architecture guy, or a network guy... and so therefore I don't need to write programs."  I know plenty of hardware, network and server architecture guys that have entire life-long careers going and don't care to write a line of code ... sometimes not even a line of script!

I don't want to be a professional developer because I'm deathly afraid that if I were forced to code for a living, it might suck all the fun out of it. It's like playing poker for a living. Sure it's fun to play poker with your buddies on a Saturday night and drink and laugh and play ... but what if you needed to play poker to earn a living? What if your performance in Texas Hold 'Em dictated how well you ate or if you made your rent for the month?  All of a sudden, Texas Hold 'Em isn't so fun any more...

You might hear me reiterate that sentiment again throughout the course of this blog, as (at this point in my life) I feel it is one of the defining sentiments of my career as an IT professional.

So, with all that said, I want to share some programming goodness that I encountered tonight, simply as an enthusiast and an amateur programmer. You are more than welcome to comment on how amateurish and unevolved I am as a programmer. I encourage it, actually, as there is no better way for me to learn than by accepting constructive criticism.

I really like .NET. I love it. And what makes that very ironic is that years ago, I used to denounce languages such as Java because of the JVM. "Why not program against the machine itself? It would obviously be more efficient," I used to say. Well, years later, here I am professing my love for .NET - a CLR-based platform, which basically makes it the Microsoft equivalent of Java! I get the feeling that Russinovich and his ilk don't have much respect for us CLR/JVM coders because we operate in a managed sandbox environment and don't have to worry as much about things like byte-by-byte memory allocation... and god forbid we forget to de-allocate something.

But you know what? .NET gives me time to focus on solving business problems, instead of agonizing over every last little memory leak.  That's what makes .NET the perfect "Systems Engineer's" or "Systems Administrator's" language. I don't care about typecasting every last delegated sub type as an enum... and such as.  (Source: Miss South Carolina)

.NET just lets me focus on doing things and solving problems.

Tonight, I was working on a little program for myself, as opposed to something for the company where I work. Just something that I was personally interested in. And I had only recently really grasped the concept of multi-threading my .NET applications. It's something I had been struggling with for a long time... and I'm not ashamed to admit that, because there are enterprise-grade applications I use at work today - applications our company pays a lot of money for - where the GUI freezes up hard for several minutes while the application downloads data, leaving you staring at a non-responsive window... And now I know that I can do better than that.

I guess I'll add that to the list of things they should be paying me for, eh?

It's not that creating parallel threads is difficult. Least of all in .NET. You can find hundreds of examples out there on Google right now. The tricky part is knowing when and where to put threads in your program, how to use them effectively, and how to design the rest of your application to work with them. It's a design problem, not a technical one.

It's the C# BackgroundWorker that really got me into designing good, multi-threaded GUIs.

// Create the worker and wire up its events before setting it loose
BackgroundWorker workerEngine = new BackgroundWorker();
workerEngine.WorkerReportsProgress = true;
workerEngine.ProgressChanged += new ProgressChangedEventHandler(worker_ProgressChanged);
workerEngine.DoWork += new DoWorkEventHandler(worker_DoWork);
workerEngine.RunWorkerAsync(); // worker_DoWork now runs on a separate thread

And now you're executing this code in a parallel thread:

void worker_DoWork(object sender, DoWorkEventArgs e)
{
    // do some stuff
    // Sending "progress report" to the worker_ProgressChanged method
    ((BackgroundWorker)sender).ReportProgress(0, "I'm still working hard over here!"); 
}

Which can send "updates" to this method:

void worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    // Sending results back to the main/GUI thread
    outputListBox.Items.Add((string)e.UserState);
}

Which can in turn interact with your GUI. You can do interesting things like populate a listbox on your GUI with updates from the background worker. See, that's one of the interesting challenges for us novice coders trying to get into multi-threaded programming... All of your GUI elements exist on one thread, and you cannot manipulate them, or even get any information from them, from another thread. So the difficulty is in creating a bridge (or a delegate, if you will) between your threads, which we have now done with a BackgroundWorker object.
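By the way, the BackgroundWorker is doing that bridging for you behind the scenes. If you ever need to do it by hand, the raw version looks something like this - a minimal sketch using Control.Invoke, not something you need when ReportProgress is doing the work:

// Called from the background thread; hands the string over to the GUI thread
void AddItemFromWorker(string message)
{
    if (outputListBox.InvokeRequired)
    {
        // We're on the wrong thread - ask the GUI thread to run this method for us
        outputListBox.Invoke(new Action<string>(AddItemFromWorker), message);
    }
    else
    {
        outputListBox.Items.Add(message);
    }
}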

Now, when you run the BackgroundWorker example above, be aware that your worker thread will try to run as fast as he possibly can, which means he can eat up CPU time.

Watch the CPU graph while it runs: that's my background worker thread doing nothing but spinning in an infinite loop, causing significant CPU spikes on every core. At least the GUI remains responsive. Now let's stick a System.Threading.Thread.Sleep(100) (milliseconds) in the background thread so that perhaps he'll pace himself.
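Something like this, where the loop body is just a stand-in for whatever the worker is actually doing:

void worker_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = (BackgroundWorker)sender;

    while (true)   // stand-in for the real work loop
    {
        // do some stuff
        worker.ReportProgress(0, "I'm still working hard over here!");

        // Give the CPU a breather between iterations (100 milliseconds)
        System.Threading.Thread.Sleep(100);
    }
}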

Much better! Now our background thread isn't hogging up the CPU, and our main GUI thread stays responsive and smooth throughout.

And one last thing - never put Thread.Sleep(x) anywhere on your GUI thread. It will obviously cause your GUI to be jerky and unresponsive while it's sleeping, which makes for an awful user experience.

Oh, and Merry Christmas. Hope Santa brings you what you wanted.

The Page File

How tired is this topic? I don't want to be "reheating boring leftovers," as Ned puts it, but maybe this post will help finally put it to bed for anyone still wondering.

Ever since I became an "NT geek" a little over 15 years ago, there has always seemingly been so much mystery and intrigue shrouding "the page file," also known as the "swap file." Windows does not really do "swapping" anymore, so I will only refer to it as a page file from here on out. Still to this day, in 2011, people commonly ask me what size their page file should be. "Does my page file need to be this big? What happens if I shrink it? Can I put it on another logical drive? Can I disable it?" And naturally, the Web can be a cesspool of misinformation, which only serves to further confuse people and push them toward "rules of thumb" that have no real reasoning behind them. To add to this mess, the exact ways in which Windows uses the page file have changed slightly over the years as Windows has evolved and the average amount of RAM in our machines has increased. (*cough* Superfetch *cough*)

I'm focusing on Windows Vista and later here. (Win7, Server 2008, R2, etc.)

Just follow the "Advanced"s to get there (System Properties > Advanced tab > Performance Settings > Advanced tab > Virtual Memory > Change).

First I want to clear something up: forget any "rules" you have ever heard about how the page file should be 1.5x or 2.0x or 3.14159x the amount of RAM you have. Any such formula is basically useless and doesn't scale to modern systems with widely varying amounts of RAM. Get your vestigial 20th-century old wives' tales out of my Windows.

Alright, sorry about that. I'm just really tired of hearing those rules of thumb about page file sizing. The only part that sort of embarrasses me is that Windows itself still uses a formula like this if you let it manage the page file for you. Older versions of Windows pick the size like so:

System Memory    Minimum Page File    Maximum Page File
< 1 GB           1.5 * RAM            3 * RAM
>= 1 GB          1 * RAM              3 * RAM

Windows 7 and 2008 R2 set the paging file size to the amount of RAM + 300MB.

Furthermore, the page file is dynamic by default and can expand on demand. And for the most part, that still works just fine for the average home PC user. But if you have a server with 192GB of RAM, Windows will by default create a nice, fat 192GB page file for you. Our "rule of thumb" now looks absurd.

No, you do not need (or want) a 192GB page file gobbling up space on your nice 15k SAS drives. The only reason you might ever want such a huge page file is if you're interested in generating full crash dumps (full memory dumps) when the machine crashes - for that, you need a page file that is the size of RAM plus a couple hundred megabytes. In later versions of Windows you can use a dedicated crash dump file to store memory dumps instead, further decreasing the need for a huge page file. Also, you'll still get minidumps, which can contain useful information about system crashes, even if your page file is too small to support a full memory dump.
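If you do go the dedicated dump file route so the page file itself can stay small, it's just a couple of values under the CrashControl key. A quick C# sketch - the path and the size (in MB) are made-up examples, it needs admin rights, and I believe a reboot is required before it takes effect:

using Microsoft.Win32;

class DedicatedDump
{
    static void Main()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SYSTEM\CurrentControlSet\Control\CrashControl", true))
        {
            // Where the dedicated dump file lives, and how big it's allowed to get (in MB)
            key.SetValue("DedicatedDumpFile", @"D:\DedicatedDumpFile.sys", RegistryValueKind.String);
            key.SetValue("DumpFileSize", 200000, RegistryValueKind.DWord);
        }
    }
}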

Other types of crash dumps are explained here.

The truth is the more RAM you have, the less page file you need - with a couple of stipulations.

The answer to "how big should the page file be?" is "just big enough so that you don't run out of committable memory." The amount of memory that your system can commit is equal to your RAM + page file. Put your system under a heavy workload and use Performance Monitor to see how much memory you're committing; you can also use Process Explorer by Mark Russinovich and watch the peak commit charge. Just make sure that your RAM + page file can cover that peak at all times. Otherwise your page file will be forced to expand if it can, and if it can't expand, your application might hang, or crash, or any number of systemic weirdnesses will occur.
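To put some numbers on it: a box with 16GB of RAM and a 4GB page file has a commit limit of 20GB, so a workload that peaks at 18GB committed is fine, while a 22GB peak would force the page file to grow (or fail). And if you'd rather watch those numbers from code instead of Performance Monitor, here's a rough sketch that polls the same counters PerfMon uses:

using System;
using System.Diagnostics;
using System.Threading;

class CommitWatch
{
    static void Main()
    {
        // The same counters you'd add in Performance Monitor
        PerformanceCounter committed = new PerformanceCounter("Memory", "Committed Bytes");
        PerformanceCounter limit = new PerformanceCounter("Memory", "Commit Limit");

        while (true)
        {
            double committedGB = committed.NextValue() / 1024 / 1024 / 1024;
            double limitGB = limit.NextValue() / 1024 / 1024 / 1024;

            // The commit limit is RAM + current page file size
            Console.WriteLine("Commit charge: {0:F1} GB of {1:F1} GB", committedGB, limitGB);

            Thread.Sleep(5000);   // arbitrary polling interval
        }
    }
}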

Unfortunately, Superfetch complicates this test, because it goes around in the background pre-loading things into memory from disk all the time, so it can make it seem like you're running low on memory. In reality, a lot of that is Superfetch putting your memory to work, and it's actually really good at reducing disk seeks on PCs. But Superfetch is turned off by default if Windows was installed on an SSD (because seek times are near zero on an SSD anyway), and it's also disabled in Windows Server.

Also keep in mind that certain applications such as MSSQL and LSASS have their own memory management systems that can operate outside the purview of the NT Memory Management system, which can lead to things like those applications hogging up more than their fair share of memory.

 

[Screenshot: Process Explorer by Mark Russinovich]

On the other hand, don't disable the page file completely, even if you have 192GB of RAM. (You should be able to move it off of the system drive with no ill effects, though, unless you're dealing with a very poorly written application.) Windows can run perfectly fine with no page file if you have plenty of RAM; however, some applications may not. Some third-party applications, and even some MSFT products like AD DS and SQL, may simply assume that there is a page file, and if there isn't one, you can get unpredictable behavior. (Read: hard-to-troubleshoot headaches.)

On the other other hand, keep in mind that having a huge page file (like 192GB) will affect system performance if your system actually has to use it a lot. Just having a large file on your disk won't affect performance by virtue of it being there, but if the system is constantly pushing memory pages out to that file, it will. (And if you find yourself pushing memory pages to a huge file on disk very often, you obviously needed more RAM a long time ago.)

Lastly - yes, there are some "page file optimization" techniques that still apply, such as striping the page file across multiple spindles, setting a static size to eliminate file system fragmentation, etc. However, with RAM being as inexpensive as it is these days, your main concern should be minimizing having to touch the page file at all anyway.