WTF is a GitHub?

by Ryan 7. April 2014 13:04

I got my start in IT doing solely operations stuff: swapping out bad hard drives, terminating my own CAT5 cables (I still carry a TIA-568 wiring diagram in my wallet in case I forget), administering Active Directory, doing file server migrations... things like that.  And I still do mostly operations stuff today.  But I've also been playing around with programming since I was 14 years old, and I still do, even though my current job doesn't usually call for it.  Most of the time coding is just a hobby for me, but when there's a specific problem at work that I think I can solve with a little of my own programming elbow grease, I'm all over it.  So I guess that makes me a natural participant in the DevOps movement, which is interesting because the age-old friction between IT pros and developers is still present.  On one hand, I get the idea of "don't half-ass two things; whole-ass one thing."  On the other hand, I think something that will ease that tension between Dev and Ops is the realization that developers aren't just developing boxed products anymore, but services.  Everything's a perpetual service now.  Which means the code and the hardware running that code have to evolve together, at the same time and with the same goal. (Case in point: the Open Compute Project, wherein we see entire datacenters being designed around the idea of running these super-scalable cloud workloads.)

Which means the developers and the IT pros need to hold hands and sing Kumbaya.

So Microsoft has been announcing tons of interesting things the past few weeks, such as Powershell 5 with OneGet, the open-sourcing of Roslyn (the C# and VB compiler platform), the introduction of .NET Native, etc.  And that got me in a bit of a mood for software the past few days.  Something I was way behind on was Git and GitHub.  I didn't know what they were, how they worked, how to integrate them into Visual Studio, etc.  So I've taken the past couple of days to educate myself, and the result is that I now have a public GitHub account.  I've only uploaded a few of my old personal projects so far - I will likely upload way more in the future.  I also have a ton of Powershell stuff built up that I'll probably just make a single repository for.

Don't be too hard on me if you're going to scrutinize my code. I'm self-taught and I feel like I'm pretty terrible... but maybe I'm my own biggest critic.  I'm like the fat guy at the gym with all the other people with amazing bodies thinking to themselves "well at least he's trying..."

Git was originally created by Linus Torvalds as a distributed version control system.  GitHub is a service where you can host your Git repositories, and public repositories are free!  You use Git to push source code to GitHub.  GitHub has established itself as the premier service of its kind, which is evidenced by the fact that Visual Studio now integrates seamlessly with it and you can push commits to your branches (<- see me speaking the lingo there?) directly from within VS on your desktop.
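If you're also catching up, the day-to-day workflow boils down to a handful of commands (a sketch - the project folder name and repository URL here are hypothetical):

```shell
# One-time: turn an existing project folder into a Git repository.
cd MyProject
git init
git add .
git commit -m "Initial commit"

# Point it at a GitHub repository (hypothetical URL) and push your commits up.
git remote add origin https://github.com/example/MyProject.git
git push -u origin master
```

After that first push, the loop is just edit, `git add`, `git commit`, `git push` - which is essentially what the Visual Studio integration is doing for you behind the scenes.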


To Scriptblock or Not to Scriptblock, That is the Question

by Ryan 27. March 2014 17:03

I was doing some work with the Active Directory Powershell cmdlets recently.  Well, I work with them almost every day, but they still get me with their idiosyncrasies from time to time.

I needed to check some group memberships on various privileged groups within the directory.  I'll show you an abridged version of the code I started with to get the point across, the idea of which is that I iterate through a collection of groups (a string array) and perform some actions on each of the Active Directory groups in sequence:

Foreach($ADGroupName In [String[]]'Domain Admins',     `
                                  'Enterprise Admins', `
                                  'Administrators',    `
                                  'Account Operators', `
                                  'Backup Operators')
{
    $Group = Get-ADGroup -Filter { Name -EQ $ADGroupName } -Properties Members
    If ($Group -EQ $Null -OR $Group.PropertyNames -NotContains 'Members')
    {
        # This condition only occurs on the first iteration of the Foreach loop!
        Write-Error "$ADGroupName was null or was missing the Members property!"
    }
    Else
    {
        Write-Host "$ADGroupName contains $($Group.Members.Count) members." 
    }
}

Before I continue, I'd just like to mention that I typically do not mind very long lines of code nor do I avoid verbose variable names, mostly because I'm always using a widescreen monitor these days.  Long gone are the days of the 80-character-wide terminal.  And I agree with Don Jones that backticks (the Powershell escape character) are to be avoided on aesthetic grounds, but for sharing code in formats that are less conducive to long lines of code, such as this blog or a StackExchange site with their skinny content columns, I'll throw some backticks in to keep the horizontal scrolling to a minimum. 

Anyway, the above code exhibited the strangest bug.  At least I'd call it a bug. (I'll make sure to let Jeffrey Snover know next time I see him. ;P)  Only on the first iteration of the Foreach loop, I would get the "error" condition instead of the expected "Domain Admins contains 5 members" output.  The remaining iterations all behaved as expected.  It did not matter in what order I rearranged the list of AD groups; I always got an error on the first element in the array.

For a moment, I settled on working around the "bug" by making a "Dummy Group," including that as the first item in the array, gracefully handling the expected exception because Dummy Group did not exist, and then continuing normally with the rest of the legitimate groups.  This worked fine, but it didn't sit well with me.  Not exactly my idea of production-quality code.  I wanted to find the root cause of this strange behavior.

Stackoverflow has made me lazy.  Apparently I go to Serverfault when I want to answer questions, and Stackoverflow when I want other people to answer questions for me.  Simply changing line 7 above to this:

$Group = Get-ADGroup -Filter "Name -EQ '$ADGroupName'" -Properties Members

Made all the difference.  That is, using a string with an expandable variable inside it instead of the script block for a filter.  (Which is itself a little confusing, since single quotes (') usually indicate non-expandable strings - though here they're inside a double-quoted string, so the variable still expands.  Just syntax to remember when playing with these cmdlets.)

Nevertheless, if the code will not work correctly with a script block, I wish the parser would mark it as a syntax error instead of acting weird.  (The behavior exists in both PS 2 and PS 4, though in PS 4 the missing property is just ignored and I get 0 members, which is even worse.)
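(Incidentally, when you already know the exact group name, you can sidestep -Filter quoting altogether with -Identity - a sketch, which obviously needs a real domain and the RSAT ActiveDirectory module to run:)

```powershell
# -Identity accepts a name, distinguished name, GUID, or SID directly;
# no filter string parsing is involved at all.
Import-Module ActiveDirectory

Foreach ($ADGroupName In @('Domain Admins', 'Enterprise Admins', 'Administrators'))
{
    $Group = Get-ADGroup -Identity $ADGroupName -Properties Members
    Write-Host "$ADGroupName contains $($Group.Members.Count) members."
}
```

One behavioral difference to note: -Identity raises an error when the group doesn't exist, whereas -Filter just quietly returns nothing.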

Powershell, Panchromatic Edition, Continued!

by Ryan 7. March 2014 18:03

That is a weird title.  Anyway, this post is a continuation of my last post, here, in which I used Powershell to create a bitmap that contained each and every RGB color (24-bit, ~16.7 million colors) exactly once.  We learned that dynamic arrays and the += operator are often not a good choice when working with large amounts of data that you'd like to see processed before your grandchildren graduate high school.  Today we look at another performance pitfall.

So last time, I printed a 4096x4096 bitmap containing 16777216 colors. But the pixels were printed out in a very uniform, boring manner.  I wanted to at least see if I could randomize the pixels a little bit to make the image more interesting.  First, I attempted to do that like this:

Function Shuffle([System.Collections.ObjectModel.Collection[System.Drawing.Color]]$List)
{
    $NewList = New-Object 'System.Collections.ObjectModel.Collection[System.Drawing.Color]'
    While ($List.Count -GT 0)
    {
        [Int]$RandomIndex = Get-Random -Minimum 0 -Maximum $List.Count                
        $NewList.Add($List[$RandomIndex])
        $List.RemoveAt($RandomIndex)
        Write-Progress -Activity "Randomizing Pixels..." -Status "$($NewList.Count) of 16777216"
    }
    Return $NewList
}

Seems pretty solid, right?  I intend to shuffle or randomize the neatly ordered list of pixels that I've generated.  So I pass that neatly ordered list to a Shuffle function.  The Shuffle function randomly plucks an element out of the original list one at a time, inserts it into a new "shuffled" list, then removes the original element from the old list so that it is not reused. Finally, it returns the new shuffled list.

Yeah... so that runs at about 12 pixels per second.

So instead of waiting 16 days for that to complete (16.7 million elements at 12 per second...), I decided that I had to come up with a better solution.  I thought on it, and I almost resorted to writing a pure C# type and adding that to my script using Add-Type, but then I decided that would be "cheating," since I wanted to write this in Powershell as best I could.
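For reference, the textbook fix for the slow remove-and-reinsert approach above is an in-place Fisher-Yates shuffle: one swap per element, no RemoveAt (which has to shift every later element down a slot), and no per-iteration progress bar. A sketch:

```powershell
# In-place Fisher-Yates shuffle: walk backwards through the array, swapping
# each element with a randomly chosen element at or before it. O(n) total.
Function Shuffle-InPlace([Object[]]$List)
{
    For ([Int]$i = $List.Count - 1; $i -GT 0; $i--)
    {
        # Get-Random's -Maximum is exclusive, so this picks an index in 0..$i.
        [Int]$j = Get-Random -Minimum 0 -Maximum ($i + 1)
        $Temp     = $List[$i]
        $List[$i] = $List[$j]
        $List[$j] = $Temp
    }
    Return $List
}
```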

Then it suddenly hit me: maybe I was thinking about it this way too hard.  Let's try something crazy:

Write-Progress -Activity "Randomizing Pixels" -Status "Please wait..."
$RandomPixelList = $AllRGBCombinations | Get-Random -Count $AllRGBCombinations.Count

Done in about two minutes, which beats the hell out of 16 days.  What we have now is a "randomized" list of pixels. Let's paint them and see how it looks:

A slice at 1x magnification:

A slice at 6x magnification:

I call it the "Cosmic Microwave Background."

You'll likely see a third installment in this series as I work some interesting algorithm into the mix so that the image is more interesting than just a random spray of pixels.  Until then...

Powershell Dynamic Arrays Will Murder You, Also... A Pretty Picture! (Part 1 of X)

by Ryan 5. March 2014 20:31

I was browsing the web a couple days ago and I saw this post, which I thought was a fun idea.  I didn't want to look at his code though, because, kind of like reading movie spoilers, I wanted to see if I could do something similar myself first before spoiling the fun of figuring it out on my own. The idea is that you create an image that contains every RGB color, using each color only once.

Plus I decided that I wanted to do it in Powershell, because I'm a masochist.

There are 256 x 256 x 256 RGB colors (in a 24-bit spectrum.) That equals 16777216 colors. Since each color will be used only once, if I only use 1 pixel per color, a 4096 x 4096 image would be capable of containing exactly 16777216 different colors.

First I thought to just generate a random color, draw the pixel in that color, then add it to a list of "already used" colors, and then on the next iteration, just keep generating random colors in a loop until I happen upon one that wasn't in my list of already used colors. But I quickly realized this would be horribly inefficient and slow.  Imagine: to generate that last pixel, I'd be running through the loop all day hoping for the 1 in ~16.7 million chance that I got the last remaining color that hadn't already been used. Awful idea.
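Just how awful? This is the classic coupon collector problem: covering all n distinct colors with uniform random draws takes roughly n * ln(n) draws on average, with the last few colors eating almost all of that time. A quick back-of-the-envelope:

```powershell
# Expected number of uniform random draws needed to see all n colors at
# least once is approximately n * ln(n) (the coupon collector problem).
$n = 256 * 256 * 256                          # 16777216 colors
$draws = [Math]::Round($n * [Math]::Log($n))  # ~279 million draws
'~{0:N0} random draws expected to cover all {1:N0} colors' -f $draws, $n
```

Nearly 280 million draws, each of which also has to search the "already used" list - so the idea is doomed well before the last pixel.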

So instead let's just generate a big fat non-random array of all 16777216 RGB colors:

Add-Type -AssemblyName System.Drawing   # Required for [System.Drawing.Color]
$AllRGBCombinations = @()
For ([Int]$Red = 0; $Red -LT 256; $Red++)
{
    For ([Int]$Green = 0; $Green -LT 256; $Green++)
    {
        For ([Int]$Blue = 0; $Blue -LT 256; $Blue++)
        {
            $AllRGBCombinations += [System.Drawing.Color]::FromArgb($Red, $Green, $Blue)
        }
    }
}

That does generate an array of 16777216 differently-colored and neatly-ordered pixel objects... but guess how long it takes?

*... the following day...*

Well, I wish I could tell you, but I killed the script after it ran for about 20 hours. I put a progress bar in just to check that it wasn't hung or in an endless loop, and it wasn't... the code is just really that slow.  It starts out at a decent pace and then gradually slows to a crawl.

Ugh, those dynamic arrays and the += operator strike again. I suspect it's because the above method recreates and resizes the array every iteration... like good ole' ReDim back in the VBscript days.  It may be handy for small bits of data, but if you're dealing with large amounts of data, that you want processed this decade, you better strongly type your stuff and use lists.  Let's try the above code another way:

Add-Type -AssemblyName System.Drawing   # Required for [System.Drawing.Color]
$AllRGBCombinations = New-Object 'System.Collections.ObjectModel.Collection[System.Drawing.Color]'
$PixelsGenerated = 0
For ([Int]$Red = 0; $Red -LT 256; $Red++)
{
    For ([Int]$Green = 0; $Green -LT 256; $Green++)
    {        
        For ([Int]$Blue = 0; $Blue -LT 256; $Blue++)
        {
            $AllRGBCombinations.Add([System.Drawing.Color]::FromArgb($Red, $Green, $Blue))
        }
    }
    $PixelsGenerated += 65536
    Write-Progress -Activity "Generating Pixels..." -Status "$PixelsGenerated / 16777216" -PercentComplete (($PixelsGenerated / 16777216) * 100)
}

Only 5.2 seconds in comparison, including the added overhead of writing the progress bar. Notice how I only update the progress bar once every 256 * 256 pixels, because it will slow you down a lot if you try to update the progress bar after every single pixel is created.
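If you want to see the array-versus-list difference on a smaller scale yourself, Measure-Command makes for a quick comparison (a sketch; the element count is arbitrary and timings will vary by machine):

```powershell
$n = 10000

# Appending to a dynamic array re-allocates and copies the whole array on
# every pass, so this loop is O(n^2) overall.
(Measure-Command {
    $Array = @()
    For ($i = 0; $i -LT $n; $i++) { $Array += $i }
}).TotalSeconds

# A strongly-typed list appends in amortized constant time: O(n) overall.
(Measure-Command {
    $List = New-Object 'System.Collections.Generic.List[Int]'
    For ($i = 0; $i -LT $n; $i++) { $List.Add($i) }
}).TotalSeconds
```

Even at a mere 10,000 elements the gap is obvious, and it only widens as n grows.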

Now I can go ahead and generate an image containing exactly one of every color that looks like this:

Yep, there really are 16.7 million different colors in there, which is why even a shrunken PNG of it is 384KB.  Hard to compress an image when there are NO identical colors! The original 4096x4096 bitmap is ~36MB.  And I ended up loading a shrunken and compressed JPG for this blog post, because I didn't want every page hit consuming a meg of bandwidth.

It kinda' makes you think about how limited and narrow a human's vision is, doesn't it?  24-bit color seems to be fine for us when watching movies or playing video games, but that image doesn't seem to capture how impressive our breadth of vision should be.

Next time, we try to randomize our set of pixels a little, and try to make a prettier, less computerized-looking picture... but still only using each color only once.  See you soon.

Verifying RPC Network Connectivity Like A Boss

by Ryan 16. February 2014 10:02

Aloha.  I did some fun Powershelling yesterday and now it's time to share.

If you work in an IT environment that's of any significant size, chances are you have firewalls.  Maybe lots and lots of firewalls. RPC can be a particularly difficult network protocol to work with when it comes to making sure all the ports necessary for its operation are open on your firewalls. I've found that firewall guys sometimes have a hard time allowing the application guy's RPC traffic through their firewalls because of its dynamic nature. Sometimes the application guys don't really know how RPC works, so they don't really know what to ask of the firewall guys.  And to make it even worse, RPC errors can be hard to diagnose.  For instance, the classic RPC error 1722 (0x6BA) - "The RPC server is unavailable" sounds like a network problem at first, but can actually mean access denied, or DNS resolution failure, etc.

MSRPC, or Microsoft Remote Procedure Call, is Microsoft's implementation of DCE (Distributed Computing Environment) RPC. It's been around a long time and is pervasive in an environment containing Windows computers. Tons of Windows applications and components depend on it.

A very brief summary of how the protocol works: There is an "endpoint mapper" that runs on TCP port 135. You can bind to that port on a remote computer anonymously and enumerate all the various RPC services available on that computer.  The services may be using named pipes or TCP/IP.  Named pipes will use port 445.  The services that are using TCP are each dynamically allocated their own TCP ports, which are drawn from a pool of port numbers. This pool of port numbers is by default 1024-5000 on XP/2003 and below, and 49152-65535 on Vista/2008 and above. (The ephemeral port range.) You can customize that port range that RPC will use if you wish, like so:

reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /f /d 8000-9000
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /f /d Y
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /f /d Y

And/Or

netsh int ipv4 set dynamicport tcp start=8000 num=1001
netsh int ipv4 set dynamicport udp start=8000 num=1001
netsh int ipv6 set dynamicport tcp start=8000 num=1001
netsh int ipv6 set dynamicport udp start=8000 num=1001

This is why we have to query the endpoint mapper first, because we can't just guess exactly which port we need to connect to for a particular service.

So, I wrote a little something in Powershell that will test the network connectivity of a remote machine for RPC, by querying the endpoint mapper, and then querying each port that the endpoint mapper tells me that it's currently using.


#Requires -Version 3
Function Test-RPC
{
    [CmdletBinding(SupportsShouldProcess=$True)]
    Param([Parameter(ValueFromPipeline=$True)][String[]]$ComputerName = 'localhost')
    BEGIN
    {
        Set-StrictMode -Version Latest
        $PInvokeCode = @'
        using System;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        public class Rpc
        {
            // I found this crud in RpcDce.h

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcBindingFromStringBinding(string StringBinding, out IntPtr Binding);

            [DllImport("Rpcrt4.dll")]
            public static extern int RpcBindingFree(ref IntPtr Binding);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcMgmtEpEltInqBegin(IntPtr EpBinding,
                                                    int InquiryType, // 0x00000000 = RPC_C_EP_ALL_ELTS
                                                    int IfId,
                                                    int VersOption,
                                                    string ObjectUuid,
                                                    out IntPtr InquiryContext);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcMgmtEpEltInqNext(IntPtr InquiryContext,
                                                    out RPC_IF_ID IfId,
                                                    out IntPtr Binding,
                                                    out Guid ObjectUuid,
                                                    out IntPtr Annotation);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcBindingToStringBinding(IntPtr Binding, out IntPtr StringBinding);

            public struct RPC_IF_ID
            {
                public Guid Uuid;
                public ushort VersMajor;
                public ushort VersMinor;
            }

            public static List<int> QueryEPM(string host)
            {
                List<int> ports = new List<int>();
                int retCode = 0; // RPC_S_OK                
                IntPtr bindingHandle = IntPtr.Zero;
                IntPtr inquiryContext = IntPtr.Zero;                
                IntPtr elementBindingHandle = IntPtr.Zero;
                RPC_IF_ID elementIfId;
                Guid elementUuid;
                IntPtr elementAnnotation;

                try
                {                    
                    retCode = RpcBindingFromStringBinding("ncacn_ip_tcp:" + host, out bindingHandle);
                    if (retCode != 0)
                        throw new Exception("RpcBindingFromStringBinding: " + retCode);

                    retCode = RpcMgmtEpEltInqBegin(bindingHandle, 0, 0, 0, string.Empty, out inquiryContext);
                    if (retCode != 0)
                        throw new Exception("RpcMgmtEpEltInqBegin: " + retCode);
                    
                    do
                    {
                        IntPtr bindString = IntPtr.Zero;
                        retCode = RpcMgmtEpEltInqNext (inquiryContext, out elementIfId, out elementBindingHandle, out elementUuid, out elementAnnotation);
                        if (retCode != 0)
                        {
                            if (retCode == 1772) // RPC_X_NO_MORE_ENTRIES
                                break;
                            throw new Exception("RpcMgmtEpEltInqNext: " + retCode);
                        }

                        retCode = RpcBindingToStringBinding(elementBindingHandle, out bindString);
                        if (retCode != 0)
                            throw new Exception("RpcBindingToStringBinding: " + retCode);
                            
                        string s = Marshal.PtrToStringAuto(bindString).Trim().ToLower();
                        if(s.StartsWith("ncacn_ip_tcp:"))                        
                            ports.Add(int.Parse(s.Split('[')[1].Split(']')[0]));
                        
                        RpcBindingFree(ref elementBindingHandle);
                        
                    }
                    while (retCode != 1772); // RPC_X_NO_MORE_ENTRIES

                }
                catch(Exception ex)
                {
                    Console.WriteLine(ex);
                    return ports;
                }
                finally
                {
                    RpcBindingFree(ref bindingHandle);
                }
                
                return ports;
            }
        }
'@
    }
    PROCESS
    {
        ForEach($Computer In $ComputerName)
        {
            If($PSCmdlet.ShouldProcess($Computer))
            {
                [Bool]$EPMOpen = $False
                $Socket = New-Object Net.Sockets.TcpClient
                
                Try
                {                    
                    $Socket.Connect($Computer, 135)
                    If ($Socket.Connected)
                    {
                        $EPMOpen = $True
                    }
                    $Socket.Close()                    
                }
                Catch
                {
                    $Socket.Dispose()
                }
                
                If ($EPMOpen)
                {
                    If (-Not ('Rpc' -As [Type]))
                    {
                        Add-Type $PInvokeCode
                    }
                    $RPCPorts = [Rpc]::QueryEPM($Computer)
                    [Bool]$AllPortsOpen = $True
                    Foreach ($Port In $RPCPorts)
                    {
                        $Socket = New-Object Net.Sockets.TcpClient
                        Try
                        {
                            $Socket.Connect($Computer, $Port)
                            If (!$Socket.Connected)
                            {
                                $AllPortsOpen = $False
                            }
                            $Socket.Close()
                        }
                        Catch
                        {
                            $AllPortsOpen = $False
                            $Socket.Dispose()
                        }
                    }

                    [PSCustomObject]@{'ComputerName' = $Computer; 'EndPointMapperOpen' = $EPMOpen; 'RPCPortsInUse' = $RPCPorts; 'AllRPCPortsOpen' = $AllPortsOpen}
                }
                Else
                {
                    [PSCustomObject]@{'ComputerName' = $Computer; 'EndPointMapperOpen' = $EPMOpen}
                }
            }
        }
    }
    END
    {

    }
}

And the output will look a little something like this:


You can also query the endpoint mapper with PortQry.exe -n server01 -e 135, but I was curious about how it worked at a deeper level, so I ended up writing something myself. There weren't many examples of how to use that particular native API, so it was pretty tough.

Redirecting HTTP to HTTPS with the IIS URL Rewrite Module

by Ryan 1. February 2014 15:02

I should use TLS

Hi folks. I've been working on my HTTP web server the past couple of weeks. It's pretty good so far. It uses the managed thread pool, does asynchronous request handling, tokenization (like %CURRENTDATE%, etc.), server-side includes, SSL/TLS support, can run multiple websites simultaneously, etc. It seems really fast, but I haven't put it through any serious load tests yet.  My original goal was to use it to completely replace this blog and content management system (IIS and BlogEngine.NET), but now I'm starting to think that won't happen. Mainly because I would need to convert years' worth of content to a new format, and I would want to preserve the URLs so that if anyone on the internet had ever linked to any of my blog posts, they wouldn't all turn to 404s when I made the switch. That's actually more daunting to me than writing a server and a CMS.



So I'm taking a break from that and instead improving what I've already got here. One thing I've always wanted to do but never got around to was redirecting HTTP to HTTPS. We should be using SSL/TLS to encrypt our web traffic whenever possible. Especially when it's this simple:

First, open IIS Manager, and use Web Platform Installer to install the URL Rewrite 2.0 module:

Install URL Rewrite


Now at the server level, open URL Rewrite and add a new blank rule:

Add Blank Rule


Next, you want to match the (.*) regex pattern:

Edit the rule


And finally add the following condition:


Condition


And that's all there is to it!  This is of course assuming that you've got an SSL certificate bound to the website already, and that you're listening on both ports 80 and 443.

Note that you can also do this exact same thing by editing your web.config file by hand:


    
<system.webServer>
   ...
   <rewrite>
      <globalRules>
        <rule name="Redirect to HTTPS" stopProcessing="true" enabled="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" />
        </rule>
      </globalRules>
   </rewrite>
</system.webServer>

And finally, a tip: Don't put resources such as images in your website that are sourced from HTTP, regardless of whether they're from your own server or someone else's. Use relative paths (or at least the https prefix) instead. Web browsers typically complain about, or even refuse to load, "insecure" content on a secure page.
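For example (the server name and paths here are hypothetical):

```html
<!-- Hard-coded http:// source: browsers may warn about or block this on an https page. -->
<img src="http://www.myserver.com/images/logo.png" />

<!-- Better: a relative path inherits whatever scheme the page was loaded over. -->
<img src="/images/logo.png" />

<!-- Or pin the scheme to https explicitly. -->
<img src="https://www.myserver.com/images/logo.png" />
```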

Tired of the NSA Seeing What Model of Plug and Play Mouse You're Using?

by Ryan 14. January 2014 13:40

Not long ago, the story broke that the NSA was capturing internet traffic generated by Windows crash dumps, driver downloads from Windows Updates, Windows Error Reporting, etc.

As per Microsoft's policy, this information, when it contains sensitive or personally identifiable data, is encrypted.

Encryption: All report data that could include personally identifiable information is encrypted (HTTPS) during transmission. The software "parameters" information, which includes such information as the application name and version, module name and version, and exception code, is not encrypted.

While I'm not saying that SSL/TLS poses an impenetrable obstacle for the likes of the NSA, I am saying that Microsoft is not just sending full memory dumps across the internet in clear text every time something crashes on your machine.  But if you were to, for instance, plug in a new Logitech USB mouse, your computer very well could try to download drivers for it from Windows Update automatically, and when that happens, it sends a few details about your PC and the device you just plugged in, in clear text.

Here is where you can read more about that.

So let's say you're an enterprise administrator, and you want to put an end to all this nonsense for all the computers in your organization, such that your computers no longer attempt to contact Microsoft or send data to them when an application crashes or someone installs a new device.  Aside from setting up your own internal corporate Windows Error Reporting server (who does that?), you can disable the behavior via Group Policy. There are a surprising number of policy settings that should be disabled so that you're not leaking data all over the web:

  • The system will be configured to prevent automatic forwarding of error information.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Internet Communication Management -> Internet Communication settings-> “Turn off Windows Error Reporting” to “Enabled”.

  • An Error Report will not be sent when a generic device driver is installed.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Device Installation -> "Do not send a Windows error report when a generic driver is installed on a device" to "Enabled".

  • Additional data requests in response to Error Reporting will be declined.

Configure the policy value for Computer Configuration -> Administrative Templates -> Windows Components -> Windows Error Reporting -> "Do not send additional data" to "Enabled".

  • Errors in handwriting recognition on Tablet PCs will not be reported to Microsoft.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Internet Communication Management -> Internet Communication settings -> “Turn off handwriting recognition error reporting” to “Enabled”.

  • Windows Error Reporting to Microsoft will be disabled.

Configure the policy value for Computer Configuration -> Administrative Templates -> Windows Components -> Windows Error Reporting -> “Disable Windows Error Reporting” to “Enabled”.

  • Windows will be prevented from sending an error report when a device driver requests additional software during installation.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Device Installation -> “Prevent Windows from sending an error report when a device driver requests additional software during installation” to “Enabled”.

  • Microsoft Support Diagnostic Tool (MSDT) interactive communication with Microsoft will be prevented.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Troubleshooting and Diagnostics -> Microsoft Support Diagnostic Tool -> “Microsoft Support Diagnostic Tool: Turn on MSDT interactive communication with Support Provider” to “Disabled”.

  • Access to Windows Online Troubleshooting Service (WOTS) will be prevented.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Troubleshooting and Diagnostics -> Scripted Diagnostics -> “Troubleshooting: Allow users to access online troubleshooting content on Microsoft servers from the Troubleshooting Control Panel (via Windows Online Troubleshooting Service - WOTS)” to “Disabled”.

  • Responsiveness events will be prevented from being aggregated and sent to Microsoft.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Troubleshooting and Diagnostics -> Windows Performance PerfTrack -> “Enable/Disable PerfTrack” to “Disabled”.

  • The Application Compatibility Program Inventory will be prevented from collecting data and sending the information to Microsoft.

Configure the policy value for Computer Configuration -> Administrative Templates -> Windows Components -> Application Compatibility -> “Turn off Program Inventory” to “Enabled”.

  • Device driver searches using Windows Update will be prevented.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Device Installation -> “Specify Search Order for device driver source locations” to “Enabled: Do not search Windows Update”.

  • Device metadata retrieval from the Internet will be prevented.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Device Installation -> “Prevent device metadata retrieval from internet” to “Enabled”.

  • Windows Update will be prevented from searching for point and print drivers.

Configure the policy value for Computer Configuration -> Administrative Templates -> Printers -> “Extend Point and Print connection to search Windows Update” to “Disabled”.

Website Upgrade, Coding, and Dealing with NTFS ACLs on Server Core

by Ryan 9. January 2014 15:20

I apologize in advance - this blog post is going to be all over the place.  I haven't posted in a while, mainly because I've been engrossed in a personal programming project, part of which is a multithreaded web server I wrote over the weekend that I'm kind of proud of. My previous attempt, ShareDiscreetlyWebServer, was single-threaded, because when I wrote it, I had not yet grasped the awesome power of the async and await keywords in C#.  They're très sexy. Right now the only verb I support is GET (because it's all I need for now,) but the new server is about as fast as you could hope a server written in managed code could be.

Secondly, I just upgraded this site to Blogengine.NET v2.9.  The motivation behind it was that today, I got this email from a visitor to the site:

Hi,
I tried to leave you a comment but it didnt work.
Can you please go through steps you took to migrate your blog to Azure as I am interested in doing the same thing.
Did you set it up as a Azure Web Site or use an Azure VM and deployed it that way?
Are you using BlogEngine or some other blog publishing tool.

Wait, my comments weren't working? Damnit. I tried to post a comment myself and sure enough, commenting on this blog was busted. It had been working fine, but it stopped some time in the last few weeks. And so I figured that if I was going to go to the trouble of debugging it, I'd just go ahead and upgrade Blogengine.NET while I was at it.

But first, to answer the guy's question above, my blog migration was simple. I used to host this blog out of my house on a Windows Server running in my home office. I signed up for a Server 2012 Azure virtual machine, RDP'ed to it, installed the IIS role, robocopy'd my entire C:\inetpub directory to the new VM, and that was that.
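The copy step is just mirroring the directory tree to the new machine. Here's a rough sketch of the same idea in Python - the paths are placeholders, and on Windows robocopy is still the better tool since it can carry over NTFS ACLs too:

```python
import shutil
from pathlib import Path

def mirror_site(src: str, dst: str) -> int:
    """Copy an entire web root to a new location, preserving the tree.

    A stand-in for something like `robocopy src dst /E` -- shutil does
    not preserve NTFS ACLs, which robocopy can do with /SEC.
    """
    shutil.copytree(src, dst, dirs_exist_ok=True)
    # Return a simple file count so the copy can be sanity-checked.
    return sum(1 for p in Path(dst).rglob("*") if p.is_file())
```

Nothing fancy; the point is that the whole migration was a file copy plus installing the IIS role on the other end.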

Version 2.9 is a little lackluster so far.  They updated most of the UI to the simplistic, sleek "modern" look that's all the rage these days, especially on tablets and phones.  But in the process, it appears they've regressed to the point where the editor is no longer compatible with Internet Explorer 11, 10, or 9. (Not that it worked perfectly before either.)  It's annoying as hell. I'm writing this post right now in IE with compatibility mode turned on, and still half of the buttons don't work.  It's crippled compared to version 2.8, which I was on this morning.

It's ironic that the developers, who wrote a CMS entirely in .NET in Visual Studio, couldn't be bothered to test it on any version of IE.  Guess I'll wait patiently for version 3.0.  Or maybe I'll write my own CMS after I finish writing the web server to run it on.

But even after the upgrade, and after fixing all the little miscellaneous bugs that the upgrade introduced, my busted comment system still wasn't fixed. So I had to dig deeper. I logged on to the server and fired up Process Monitor while I attempted to post a comment:

w3wp.exe gets an Access Denied error right there, clear as day.  (Thanks again, ProcMon.)

If you open the properties of w3wp.exe, you'll notice that it runs in the security context of an application pool, e.g. "IIS APPPOOL\Default Web Site". So just give that security principal access to that App_Data directory.  Only one problem...

Server Core.

No right-clicking our way out of this one.  Of course we could have done this with cacls.exe or something, but you know I'm all about the Powershell.  So let's do it in PS.

# Read the existing ACL from the directory
$Acl = Get-Acl C:\inetpub\wwwroot\App_Data
# Build an access rule granting the app pool identity full control,
# inherited by child folders (ContainerInherit) and files (ObjectInherit)
$Ace = New-Object System.Security.AccessControl.FileSystemAccessRule("IIS APPPOOL\Default Web Site", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
$Acl.AddAccessRule($Ace)
# Write the modified ACL back to the directory
Set-Acl C:\inetpub\wwwroot\App_Data $Acl

Permissions, and commenting, are back to normal.

Let's Deploy EMET 4.1!

by Ryan 26. December 2013 17:22

Howdy!

Let's talk about EMET.  Enhanced Mitigation Experience Toolkit.  Version 4.1 is the latest update, out just last month. It's free. You can find it here. When you download it, make sure you also download the user guide PDF that comes with it, as it's actually pretty good quality documentation for a free tool.

The thing about EMET is that it is not antivirus. It's not signature-based the way that traditional AV is. EMET is behavior-based.  It monitors the system in real time, watching all running processes for signs of malicious behavior and attempting to stop them.  It also applies a set of system-wide hardening policies that make many types of exploits more difficult or impossible to pull off. The upshot of this approach is that EMET can theoretically thwart 0-days and other malware/exploits that antivirus is oblivious to.  It also allows us to protect legacy applications that may not have been originally written with today's security features in mind.

The world of computer security is all about measure and countermeasure. The attackers come up with a new attack, then the defenders devise a defense against it, then the attackers come up with a way to get around that defense, ad nauseam, forever. But anything you can do to raise the bar for the attackers - to make their jobs harder - should be done.

Here's what the EMET application and system tray icon look like once it's installed:

EMET

From that screenshot, you get an idea of some of the malicious behavior that EMET is trying to guard against.  You can turn on mandatory DEP for all processes, even ones that weren't compiled for it. Data Execution Prevention has been around for a long time, and is basically a mechanism to prevent the execution of code that resides in areas of memory marked as non-executable. (I.e. where only data, not code, should be.)  With DEP on, heaps and stacks will be marked as non-executable and attempts to run code from those areas in memory will fail. Most CPUs these days have the technology baked right into the hardware, so there's no subverting it. (Knock on wood.)

You can turn on mandatory ASLR for all processes on the system, again, even for processes that were not compiled with it.  Address Space Layout Randomization is a technique whereby a process loads modules into "random" memory addresses, whereas in the days before ASLR processes always loaded modules into the same, predictable memory locations. Imagine what an advantage it would be for an attacker to always know exactly where to find modules loaded in any process on any machine.

Then you have your heapspray mitigation. A "heap spray" is an attack technique where the attacker places copies of malicious code in as many locations within the heap as possible, increasing the odds of success that it will be executed once the instruction pointer is manipulated. This is a technique that attackers came up with to aid them against ASLR, since they could no longer rely on predictable memory addresses. By allocating some commonly-used memory pages within processes ahead of time, we can keep the heap sprayer's odds of success low.
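Back-of-the-envelope, a heap spray is just playing the odds. Here's a toy model in Python - the numbers are made up for illustration and real exploits also use NOP sleds to widen each landing zone:

```python
def hit_probability(payload_copies: int, payload_size: int, heap_size: int) -> float:
    """Chance that a single wild jump into the heap lands inside a payload copy.

    Toy model: assumes the jump target is uniformly random over the heap
    and that payload copies don't overlap.
    """
    return min(1.0, (payload_copies * payload_size) / heap_size)

# One 4 KB payload in a 256 MB heap: essentially a lottery ticket.
single = hit_probability(1, 4096, 256 * 1024 * 1024)

# Spray 10,000 copies of it and the odds become non-trivial.
sprayed = hit_probability(10_000, 4096, 256 * 1024 * 1024)
```

Which is exactly why pre-allocating the pages that sprayers love to target keeps their odds of success low.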

Those are only a few of the mitigations that EMET is capable of.  Read that user guide that I mentioned before for much more info.

Oh, and one last thing: Is it possible that EMET could cause certain applications to malfunction? Absolutely! So always test thoroughly before deploying to production. And just like with enterprise-grade antivirus software, EMET also requires a good bit of configuring until you come up with a good policy that suits your environment and gives you the best mix of protection versus application compatibility.

Let's get into how EMET can be deployed across an enterprise and configured via Group Policy. Once you've installed it on one computer, you will notice a Deployment folder in with the program files. In the Deployment folder you will find the Group Policy template files you need to configure EMET across your enterprise via GPO.  First, create your Group Policy Central Store if you haven't already:

Creating Central Store

Copy the EMET.ADMX file into the PolicyDefinitions folder, and EMET.ADML file into the EN-US subfolder.  If all goes well, you will notice a new Administrative Template now when you go to create a new GPO:

EMET GPO

Now you may notice that while I do have the EMET administrative template... all my other admin templates have disappeared! That's because I forgot to copy all of the other admin templates from one of the domain controllers into Sysvol before I took the screen shot. So don't forget to copy over all the other *.admx and *.adml files from one of your DCs before continuing.

Now you can control how EMET is configured in a centralized, consistent, enforceable way on all the computers in your organization.

The next part is deploying the software. The EMET user guide describes using System Center Configuration Manager to deploy the software, and while I agree that SCCM is boss when it comes to deploying software, I don't have it installed here in my lab, so I'm going to just do it via GPO as well.  In fact, I'll do it in the same EMET GPO that defines the application settings too.

Copy the installer to a network share that can be accessed by all the domain members that you intend to deploy the software to:

Copy the MSI

Then create a new GPO with a software package to deploy that MSI from the network location. Make sure the software is assigned to the computer, not the user.  And lastly, you'll likely rip less of your hair out if you turn off asynchronous policy processing like so:

Computer Settings
 + Administrative Templates
    + System
       + Logon
          + Always wait for the network at computer startup and logon: Enabled

And that's software deployment across an entire organization. That simple. Luckily I didn't even have to apply a transform to that MSI, which is good, because that is something I didn't feel like doing this evening.

Until next time, stay safe, and if you still want to hear more about EMET, watch this cool talk from Neil Sikka about it from Defcon 21!

Finding Hidden Processes with Volatility and Scanning with Sysinternals Sigcheck

by Ryan 19. December 2013 16:15

Happy Memory-Forensics-Thursday!

I get called in to go malware hunting every once in a while.  It's usually after an automatic vulnerability scanner has found something unusual about a particular computer on the network and thrown up a red flag.  Once a machine is suspected of being infected, someone needs to go in and validate whether what the vulnerability scanner found is truly a compromise or a false positive, determine the nature of the infection, and clean it if possible.  I know that the "safest" reaction to the slightest whiff of malware is to immediately disconnect the machine from the network, format it and reinstall the operating system, but in a busy production environment, that extreme approach isn't always feasible or necessary.

We all know that no antivirus product can catch everything, nor is any vulnerability scanner perfect.  But a human with a bit of skill and the right tools can quickly sniff out things that AV has missed.  Malware hunting and forensic analysis really puts one's knowledge of deep Windows internals to the test, possibly more so than anything else, so I find it extremely fun and rewarding.

So today we're going to talk about two tools that will aid you in your journey.  Volatility and Sigcheck.

Volatility is a wondrous framework for analyzing Windows memory dumps.  You can find it here. It's free and open-source.  It's written in Python, but there is also a compiled exe version if you don't have Python installed.  Volatility is a framework that can run any number of plugins, and these plugins perform data analyses on memory dumps, focused on pointing out specific indicators of compromise, such as API hooks, hidden processes, hooked driver IRP functions, interrupt descriptor table hooks, and so much more.  It's not magic though, and it doesn't do much that you could not also do manually (and with much more painstaking effort) with WinDbg, but it does make it a hell of a lot faster and easier.  (We have to wait until 2014 for Win8/8.1 and Server 2012/2012R2 support.)

But first, before you can use Volatility, you must have a memory dump.  (There is a technology preview branch of Volatility that can read directly from the PhysicalMemory device object.)  There are many tools that can dump memory, such as WinPmem, which you can also find on the Volatility downloads page that I linked to earlier.  It can dump in both RAW and DMP (Windows crash dump) formats.  Make sure that you download a version with signed drivers, as WinPmem loads a driver to do its business, and modern versions of Windows really don't like you trying to install unsigned drivers.  You can also use LiveKd to dump memory using the command .dump -f C:\memory.dmp.

Since Volatility is such a huge and versatile tool, today I'm only going to talk about one little piece of it - finding "hidden" processes.

When a process is created, the Windows kernel assigns it an _EPROCESS data structure.  Each _EPROCESS structure in turn contains a _LIST_ENTRY structure.  That _LIST_ENTRY structure contains a forward link and a backward link, each pointing to the next _EPROCESS structure on either side of it, creating a doubly-linked list that makes a full circle.  So if I wanted to know all of the processes running on the system, I could start with any process and walk through the _EPROCESS list until I got back to where I started.  When I use Task Manager, tasklist.exe or Process Explorer, they all use API functions that in turn rely on this fundamental mechanism.  Behold my awesome Paint.NET skills:

EPROCESS Good

So if we wanted to hide a process from view, all we have to do is overwrite the backward link of the process in front of us and the forward link of the process behind us to point around us.  That will effectively "unlink" our process of doom, causing it to be hidden:

EPROCESS BAD
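The pointer surgery in that diagram is easy to model with a toy doubly-linked list in Python. This sketch is purely illustrative - nothing kernel-specific, just the same flink/blink rewiring:

```python
class Proc:
    """Toy stand-in for an _EPROCESS with its _LIST_ENTRY links."""
    def __init__(self, name):
        self.name = name
        self.flink = self  # forward link
        self.blink = self  # backward link

def insert_after(anchor, proc):
    """Splice a new node into the circular list after 'anchor'."""
    proc.flink, proc.blink = anchor.flink, anchor
    anchor.flink.blink = proc
    anchor.flink = proc

def walk(head):
    """Enumerate the list the way the process-listing APIs effectively do."""
    names, cur = [], head.flink
    while cur is not head:
        names.append(cur.name)
        cur = cur.flink
    return names

def unlink(proc):
    """DKOM-style hiding: point the neighbors' links around us."""
    proc.blink.flink = proc.flink
    proc.flink.blink = proc.blink

head = Proc("head")
for name in ["smss.exe", "csrss.exe", "evil.exe"]:
    insert_after(head.blink, Proc(name))  # append at the tail

evil = head.flink.flink.flink  # the third node: evil.exe
unlink(evil)
# walk(head) no longer shows evil.exe, yet the object still exists in memory.
```

The hidden process keeps running just fine - its threads are scheduled independently of this list - which is precisely what makes the trick so nasty.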

This is what we call DKOM - Direct Kernel Object Manipulation.  A lot of rootkits and trojans use this technique.  And even though modern versions of Windows do not allow user mode access to the \Device\PhysicalMemory object, which is where the _EPROCESS objects will always be because they live in nonpaged pool, we don't need it, nor do we need to load a kernel mode driver, because we can pull off a DKOM attack entirely from user mode by using the ZwSystemDebugControl API.  But we can root out the rootkits with Volatility, using the following command:

C:\> volatility.exe --profile=Win7SP0x86 -f Memory.raw psscan

That command shows a list of running processes, but it does it not by walking the _EPROCESS linked list, but by scanning for pool tags and constrained data items (CDIs) that correspond to processes.  The idea is that you compare that list with a list of processes that you got via traditional means, and processes that show up as alive and well on Volatility's psscan list but not Task Manager's list are hidden processes probably up to no good.
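The comparison itself boils down to a set difference. A sketch in Python, assuming you've already parsed the PIDs out of both outputs (the parsing is left out, and the PIDs below are hypothetical):

```python
def find_hidden(psscan_pids, tasklist_pids):
    """PIDs the pool-tag scan found but the live API walk did not.

    Anything in this set was reachable in physical memory yet missing
    from the _EPROCESS list walk -- a strong hint of DKOM hiding.
    """
    return sorted(set(psscan_pids) - set(tasklist_pids))

# Hypothetical run: psscan saw PID 1337 that tasklist never reported.
hidden = find_hidden({4, 348, 596, 1337}, {4, 348, 596})
```

An empty result doesn't prove the box is clean, of course - it just means this particular trick wasn't used.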

There are other methods of finding hidden processes.  For instance, scanning for DISPATCHER_HEADER objects instead of looking at pool tags.  Even easier, a handle to the hidden process should still exist in the handle table of csrss.exe (Client/Server Runtime Subsystem) even after it's been unlinked from the _EPROCESS list, so don't forget to look there.  (There's a csrss_pslist plugin for Volatility as well.)  Also, use the thrdscan plugin to check for threads that belong to processes that don't appear to exist, which would be another sign of tomfoolery.

Alright, so now you've located an executable file that you suspect is malware, but you're not sure.  Scan that sucker with Sigcheck!  Mark Russinovich recently added VirusTotal integration into Sigcheck, with the ability to automatically upload unsigned binaries and have them scanned by 40+ antivirus engines and give you back reports on whether the file appears to be malicious!  Sigcheck can automatically scan through an entire directory structure, just looking for suspicious binaries, uploading them to VirusTotal, and showing you the results.

Remember that you must accept VirusTotal's terms and conditions before using the service.

Uploading suspicious files to VirusTotal is practically a civic responsibility, as the more malicious signatures that VirusTotal has on file, the more effective the antivirus service is for the whole world.

About Me

Name: Ryan Ries
Location: Texas, USA
Occupation: Systems Engineer 

I am a Windows engineer and Microsoft advocate, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.


Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST


MCITP: Enterprise Administrator

VCP5-DCV

Profile for Ryan Ries at Server Fault, Q&A for system administrators

LOPSA

GitHub: github.com/ryanries

 

I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.