Be the Master of the LastLogonTimestamp Attribute with S4U2Self

I've written a little bit about the LastLogonTimestamp/LastLogonDate attribute here, and of course there is AskDS's well-known article on the subject here, but today I'm going to give you a handy little tip that I don't think I've mentioned before.

If you're an Active Directory administrator, chances are you are, or have been, interested in knowing whether a given account is "stale," meaning that the account's owner has not logged on to the domain in some time.  (Keep in mind that an account can be either a user or a computer as far as Active Directory is concerned.)  You, like many sysadmins, might have some script or automated process that checks for stale accounts using a command-line approach, such as:

dsquery user -inactive 10

or Powershell's extremely flexible:

Get-ADUser -Filter * -Properties LastLogonDate |
    ? { $_.Enabled -AND $_.LastLogonDate -LT (Get-Date).AddDays(-90) }

And then you take action on those inactive accounts, such as moving them to an "Inactive Users" OU, or disabling their accounts, or sending a reminder email to the account holder reminding them that they have an account in this domain, etc.

It might be handy for you to "artificially" update the lastLogonTimestamp of another user, though.  Maybe you know that a user is on vacation and you don't want their account to get trashed by the "garbage collector" for being inactive.  According to the documentation, lastLogonTimestamp is only editable by the system, so forget about directly modifying the attribute the way that you would other LDAP attributes.  And of course "LastLogonDate" is not a real attribute at all - merely a calculated attribute that Powershell gives you to be helpful, by converting lastLogonTimestamp into a friendly .NET DateTime object.
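For illustration, here's roughly what happens if you try the direct write anyway - a sketch with a hypothetical user name; the domain controller rejects the modification because the attribute is flagged systemOnly in the schema:

```powershell
# lastLogonTimestamp is a systemOnly attribute, so this LDAP write is refused
# by the DC and Set-ADUser throws an error ("vacationing.user" is hypothetical):
Set-ADUser -Identity 'vacationing.user' -Replace @{ lastLogonTimestamp = (Get-Date).ToFileTime() }
```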

The S4U2Self (Service for User to Self) Kerberos extension can help us here.

Just right-click on any object, such as an OU in Active Directory or a folder in a file share, go to its Properties, then the Security tab.  Click the Advanced button.  Now go to the Effective Permissions tab.  Click the Select... button, and choose the user whose lastLogonTimestamp you want to update.  We are going to calculate the effective permissions of this inactive user:

By doing this, you are invoking the S4U2Self Kerberos extension, whereby the system will "go through the motions of Kerberos authentication and obtain a logon for the client, but without providing the client's credentials. Thus, you're not authenticating the client in this case, only making the rounds to collect the group security identifiers (SIDs) for the client."[1]

And just like that, you have updated the "Last Logon Time" on another user's behalf, without that user having to actually log on themselves.
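If you'd rather script it than click through the GUI, the same S4U2Self logon can be triggered from Powershell: constructing a .NET WindowsIdentity from a UPN alone, with no credentials, performs an S4U logon under the hood.  A minimal sketch, run from a domain-joined machine; the UPN is hypothetical:

```powershell
# Constructing a WindowsIdentity from just a UPN (no password) triggers an
# S4U2Self logon against a domain controller on the user's behalf.
$id = New-Object System.Security.Principal.WindowsIdentity('vacationing.user@corp.example.com')

# As a side effect, you also get the user's group SIDs, translated to names here:
$id.Groups | ForEach-Object { $_.Translate([System.Security.Principal.NTAccount]) }
```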

Disable RC4-HMAC (And Others) in Active Directory

My long-distance pal Mathias Jessen pointed out this article today on Twitter, in which the article's author attempts to make us all shudder in fright at the idea that Microsoft Active Directory has a terrifying security vulnerability that will cause the world's corporate infrastructure to crumble and shatter humanity's socio-political status quo as script-kiddies take over the now destabilized Earth.

OK, it's not all that bad...

Of the several encryption types that AD supports for Kerberos authentication, RC4-HMAC is among the oldest and the weakest.  The reason the algorithm is still supported?  You guessed it... backwards compatibility.  The problem is that in this (perhaps ill-conceived, but hindsight's 20/20) implementation of RC4-HMAC, as outlined in RFC 4757, the encryption key that is used is the user's NT/MD4 hash itself!  What this means is that all I need is the NT hash of another user, and by forcing an AD domain controller to negotiate down to RC4-HMAC, I can be granted a Kerberos ticket as that other user.  (Getting another user's NT hash means I've probably already owned some random domain-joined workstation, for instance.)

As you probably already know, an NT hash is essentially password-equivalent and should be treated with the same level of sensitivity as the password itself.  And if you didn't know that, then you should read my earlier blog post where I talk a lot more about this stuff.

This was a deliberate design decision - that is, Microsoft is not going to just patch this away.  The reason they chose to do it this way was to ease the transition from NTLM to Kerberos back around the release of Windows 2000.  Newer versions of Windows such as Vista, 2008, 2008 R2, etc., use newer, better algorithms such as AES256_HMAC_SHA1 that do not use an NT hash.  Newer versions of Windows on a domain will automatically use these newer encryption types by default, but the older types such as the dreaded RC4-HMAC are still supported and can be used by down-level clients... or malicious users pretending to be down-level domain members.

As an administrator, you're free to turn the encryption type off entirely if you do not need the backwards compatibility.  Which you probably don't unless you're still rockin' some NT 4.0 servers or some other legacy application from the '90s.

(Which probably means most companies...)

Edit the Default Domain Controllers (or equivalent) Group Policy and look for the setting:

Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > Security Options > Network Security: Configure encryption types allowed for Kerberos.

This setting corresponds to the msDS-SupportedEncryptionTypes LDAP attribute on the domain controller computer objects in AD.  Enable the policy setting and uncheck the first three encryption types (DES_CBC_CRC, DES_CBC_MD5, and RC4_HMAC_MD5), leaving only the AES types checked.
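To see what your DCs currently advertise, you can read the attribute directly.  A hedged sketch - the bit-flag values are the documented ones, but the search base is a hypothetical placeholder you'd replace with your own domain:

```powershell
# msDS-SupportedEncryptionTypes is a bit field:
#   0x01 DES-CBC-CRC    0x02 DES-CBC-MD5    0x04 RC4-HMAC
#   0x08 AES128-CTS-HMAC-SHA1-96            0x10 AES256-CTS-HMAC-SHA1-96
$encNames = @{ 1 = 'DES-CBC-CRC'; 2 = 'DES-CBC-MD5'; 4 = 'RC4-HMAC';
               8 = 'AES128-CTS-HMAC-SHA1-96'; 16 = 'AES256-CTS-HMAC-SHA1-96' }

Get-ADComputer -Filter * -SearchBase 'OU=Domain Controllers,DC=corp,DC=example' `
    -Properties msDS-SupportedEncryptionTypes |
    ForEach-Object {
        $flags = $_.'msDS-SupportedEncryptionTypes'
        $types = $encNames.Keys | Where-Object { $flags -band $_ } |
                 ForEach-Object { $encNames[$_] }
        '{0}: {1}' -f $_.Name, ($types -join ', ')
    }
```

A DC that supports only the AES types would show a decimal value of 24 (0x18) in the raw attribute.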

And of course, test in a lab first to ensure that all your apps and equipment that use AD authentication can deal with the new setting before applying it to production.

Writing and Distributing a Powershell Module via GPO

I recently wrote a bunch of Powershell Cmdlets that were related to each other, so I decided to package them all up together as a Powershell module.  These particular Cmdlets were tightly integrated with Active Directory, so it made sense that they would often be run on a domain controller, or against a domain controller, usually by a domain administrator.

First, to create a Powershell script module, you create a folder, and then place your *.psm1 and *.psd1 files in that folder; the files must have exactly the same base name as the folder.  (You can also compile your Powershell module into a DLL, but let's just talk about script modules today.)  So for instance, if you name your module "CloudModule," you would create a directory such as:

C:\Program Files\PSModules\CloudModule\

And then you'd place your files of the same name in that folder:

C:\Program Files\PSModules\CloudModule\CloudModule.psm1
C:\Program Files\PSModules\CloudModule\CloudModule.psd1

I think that a subdirectory of Program Files works well, because the Program Files directory is protected from modification by non-administrators and low-integrity processes.

The psd1 file is the manifest for the PS module. You can easily create a new manifest template with the New-ModuleManifest cmdlet, and customize the template to suit your needs. The module's manifest is simply the "metadata" for the Powershell module, such as what version of Powershell it requires, other prerequisite modules it requires, the module's author, etc. The cool thing is that with current versions of Powershell, just by saying:

RequiredModules = @('ActiveDirectory')

in your manifest file, Powershell will automatically load the ActiveDirectory module for you the first time any cmdlet from your module is run. So you really don't need to put an Import-Module or a #Requires -Module anywhere.
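Generating that manifest takes one command.  A sketch using the hypothetical "CloudModule" name and path from above; the version and author values are placeholders:

```powershell
# Generate a starter manifest for the hypothetical "CloudModule" module:
New-ModuleManifest -Path 'C:\Program Files\PSModules\CloudModule\CloudModule.psd1' `
    -RootModule 'CloudModule.psm1' `
    -ModuleVersion '1.0.0' `
    -Author 'Your Name' `
    -RequiredModules @('ActiveDirectory')
```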

The psm1 file is the collection of Powershell Advanced Functions (cmdlets) that make up your module. I would recommend naming all of your cmdlets with a common theme, using supported verbs (Get-Verb) and a common prefix. You know how the Active Directory module does Get-ADUser, Remove-ADObject, Set-ADGroup, etc.? You should do that too. For example, if you work for McDonald's, do Show-McSalary, Deny-McWorkersComp, Stop-McStrike, etc.

Now that you've got your module directory and files laid out, you need to add that path to your PSModulePath environment variable so that Powershell knows where to look for it every time it starts. Do not try to cheat and put your custom module in the same System32 directory where Microsoft puts their standard PS modules.
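One way to do that persistently is to append your module root to the machine-wide PSModulePath - a sketch, run from an elevated prompt, using the hypothetical path from earlier:

```powershell
# Append the custom module root to the machine-wide PSModulePath (run elevated).
$modRoot = 'C:\Program Files\PSModules'   # hypothetical path from above
$current = [Environment]::GetEnvironmentVariable('PSModulePath', 'Machine')
if (($current -split ';') -notcontains $modRoot) {
    [Environment]::SetEnvironmentVariable('PSModulePath', "$current;$modRoot", 'Machine')
}
```

New Powershell sessions started after this change will search that directory for modules automatically.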

I decided that I wanted all my domain controllers to automatically load this custom module.  And I don't want to have to install and maintain this thing on 50 separate machines.  So for a centrally-managed solution, let's utilize Group Policy and SYSVOL.

First, let's put something like this in the Default Domain Controllers (or equivalent) GPO:


And secondly, let's put our Powershell module in:


Of course, since this directory is replicated amongst all domain controllers, we only need to do this on one domain controller for the files to become available on all domain controllers.  Likewise, any time you update the module, you only need to update it on one DC, and all DCs will get the update.

Lastly, since this Powershell module is only for Domain Administrators, and SYSVOL by design is a very public place, let's protect our module's directory from prying eyes:

Be careful to modify the permissions of only that one specific module directory... you don't want to alter the permissions of anything else in SYSVOL.
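Locking the directory down can be scripted as well.  A sketch that breaks ACL inheritance and grants access only to Domain Admins and SYSTEM - the UNC path and domain name here are hypothetical placeholders:

```powershell
# Restrict the module directory to Domain Admins and SYSTEM only.
# The UNC path and "CORP" domain name are hypothetical - substitute your own.
$dir = '\\corp.example.com\SYSVOL\corp.example.com\scripts\PSModules\CloudModule'
$acl = Get-Acl $dir
$acl.SetAccessRuleProtection($true, $false)  # stop inheriting, discard inherited ACEs
foreach ($who in 'CORP\Domain Admins', 'NT AUTHORITY\SYSTEM') {
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
        $who, 'FullControl', 'ContainerInherit, ObjectInherit', 'None', 'Allow')
    $acl.AddAccessRule($rule)
}
Set-Acl -Path $dir -AclObject $acl
```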


Happy Friday, nerds.

This is just going to be a quick post wherein I paste in a Powershell script I wrote about a year ago that parses a Windows Server DNS debug log file and converts it from flat, boring, hard-to-read text into lovely Powershell object output.  This makes sorting, analyzing and data mining your DNS log files with Powershell a breeze.

It's not my best work, but I wanted to get it posted here because I'd forgotten about it until today, when I wanted to parse a DNS log file, searched for it here, and realized that I never posted it.

#Requires -Version 2
Function Parse-DNSDebugLogFile
{
    <#
    .SYNOPSIS
        This function reads a Windows Server DNS debug log file, and turns it from
        a big flat file of text, into a useful collection of objects, with each
        object in the collection representing one send or receive packet processed
        by the DNS server.
    .DESCRIPTION
        This function reads a Windows Server DNS debug log file, and turns it from
        a big flat file of text, into a useful collection of objects, with each
        object in the collection representing one send or receive packet processed
        by the DNS server. This function takes two parameters - DNSLog and ErrorLog.
        A default is used for ErrorLog if it is not supplied.
    .NOTES
        Written by Ryan Ries, June 2013.
    #>
    Param([Parameter(Mandatory=$True, Position=1)]
            [ValidateScript({Test-Path $_ -PathType Leaf})]
            [String]$DNSLog,
          [Parameter(Mandatory=$False, Position=2)]
            [String]$ErrorLog = "$Env:USERPROFILE\Desktop\Parse-DNSDebugLogFile.Errors.log")

    # This is the collection of DNS Query objects that this function will eventually return.
    $DNSQueriesCollection = @()
    # Try to read in the DNS debug log file. If an error occurs, log it and return null.
    Try
    {
        [System.Array]$DNSLogFile = Get-Content $DNSLog -ReadCount 0 -ErrorAction Stop
        If($DNSLogFile.Count -LT 30)
        {
            Throw "File was empty."
        }
    }
    Catch
    {
        "$(Get-Date) - $($_.Exception.Message)" | Out-File $ErrorLog -Append
        Write-Verbose $_.Exception.Message
        Return $Null
    }
    # Cut off the header of the DNS debug log file. It's about 30 lines long and we don't want it.
    $DNSLogFile = $DNSLogFile[30..($DNSLogFile.Count - 1)]
    # Create a temporary buffer for storing only lines that look valid.
    $Buffer = @()
    # Loop through the log file. If a line is not blank and it contains what looks like a
    # numerical date, then we can be reasonably sure that it's a valid log file line.
    # If the line is valid, then add it to the buffer.
    Foreach($Line In $DNSLogFile)
    {
        If($Line.Length -GT 0 -AND $Line -Match "\d+/\d+/")
        {
            $Buffer += $Line
        }
    }
    # Dump the buffer back into the DNSLogFile variable, and clear the buffer.
    $DNSLogFile = $Buffer
    $Buffer     = $Null
    # Now we parse text and use it to assemble a Query object. If all goes well,
    # then we add the Query object to the Query object collection. This is nasty-looking
    # stuff that we have to do to turn a flat text file into a beautiful collection of
    # objects with members.
    Foreach($Line In $DNSLogFile)
    {
        If($Line.Contains(" PACKET "))
        {
            $Query = New-Object System.Object
            $Query | Add-Member -Type NoteProperty -Name Date      -Value $Line.Split(' ')[0]
            $Query | Add-Member -Type NoteProperty -Name Time      -Value $Line.Split(' ')[1]
            $Query | Add-Member -Type NoteProperty -Name AMPM      -Value $Line.Split(' ')[2]
            $Query | Add-Member -Type NoteProperty -Name ThreadID  -Value $Line.Split(' ')[3]
            $Query | Add-Member -Type NoteProperty -Name PacketID  -Value $Line.Split(' ')[6]
            $Query | Add-Member -Type NoteProperty -Name Protocol  -Value $Line.Split(' ')[7]
            $Query | Add-Member -Type NoteProperty -Name Direction -Value $Line.Split(' ')[8]
            $Query | Add-Member -Type NoteProperty -Name RemoteIP  -Value $Line.Split(' ')[9]
            $BracketLeft  = $Line.Split('[')[0]
            $BracketRight = $Line.Split('[')[1]
            $Query | Add-Member -Type NoteProperty -Name XID       -Value $BracketLeft.Substring($BracketLeft.Length - 9, 4)
            If($BracketLeft.Substring($BracketLeft.Length - 4, 1) -EQ "R")
            {
                $Query | Add-Member -Type NoteProperty -Name Response -Value $True
            }
            Else
            {
                $Query | Add-Member -Type NoteProperty -Name Response -Value $False
            }
            $Query | Add-Member -Type NoteProperty -Name OpCode -Value $BracketLeft.Substring($BracketLeft.Length - 2, 1)
            $Query | Add-Member -Type NoteProperty -Name ResponseCode -Value $BracketRight.SubString(10, 8).Trim()
            $Query | Add-Member -Type NoteProperty -Name QuestionType -Value $Line.Split(']')[1].Substring(1,5).Trim()
            $Query | Add-Member -Type NoteProperty -Name QuestionName -Value $Line.Split(' ')[-1]
            # Just doing some more sanity checks here to make sure we parsed all that text correctly.
            # If something looks wrong, we'll just discard this whole line and continue on
            # with the next line in the log file.
            If($Query.QuestionName.Length -LT 1)
            {
                "$(Get-Date) - QuestionName Parse Error. The line was: $Line" | Out-File $ErrorLog -Append
                Continue
            }
            If($Query.XID.Length -NE 4)
            {
                "$(Get-Date) - XID Parse Error. The line was: $Line" | Out-File $ErrorLog -Append
                Continue
            }
            If($Query.Protocol.ToUpper() -NE "UDP" -AND $Query.Protocol.ToUpper() -NE "TCP")
            {
                "$(Get-Date) - Protocol Parse Error. The line was: $Line" | Out-File $ErrorLog -Append
                Continue
            }
            If($Query.Direction.ToUpper() -NE "SND" -AND $Query.Direction.ToUpper() -NE "RCV")
            {
                "$(Get-Date) - Direction Parse Error. The line was: $Line" | Out-File $ErrorLog -Append
                Continue
            }
            # Let's change the query format back from RFC 1035 section 4.1.2 style, to the
            # dot notation that we're used to: (8)computer(6)domain(3)com(0) = computer.domain.com.
            $Query.QuestionName = $($Query.QuestionName -Replace "\(\d*\)", ".").TrimStart('.')
            # Finally let's add one more property to the object; it might come in handy:
            $Query | Add-Member -Type NoteProperty -Name HostName -Value $Query.QuestionName.Split('.')[0].ToLower()
            # The line, which we've now converted to a query object, looks good, so let's
            # add it to the query objects collection.
            $DNSQueriesCollection += $Query
        }
    }
    $DNSLogFile = $Null
    # Clear the error log if it's too big.
    If(Test-Path $ErrorLog)
    {
        If((Get-ChildItem $ErrorLog).Length -GT 1MB)
        {
            Clear-Content $ErrorLog
        }
    }
    Return $DNSQueriesCollection
} # END FUNCTION Parse-DNSDebugLogFile
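Once the function is loaded (by dot-sourcing the script file, whose name and the log path below are hypothetical), usage looks something like this:

```powershell
# Load the function, then parse a debug log (paths are hypothetical):
. .\Parse-DNSDebugLogFile.ps1
$queries = Parse-DNSDebugLogFile -DNSLog 'C:\DNSLogs\dns.log'

# Because each packet is now an object, mining is easy - for example, the ten
# remote IPs we received the most queries from:
$queries | Where-Object { $_.Direction -eq 'Rcv' } |
    Group-Object RemoteIP |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name
```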

And the output looks like this:

Date         : 6/17/2013
Time         : 11:51:41
AMPM         : AM
ThreadID     : 19BC
PacketID     : 0000000010146F10
Protocol     : UDP
Direction    : Snd
RemoteIP     :
XID          : e230
Response     : True
OpCode       : Q
ResponseCode : NOERROR
QuestionType : A
QuestionName :
HostName     : host01

Date         : 6/17/2013
Time         : 11:51:41
AMPM         : AM
ThreadID     : 19BC
PacketID     : 000000000811A6C0
Protocol     : UDP
Direction    : Snd
RemoteIP     :
XID          : 36df
Response     : True
OpCode       : Q
ResponseCode : NXDOMAIN
QuestionType : SRV
QuestionName :
HostName     : _kerberos-master

TechEd 2014

I had been wanting to attend TechEd for several years, and this year I finally attended my first one, in Houston, Texas. Overall it was a great experience, although I was a little overwhelmed by having so much that I wanted to see and do and not enough time to do it all. I was also disappointed that there was an emergency back home while I was at TechEd, so I had to cut the trip short and miss an entire day of the conference. But it's not the end of the world. I hope to be able to go back for TechEd 2015 next year.

Here are a few of my thoughts.

TechEd 2014 George R Brown Convention Center

I drove to Houston from Dallas, and arrived here at the George R Brown convention center on about 1 hour of sleep. Luckily I signed up as soon as registration opened, which meant that I was able to book a hotel room in the Hilton right across the street. All I had to do to get back and forth from my room to the convention center was cross the catwalk.

Monday's Keynote

It was extremely crowded. This conference was not for the agoraphobic. In fact, the conference was so congested, and the convention center so poorly designed to handle such a large crowd, that it was difficult to even move around at times. Many of the sessions (especially the Russinovich sessions) filled up 30 minutes before they were scheduled to begin. This was also the first year Microsoft consolidated the MMS and TechEd conferences, which did not help.

The conference center was a gigantic cuboid structure with no thought given to traffic flow, restroom layout, etc.

Endless hallway of TechEd conferences

That being said, the content was still great. Tons of cutting-edge System Center, Powershell Desired State Configuration, and Azure stuff, and lots of security talks (AKA "Trustworthy Computing"). Lots and lots of security talks. I almost lost count of how many times I got to see the presenters try to scare us by using Mimikatz to dump lsass.exe private memory, or to export "non-exportable" private keys. And all the rockstars were there too; the guys who are essentially celebrities to me because of the awesome stuff they've done while at Microsoft.

Some of the other great content I saw:

  • Virtual Machine Manager Network Virtualization, which neatly solves the overlapping IP address situation in a multi-tenant environment.
  • Freaking Azure Pack which allows you to essentially deploy your very own private Azure inside your own organization.
  • Aaron Margosis trying to convince us to go up to Mark Russinovich and say to him, "Mr. Cogswell, I'm a huge fan of your work..."
  • Jeffrey Snover muttering "god damnit" to himself over and over again under his breath while trying to show us how to change the color of error text in our Powershell console.
  • Getting to pitch my idea to one of the Windows client security guys about LogonUI.exe performing a hash check on utilman.exe before launching it from the Accessibility button.
  • Etc.

Just another shot of the huge crowd
Just another shot of the endless sea of people.

High-tech signs
And an example of the high-tech digital signage.

Me and Ed Wilson
Me and Ed Wilson, AKA Dr. Scripto

Me and Mathias, whom I met on ServerFault, after a night of exploring the Tech Expo, drinking beer, walking, talking about the pathetic state of healthcare in the United States, and eating at the Cheesecake Factory.

Speaking of tech expo, this year it was all about SSDs, flash memory, ridiculous amounts of RAM for all your SQL 2014 In-Memory OLTP processing, and 56 gigabit Infiniband RoCE setups:

I hope to see you again next year, TechEd...