Mind Your Powershell Efficiency Optimizations

A lazy Sunday morning post!

As most would agree, Powershell is the most powerful Windows administration tool ever seen. In my opinion, you cannot continue to be a Windows admin without learning it. However, Powershell is not breaking any speed records. In fact it can be downright slow. (After all, it's called Power-shell, not Speed-shell.)

So, as developers or sysadmins or devopsapotami or anyone else who writes Powershell, I implore you: don't further sully Powershell's reputation for being slow. Take the time to benchmark and optimize your scripts.

Let's look at an example.

$Numbers = @()
Measure-Command { (0 .. 9999) | ForEach-Object { $Numbers += Get-Random } }

I'm simply creating an empty array (with no pre-allocated size) and proceeding to fill it with 10,000 random numbers.  Notice the use of Measure-Command { }, which is what you want to use for seeing exactly how long things take to execute in Powershell.  The above procedure took 21.3 seconds.

So let's swap in a strongly-typed array and do the exact same thing:

[Int[]]$Numbers = New-Object Int[] 10000
Measure-Command { (0 .. 9999) | ForEach-Object { $Numbers[$_] = Get-Random } }

We can produce the exact same result, that is, a 10,000-element array full of random integers, in 0.47 seconds.

That's an approximate 45x speed improvement.

We called the Get-Random Cmdlet 10,000 times in both examples, so that is probably not our bottleneck. Using [Int[]]$Numbers = @() doesn't help either, so I don't think it's the boxing and unboxing overhead that you'd see with an ArrayList. Instead, it seems most likely that the dramatic performance difference comes from using an array of fixed size: += on a standard array allocates a brand new, slightly larger array and copies every existing element into it, and the first example does that 10,000 times.
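
If you want to test that theory yourself, here's a third variation to benchmark. This is just a sketch of my own, not something I timed for this post; a System.Collections.Generic.List[int] grows in amortized chunks instead of being rebuilt on every addition, so it should land much closer to the pre-sized array than to +=:

$List = New-Object 'System.Collections.Generic.List[int]'
Measure-Command { (0 .. 9999) | ForEach-Object { $List.Add((Get-Random)) } }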

Once you've got your script working, then you should think about optimizing it. Use Measure-Command to see how long specific pieces of your script take. Powershell, and the .NET Framework underneath it, give you a ton of flexibility in how you write your code. There is almost never just one way to accomplish something. However, with that flexibility comes the responsibility of finding the best possible way.

Server Core Page File Management with Powershell

A quickie for tonight.

(Heh heh...)

Microsoft is really pushing to make this command line-only, Server Core and Powershell thing happen. No more GUI. Everything needs to get done on the command line. Wooo command line. Love it.

So... how the heck do you set the size and location of the paging file(s) without a GUI? Could you do it without Googling (er, Binging) it? Right now?

You will be able to if you remember this:

$PageFileSizeMB = [Math]::Truncate(((Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory + 200MB) / 1MB)
Set-CimInstance -Query "SELECT * FROM Win32_ComputerSystem" -Property @{AutomaticManagedPagefile="False"}
Set-CimInstance -Query "SELECT * FROM Win32_PageFileSetting" -Property @{InitialSize=$PageFileSizeMB; MaximumSize=$PageFileSizeMB}

The idea here is that I'm first turning off automatic page file management, and then I am setting the size of the page file manually, to be static and to be equal to the size of my RAM, plus a little extra. If you want full memory dumps in case your server crashes, you need a page file that is the size of your physical RAM plus a few extra MB.
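
To double-check the result afterwards, you can read the same classes back with the CIM cmdlets. Just a quick sketch; Win32_ComputerSystem and Win32_PageFileSetting are the standard classes involved:

Get-CimInstance Win32_ComputerSystem | Select-Object AutomaticManagedPagefile
Get-CimInstance Win32_PageFileSetting | Select-Object Name, InitialSize, MaximumSize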

You could have also done this with wmic.exe without using Powershell, but when given a choice between Powershell and not-Powershell, I pretty much always go Powershell.

Did I mention Powershell?

Generating Certificate Requests With Certreq

Hey there,

SSL/TLS and the certificates it comes with are becoming more ubiquitous every day.  The system is not without its flaws (BEAST, hash collision attacks, etc.), but it's still generally regarded as "pretty good," and it's downright mandatory in any network that needs even a modicum of security.

One major downside is the administrative burden of having to keep track of and renew all those certificates, but Active Directory Certificate Services does a wonderful job of automating a lot of that away.  Many Windows administrators' lives would be a living hell if it weren't for Active Directory-integrated auto-enrollment.

But you don't always have the pleasure of working with an Enterprise CA. Sometimes you need to manually request a certificate from a non-Microsoft certificate authority, or a CA that is kept offline, etc.  Most people immediately start thinking about OpenSSL, which is a fine, multiplatform open-source tool.  But I usually seek out native tools that I already have on my Windows servers before I go download something off the internet that duplicates functionality that already comes with Windows.

Which brings me to certreq.  I use this guy to generate CSRs (certificate requests) when I need to submit one to a CA that isn't part of my AD forest or cannot otherwise be used in an auto-enrollment scenario. First paste something like this into an *.inf file:

;----------------- csr.inf -----------------
[Version]
Signature="$Windows NT$"

[NewRequest]
Subject = "CN=web01.contoso.com, O=Contoso LLC, L=Redmond, S=Washington, C=US" 
KeySpec = 1
KeyLength = 2048
; Can be 1024, 2048, 4096, 8192, or 16384.
; Larger key sizes are more secure, but have
; a greater impact on performance.
Exportable = TRUE
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
;-----------------------------------------------

Then, run the command:

C:\> certreq -new csr.inf web01.req

And certreq will take the settings from the INF file that you created and turn them into a CSR with a .req extension.  The certreq reference and syntax, including all the various parameters that you can include in your INF file, are covered in Microsoft's certreq documentation. It's at this moment that the private key associated with this request is generated and stored on the machine; the private key is not contained in the CSR itself, so you don't have to worry about securely transporting the CSR.
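
If you want to sanity-check the request before sending it anywhere, certutil (also built into Windows) can decode it. Something like this should show you the subject, key size, and requested extensions:

C:\> certutil -dump web01.req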

Now you can submit that CSR to the certificate authority. Once the certificate authority has approved your request, they'll give you back a PEM or a CER file. If your CA gives you a PEM file, just rename it to CER.  The format is the same.  Remember that only the computer that generated the CSR has the private key for this certificate, so the request can only be completed on that computer.  To install the certificate, run:

C:\> certreq -Accept certificate.cer

Now you should see the certificate in the computer's certificate store, and the little key on the icon verifies that you do have the associated private key to go along with it.
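
If you'd rather verify that from Powershell than squint at icons, a quick peek at the machine store works too. A sketch; the Cert: drive and the HasPrivateKey property are standard, and the subject filter below just matches the example CSR from earlier:

Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*web01.contoso.com*" } |
    Select-Object Subject, HasPrivateKey, NotAfter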

So there you have it.  See you next time!

Today's Thoughts on Windows 8.1 (Will Do Server 2012 R2 Next)

Guten Abend!

So thankfully, Microsoft reversed their earlier decision to not release Windows 8.1 and Server 2012 R2 RTM on TechNet or MSDN until October 18th. Both products popped up on TechNet a few days ago. So, I downloaded both and have been playing with them in my lab the past few days. (Which is likely the last good thing I will be able to get from TechNet.  Rest in peace, you final bastion of good will from Microsoft to IT professionals.)

Windows 8.1 has gone onto the following test machine:

  • Intel Core i5-2500k
  • 16GB RAM
  • 256GB Samsung SSD
  • NVidia GTX 670 2GB

Needless to say, it screams. My experience has been that you will typically have a better time with Win 8 if you set it up with your Microsoft Live ID from the beginning, and not a domain account. In fact, it's almost impossible to install Windows 8.1 with anything other than your Microsoft Live ID. (Although you're free to join a domain later, after the install. But good luck installing with a local account.) I would say that this will be a barrier for Windows 8 adoption in the enterprise; however, the actual Win 8.1 Enterprise SKU has not been released yet, so the installer for that edition should be tweaked for easier installation in an AD domain. (And I admittedly have not even tried custom deployable images as you would use in an enterprise environment.)


But in a home setting, here's why I think it's awesome to go ahead and use your Live ID to install Windows 8.1:

  • Your Skydrive sets itself up. It's already there waiting for you. It's integrated into Explorer already, and the coolest part is it initially takes up no room on your hard drive. It all stays online but browsable from within Explorer, and you only pull a file down from the cloud when you open it. But if you have some need to have it available offline? Just right-click the file, folder, or your entire Skydrive and choose "Make available offline" and it will all be downloaded locally. If you used Skydrive before 8.1, you should love this improvement. If you did not use Skydrive before 8.1 then you may find that this added feature only gets in the way. 
  • All your OS settings from Windows 8 are synchronized and brought into 8.1, even if you performed a clean install of 8.1. As soon as the installation finished, I landed on a Windows desktop and my wallpaper is already what I had on my last PC, because the wallpaper was stored on Skydrive. Furthermore, all my settings like 'folder view settings' were automatically sucked into the new installation as well. Ever since Windows 95, every time I would install the OS on a new machine, the first thing I did was go to the folder view settings and uncheck the "Hide File Extensions" option. I always hated that Windows would hide the file extension of files. Well, now that setting stays with me on every Win 8 machine I move to and I no longer have to worry about it.
  • IE11 seems great so far. Very fast, although, that could also be attributed to my beefy hardware. However, I have experienced one compatibility problem so far with IE11. I know that the user agent string for one thing changed dramatically in IE11. But in a pinch, hit F12 for the developer tools and you can emulate any down-level version of IE that you need. No big deal. I'll resist the urge to rant against web developers here.
  • (Though seriously, web developers, if you're listening, you are ruining the web.)
  • Boot to desktop and the ability to show your desktop wallpaper as your Start Screen background are welcome features. The resurrection of the classic Start Button on the taskbar, however, I don't care about one way or the other. I never really missed the old Start Menu from old versions of Windows. I pretty much don't care about the 'Modern,' 'Metro' interface either way, but I'm not bitter about it, because I know it wasn't made for me. It was made for phones and tablets. I have a desktop PC, and as such, I have no need for the Modern UI. End of story. Use what works for you. The OS has a new feature that I'm not really interested in, but who cares; the rest of the underlying OS is still there, and it's still good.
  • The Remote Server Administration Tools for Win 8.1 Preview installs on and works in Win 8.1 RTM, which I am using to set up a full Server 2012 R2 lab environment, which I shall talk about shortly in an upcoming blog post!

Powershell RemoteSigned and AllSigned Policies, Revisited

A while back I talked about Powershell signing policies in an earlier post.  Today I'm revisiting them.

Enabling script signing is generally a good idea for production environments. Things like script signing policies, using Windows Firewall, etc., all add up to provide a more thorough defense-in-depth strategy.

On the other hand, one negative side-effect that I've seen from enforcing Powershell signing policies is that it breaks the Best Practices Analyzers in Windows Server, since the BPAs apparently use scripts that are unsigned. That's strange, since Microsoft is usually very good about signing everything that they release. I assume that they've since fixed that.

I'd consider the biggest obstacle to using only signed Powershell scripts to be one of discipline. But maybe that in itself is a good thing - if only the admins and engineers who are disciplined enough to put their digital signature on something are allowed to run PS scripts in production, perhaps that will cut down on the incidents of wild cowboys running ill-prepared scripts in your environment, or making a quick change on the fly to an important production script, etc. Could you imagine having to re-sign your script every time you changed a single line? That seems like the perfect way to encourage the behavior that scripts are first perfected in a lab, and only brought to production once they're fully baked and signed.
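
For what it's worth, the signing itself is the easy part. A rough sketch, assuming you already have a code signing certificate in your personal store (the script name below is made up):

# Grab a code signing cert from the current user's personal store.
$Cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1

# Sign the script. (Deploy-Widgets.ps1 is a hypothetical example.)
Set-AuthenticodeSignature -FilePath .\Deploy-Widgets.ps1 -Certificate $Cert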

The next obstacle is getting everyone their own code signing certificate. This means you either need to spend some money getting a boat-load of certs from a public CA for all your employees, or you need to maintain your own PKI in your own Active Directory forest(s) for internal-use-only certificates.  This part alone is going to disqualify many organizations. Very rare is the organization that cares about properly signing things in their IT infrastructure.  Even rarer is the organization that actually does it, as opposed to just saying they want everything to be properly signed.  "It's just too much hassle... and now you're asking me to sign all my scripts, too?"

I also want to reinforce this point: Like UAC, Powershell script execution policies are not meant to be relied upon as a strong security measure. Microsoft does not tout them as such. They're meant to prevent you from making mistakes and doing things by accident. Things like UAC and PS script execution policies will keep honest people honest and keep non-administrators from tearing stuff up.  An AllSigned execution policy can also thwart unsophisticated attempts to compromise your security by preventing things such as modifying your PS profile without your knowledge to execute malicious code the next time you launch Powershell. But execution policies are no silver bullet. They are simply one more thin layer in your overall security onion.
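
For reference, checking and changing the policy is quick work. A sketch; note that Set-ExecutionPolicy wants an elevated prompt for the LocalMachine scope, and a Group Policy setting overrides whatever you set locally:

# Show the effective policy at every scope.
Get-ExecutionPolicy -List

# Require signatures on all scripts, machine-wide.
Set-ExecutionPolicy AllSigned -Scope LocalMachine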

So now let's play Devil's advocate. We already know the RemoteSigned policy should be a breeze to bypass just by clearing the Zone.Identifier alternate NTFS data stream (more on that in a moment). How do we bypass an AllSigned policy?

PS C:\> .\script.ps1
.\script.ps1 : File C:\script.ps1 cannot be loaded. The file C:\script.ps1 is not digitally signed. The script will not execute on the system.

Script won't execute?  Do your administrator's script execution policies get you down?  No problem:

  • Open Notepad.
  • Paste the following line into Notepad:
Powershell.exe -NoProfile -Command "Powershell.exe -NoProfile -EncodedCommand ([Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes((Get-Content %1 | %% {$_} | Out-String))))"
  • Save the file as bypass.bat.
  • Run your script by passing it as a parameter to bypass.bat.

PS C:\> .\bypass.bat .\script.ps1

... And congratulations, you just bypassed Powershell's execution policy as a non-elevated user.
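
The RemoteSigned case from earlier is even less work. A sketch, assuming Powershell 3.0 or later for Unblock-File and the -Stream parameter (and when the policy isn't being enforced through Group Policy, a plain -ExecutionPolicy Bypass switch on powershell.exe does the trick too):

# Strip the Zone.Identifier stream that marks a file as downloaded from the internet.
Unblock-File -Path .\script.ps1

# The same thing, done by hand.
Remove-Item -Path .\script.ps1 -Stream Zone.Identifier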

So in conclusion, even after showing you how easy it is to bypass the AllSigned execution policy, I still recommend that good execution policies be enforced in production environments. Powershell execution policies are not meant to foil hackers.

  1. They're meant to be a safeguard against well-intentioned admins accidentally running scripts that cause unintended consequences.
  2. They verify the authenticity of a script. We know who wrote it and we know that it has not been altered since they signed it.
  3. They encourage good behavior by making it difficult for admins and engineers to lackadaisically run scripts that could damage a sensitive environment.