Another great blog from one of our SQL Server Consultants…
Customer demand being what it is, automation was high on my agenda as a SQL Server DBA, and so my task was to automate SQL Server 2012 builds. Hardly criminal, and a quick spin through Google shows some people have been very busy out there with PowerShell! Could I not just download and use some of this wonderful creativity? Well, yes, totally. However, infinite wisdom prevailed: I thought it would be cool to write my own build script that not only does the build, but also dynamically generates its own configuration file, collects the bootstrap and summary logs and the configuration file (for DR rebuild), and performs an inventory update. All pretty cool (well, I did want to learn PowerShell and find out what the rave was about).
If you are of the Microsoft persuasion, you soon realise PowerShell is infinitely better than DOS, and it is very welcoming in that a lot of the commands you know and love are still there; it is instantly something to start raving about in that respect. Some UNIX commands are also thrown in for good measure and you can of course define aliases – great! The frustration came when I decided this would only be cool if I could do the installs remotely.
OK, so SQL Server installs over the network are very straightforward: \\..\setup.exe /CONFIGURATIONFILE=allthingsgood.ini. My initial thought was just to wrap this in a stored procedure – makes sense, as I could then run it under a job schedule. I could use something like PSExec.exe to enter a session on a remote host and run the setup from a network share. I had to think again: unfortunately this does not work; the mapping to the network share was only obtainable if I re-entered my username and password. OK, I thought, this must be because I am using T-SQL to shell out to DOS and then run PSExec (as it turns out, this was the case!). So my next move was to get the sproc to use PowerShell to run a script block in a remote session. Nope, same issue. I could get it to work if I passed in my username and password directly (for example, PSExec to the remote host and then map a drive with net use z: \\server\share /USER:), otherwise I got lots of weird 'access denied' and 'not found' errors. Obviously, hard-coding credentials into my script was not just undesirable, it's an outright no-no (straight off the back of a security project, too). So what was the problem? It all boils down to a restriction imposed by the authentication protocol.
It's commonly known as the 'second hop' or 'double hop' issue – much more common than I thought, too. It happens anywhere integrated security is in use: once you have authenticated to your first resource and then try to access another network resource from there, you are blocked – it's a security measure and, thinking about it, quite right too. IIS websites, DOS, PowerShell, SQL Server … it affects them all. To go any further once in a remote session, you need to supply your security credentials again. For example, PSExec to a remote host followed by mapping a network drive works fine interactively; but shell out to DOS from T-SQL (first hop) and then run PSExec to a remote host, or even just map a network drive (second hop), and it just won't work until you re-present credentials that have access.
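The double hop is easy to reproduce in PowerShell. A minimal sketch (SERVER1, FILESERVER and the SQLMedia share are placeholder names, not from the original script):

```powershell
# First hop: an interactive remote session to SERVER1 authenticates fine.
Invoke-Command -ComputerName SERVER1 -ScriptBlock {
    # Second hop: from inside that session, reaching a UNC path on a
    # different server fails, because the default authentication will
    # not delegate your credentials onward from SERVER1.
    Test-Path \\FILESERVER\SQLMedia\setup.exe
}
```

Under default (Kerberos/Negotiate) authentication the inner Test-Path comes back false or raises an access error, even though the same path works when you run it directly on SERVER1 with your own logon.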
To make second hops in PowerShell, you need to enable the CredSSP authentication protocol. Run the following on the client machine – the one from which you initiate the remote session (note that * delegates to any computer; in production you would want to scope this to specific hosts):
Enable-WSManCredSSP –Role client –DelegateComputer *
You further have to run the following on the remote machine that will receive the delegated credentials – the build target that actually runs setup.exe:
Enable-WSManCredSSP –Role server
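With CredSSP enabled on both ends, the remote install can then be launched with explicit delegation. A sketch of the pattern (host, share and file names are placeholders):

```powershell
# Prompt once for credentials; CredSSP delegates them to the remote host,
# so the second hop to the file share now succeeds.
$cred = Get-Credential

Invoke-Command -ComputerName SERVER1 -Authentication Credssp -Credential $cred -ScriptBlock {
    # Runs on SERVER1; the delegated credentials authenticate against
    # the file server hosting the install media.
    & \\FILESERVER\SQLMedia\setup.exe /Q /CONFIGURATIONFILE="\\FILESERVER\SQLMedia\allthingsgood.ini"
}
```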
There’s a great article on this by Don Jones in TechNet Magazine: http://technet.microsoft.com/en-us/magazine/jj853299.aspx
The thinking now is to have a staging OU in AD with a GPO, as Don suggests in his article, so that CredSSP is enabled as part of the pre-build phase.
But that's not the only stumbling block. My script was using Invoke-Command to call setup.exe, which appeared to work fine – the install launched – only to fail in random places with OutOfMemoryException errors. Great, what now? I soon discovered that remote shells (and so my use of the Invoke-Command cmdlet) are limited to 150 MB of shell memory by default – what? Where's the power? Soon remedied – back to the GPO to ramp this up to 2 GB with:
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 2048
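A sketch of checking and applying the quota by hand on a build target (before the GPO approach is in place):

```powershell
# Inspect the current per-shell memory quota, in MB.
Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB

# Raise it to 2 GB, then restart WinRM so new remote shells pick it up.
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 2048
Restart-Service WinRM
```

Note that on some builds the PowerShell plugin carries its own quota (WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxMemoryPerShellMB), which can override the shell-level value; worth checking if the errors persist.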
And so the build script finally came together: pass in just the target hostname as a mandatory parameter, call some T-SQL to build a configuration file, then pipe back to PowerShell to run the setup with said file; it then calls another sproc to run some post-install instance configuration, collects the logs and updates our inventory database as intended. Optional parameters cover cluster builds (nodes) and SharePoint builds (different collation), as well as different environments (dev is the default; other environments change the instance name to reflect the environment). The script also prompts for authentication credentials at the outset, which are used to maintain the session throughout – also giving us an audit trail of which engineer did which build and when.
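The script's interface as described might look like the following – a sketch only; the parameter names and environment values are illustrative, not the actual script:

```powershell
param (
    # Target host for the build: the only mandatory parameter.
    [Parameter(Mandatory = $true)]
    [string]$TargetHost,

    # Optional: additional node names for a clustered build.
    [string[]]$ClusterNodes,

    # Optional: SharePoint build, which uses a different collation.
    [switch]$SharePoint,

    # Environment; non-dev values are reflected in the instance name.
    [ValidateSet('DEV','TEST','UAT','PROD')]
    [string]$Environment = 'DEV'
)

# Prompt for credentials once, up front; the same credential object is
# used for every remote call, which keeps the session consistent and
# records which engineer performed the build.
$cred = Get-Credential
```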