Friday 4 November 2016

Keep Calm! Deploy to Production!

Welcome to the latest JenkinsHeaven post!

This week I completed extending our deployment capability so that the web app can be deployed from our Jenkins master (running in the test domain) directly to the Production Support environment (running in the production domain). This demonstrates that the same set of mechanisms will work for the Production environment (also in the production domain).

The domains have made things tricky.

The trick was:

  • Build on the master and restore all NuGet packages (from both nuget.org and our internal NuGet server)
  • Use the Archive for Clone Workspace SCM post-build step to archive the entire workspace (**/*.*). See the Clone Workspace plugin
  • On success, kick off a downstream job tied to the slave (running on the Production Support IIS box)
  • In the first step of the downstream job (running on the slave), select "Clone Workspace" from the list of possible SCMs and select the parent project
  • I have a step that automates backing up the web application files, just in case the deployment fails for some reason (a sketch follows below)
  • Deploy with MSBuild as before!
  • Success!
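For the backup step, here is a minimal PowerShell sketch of the idea; the site path and backup root are hypothetical placeholders, and I assume a simple timestamped copy with robocopy:

# Back up the current site content before deploying, in case we need to roll back.
$siteRoot   = 'C:\inetpub\wwwroot\MyWebApp'   # assumed IIS content directory
$backupRoot = 'D:\Backups\MyWebApp'
$target     = Join-Path $backupRoot (Get-Date -Format 'yyyyMMdd_HHmmss')

# /E copies all subdirectories, including empty ones.
robocopy $siteRoot $target /E

# robocopy exit codes 0-7 mean success; fail the build step on anything higher
# so the deploy doesn't proceed without a good backup.
if ($LASTEXITCODE -gt 7) { exit 1 }
exit 0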

Till next time.

Saturday 20 August 2016

Another Milestone

Welcome to the latest JenkinsHeaven post!

A big thank you from us here at Full Circle Solutions. JenkinsHeaven recently passed 200,000 unique page views.

Till next time.

Tuesday 10 May 2016

Parameterised Builds rock!

Welcome to the latest JenkinsHeaven post!

We have a lot to cover in this post. I decided against splitting this material into multiple posts as it logically belongs together, and I want the reading experience to be as simple as possible.

It may surprise some of my readers, but until last week I had never used parameterised builds.

Well I'm here to let you know that they are awesome and will rock your world, baby!

Today's post is a case of needs must.

Our application at work is growing along a number of axes:

  • To deploy the full solution means deploying two independent systems. In a future release this is expected to grow to three.
  • We are also about to deploy the current release to production, with a roadmap of at least three future releases.
  • There are three pre-production test environments that Release Candidates can be deployed to before being deployed to production.

All this spells an explosion in the number of jobs, making them difficult to manage. This won't do. Parameterised builds are the solution to this problem.

Our updated taxonomy has:

  1. Jobs per independent system
    1. Polling the repository and building every commit
    2. Deploying a specified labelled version (master branch to the first test environment)
  2. Jobs per independent system & release branch
    1. Deploying a specified labelled version to the specified pre-production environment
    2. Deploying a specified labelled version to production

That's it. We now support multiple systems, environments and releases with fewer total jobs than before. Boom! With the CloudBees Folders plugin... Everything. Becomes. Clear! (Tip: if you have an existing job that has builds and is holding a workspace, you might find that you need to wipe out the workspace after moving it into a folder to get it to build correctly again. I had to do this with the TFS plugin.)

Note that we have decided the master branch will always represent the latest and greatest; we spawn release branches at the time we first deploy to the first of two UAT environments. This allows the team to keep working on the next release while committing UAT bug fixes to the release branch. The approach borrows ideas from the GitFlow approach to branching (BTW: we are using TFS for source control only) and seems to be working well for us.

In that context, let's look at each of these job groups in more detail.

Group 1A - Build every commit

One job per branch (i.e. the master branch and each release branch) that polls its branch, runs the unit test suite and static analysis (StyleCop and FxCop). If all the unit tests pass and static analysis is within tolerances, it labels the repository ({JOBNAME}_{BUILDNUMBER}).
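If you apply the label from a build step rather than through the TFS plugin itself, a sketch with tf.exe might look like this (the server path is a hypothetical placeholder; JOB_NAME and BUILD_NUMBER come from Jenkins' environment):

# Label the sources that were just built and tested, e.g. MyJob_42.
$label = '{0}_{1}' -f $env:JOB_NAME, $env:BUILD_NUMBER
& tf.exe label $label '$/MyTeamProject/Main' /recursive /version:W /noprompt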

Post Build Notification actions for this group are:

  • XUnit Unit Test publication
  • Code coverage with OpenCover and ReportGenerator
  • Email notification sent to the development and test team which includes unit test results and console log with the Extended Email Plugin. Emails are sent on Failure-Any, Unstable-TestFailures and Success

Group 1B - Deploy master branch

This is the first of our parameterised jobs. It uses two awesome plugins that work together: the excellent Active Choices plugin and the Scriptler plugin (Scriptler is Groovy! Sorry, I couldn't resist.). Together they enable you to deploy a labelled version of a branch to an environment with two Groovy scripts. We'll look at those scripts later.

The purpose of this group of jobs is to deploy a labelled version (remember a labelled version is one that has passed the unit tests and static analysis in a 1A job) of the master branch to the test team's environment (we simply call it TEST). This is the first test environment after development and is where story validation occurs.

These jobs give development and test team members the ability to deploy to the TEST environment and no further.

Post Build Notification actions for this group are:

  • Email notification same as Group 1A

Group 2A - Deploy Release branches

The remaining two pre-production environments are End to End (E2E) and User Acceptance Test (UAT).

Each job in this group is essentially a copy of a Group 1B job, except that:

  • it operates on a release branch
  • the Groovy scripts are parameterised to pick up the labels (created by the corresponding 1A job) on the release branch
  • the target environments are E2E and UAT.

In the same way that test team members can "pull" versions through to the TEST environment they are testing in, the E2E and UAT Testers can "pull" versions through to the E2E and UAT environments they are testing in. It is the responsibility of the development team to commit fixes to the appropriate branch.

Post Build Notification actions for this group are:

  • Email notification same as Group 1A

Group 2B - Production Deployments

Each job in this group is essentially a copy of Group 2A except that the only available target environment is Production. I did this so that only a small group of people can deploy to production and a wider group cannot accidentally do so. At a later date I plan to consolidate Groups 2A and 2B by incorporating this (Thanks Bruno!).

Post Build Notification actions for this group are:

  • Email notification same as Group 1A

Now, those two scripts:

GetSuccessfulBuilds

// Scriptler script: return the labels of all successful builds of the given job.
// FULL_JOB_NAME and JOB_NAME are Scriptler parameters, injected as plain variables
// (the "$" prefix is not valid Groovy outside a string).
def builds = []

def job = jenkins.model.Jenkins.instance.getItemByFullName(FULL_JOB_NAME)

job.builds.each { build ->
    // displayName is "#<buildNumber>"; [1..-1] strips the leading '#'.
    // The leading "L" makes the value a TFS label version spec.
    def label = "L" + JOB_NAME + "_" + build.displayName[1..-1]
    if (build.result == hudson.model.Result.SUCCESS) {
        builds.add(label)
    }
}

return builds

The $JOB_NAME parameter is the Jenkins job name (should be self-explanatory) and $FULL_JOB_NAME includes the CloudBees folder name, like so: <FOLDER_NAME>/<JOB_NAME>. This script needs to populate a parameter called VERSION_SPEC; the leading "L" on each value is the TFS version-spec syntax for a label, which is how the TFS plugin knows to check out by label.

GetTargetEnvironments

// Scriptler script: return the environments this job may deploy to.
// IS_PROD_BUILD is a Scriptler parameter, injected as a plain variable.
if (IS_PROD_BUILD.toBoolean()) {
    return ["PROD"]
} else {
    return ["E2E", "UAT", "PILOT"]
}

I pass the text 'true' or 'false' (without the quotes) for the boolean parameter $IS_PROD_BUILD. This script sets a parameter simply called ENV.

Thanks for reading and hopefully you have found this helpful. As always, if you have any questions, feedback or comments leave them in the comments section below.

Till next time...

Thursday 25 February 2016

Improving Deployments to Test Environments

Welcome to the latest JenkinsHeaven post!

This is a follow-up to the last post about giving team members 1-button deployments to test environments.

Generally speaking, the deployments have been working extremely well. What was previously a 30-minute manual (and therefore error-prone) deployment that the development team had to do (which in and of itself reduced iteration capacity) has been reduced to a 2-minute automated process that just works.

Since the last post there have been some major improvements and one minor (incremental) improvement. I'll talk about the incremental one first.

We decided to move the step that runs any database updates checked in since the last build to the front of the job. We found that it is the step most likely to fail, and we therefore don't want to deploy the web application if it does. This simple ordering change makes the whole build more transactional in nature.

Is it ok to do a deployment now?

So we are running static analysis, unit tests and code coverage as part of the job that runs on every commit. Everyone thought that was great. Also, we could deploy to any nominated environment. Everyone thought that was awesome.

One small wrinkle: the unit tests were running as part of the run-on-every-commit job but were not a gatekeeper to the deployment jobs. As a result the Test team had to keep asking: "Is it ok to deploy now?"

This was an issue and needed to be resolved.

Come with me on the journey of how I spent the last two days (thankfully at the start of the iteration) solving this issue before we got back into coding and required a deployment "service". Take heart: it is possible with the right mix of plugins.

Firstly some key points about the environment we are operating in:

  1. Using Web Deploy for deployments means that all the knowledge of the remote target IIS is kept with the solution in publish profiles.
  2. The Test team are required to be able to press a button to execute a deployment.
  3. Due to item 1, an artifact repository and the Promoted Builds plugin are not much help, because Web Deploy wants the workspace that has been tested as input, not the compiled binaries.

The Web Deploy mechanism (which I execute through the /deploy parameter to msbuild) works so well, I wanted to keep this in place unchanged. The problem was therefore reduced to: "How do I hand a successfully unit-tested workspace to a deploy job?"

Job 1 Overview

This is the job that I granted the DEV and TEST team users read and build permissions on. Remember to grant yourself all permissions. Its main purpose is to act as gatekeeper for job 2, which actually does the deploy. Job 1 does this by running the unit tests.

Job 1 Configuration

Block build if certain jobs are running: On (Build Blocker Plugin)

As this job runs the XUnit tests, if it is kicked off while the main build job is running (due to a developer commit) it will fail to execute the XUnit tests and then fail due to an empty results file. So we block this job until the main job finishes. Deployers just have to be a little more patient and wait the extra (up to) 10 minutes for the main job to complete.

Discard old builds

We're going to be archiving the workspace. No matter how much disk space you have, you're going to want to limit how much you keep. I currently have this set to 10, although I am thinking of reducing it to 5.

Permission to copy artifacts (Copy Artifacts Plugin)

Specify the name of job 2 as a project allowed to copy artifacts.

Check out the source code from TFS

Use Update: off. This is really important for our Angular TypeScript application. If one of the developers moves or otherwise alters a .ts file in the solution, you really don't want any old .js and .js.map files hanging around on the filesystem. Get a fresh copy of the entire workspace every time.

Clean the solution

Command line arguments: /t:Clean /p:Configuration=Release;Username=<domain\userId>;Password=<password>

The username and password need to belong to a domain account that has access to the internet. In this corporate environment, that means the build can restore NuGet packages.

Rebuild the solution

Command line parameters: /t:Rebuild /p:Configuration=Release;Username=<domain\userId>;Password=<password>
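If you're wondering what these two MSBuild build steps amount to on the command line, here is a rough sketch; the solution name and credentials are placeholder assumptions:

# Equivalent command lines for the Clean and Rebuild steps.
& msbuild.exe MySolution.sln /t:Clean   '/p:Configuration=Release;Username=MYDOMAIN\buildUser;Password=secret'
& msbuild.exe MySolution.sln /t:Rebuild '/p:Configuration=Release;Username=MYDOMAIN\buildUser;Password=secret'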

Execute the (XUnit) unit tests (XUnit Plugin)

Windows Batch Command: JenkinsScripts\RunUnitTests.bat
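A minimal sketch of what such a runner script might do, assuming the xUnit console runner and the report pattern configured in the next step (runner path and test assembly are placeholders; expressed here in PowerShell rather than batch):

# Run the xUnit suite and emit an NUnit-format XML report for the xUnit plugin.
New-Item -ItemType Directory -Force -Path TestReports | Out-Null
& packages\xunit.runner.console.2.1.0\tools\xunit.console.exe `
    src\MyApp.Tests\bin\Release\MyApp.Tests.dll `
    -nunit TestReports\xunit-MyApp-output-as-nunit.xml

# Propagate the runner's exit code so Jenkins sees test failures.
exit $LASTEXITCODE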

Publish XUnit test result report

Add NUnit, Version N/A (Default), and set the Pattern to TestReports\xunit-*output-as-nunit.xml. Leave the checkbox as is. For both Failed Tests and Skipped Tests, set both Total and New to 0.

Archive the artifacts

Files to archive: **/*.* (Remember: you want to archive the entire workspace)

Excludes (under Advanced): **/bin/**, **/obj/** (We don't want the binaries)

Archive artifacts only if build is successful: On

Execute other projects

Once we have got to this point we are good to go. Execute Job 2 by specifying its name. I call mine DoDeploy__ (I actually have 3 flavours of deploy: DoDeploy_TEST_RELEASE, DoDeploy_TEST_DEBUG, DoDeploy_PILOT_RELEASE).

Job 2 Overview

This is where the rubber hits the road, where we have a green light to deploy.

Job 2 Configuration

No Source Code checkout

We want to get the archived workspace that Job 1 has deemed ok to deploy. It's the workspace that just got unit tested successfully.

Delete workspace before build starts (Workspace Cleanup Plugin): On

To be sure, to be sure

Copy artifacts from another project

Project Name: Name of Job 1. I call mine: DeployTPS__ because this is what the user wants it to do.

Which build: Upstream job that triggered this build

Artifacts to copy: blank (We want everything baby)

Artifacts not to copy: blank

Target directory: blank (workspace is default)

Parameter filters: blank

Ready to deploy

At this point we have secured for ourselves a workspace that has been successfully unit tested and is ready for deployment. QED.

It would make life easier if Jenkins had a plugin that allowed me to more easily archive the workspace, because that's what Web Deploy prefers. It would be easier to do all these steps in one job rather than having to split them over two.

Some more observations

If you find yourself with the same environment limitations, the above setup does work. However, it is not perfect. Be aware of the following things you can and can't do.

You can:

  • Deploy the latest successfully unit tested code to a given environment.

You can't:

  • Deploy artifacts to an artifact repository (because you don't have one), and therefore deploy particular versions of your application to a given environment. This is less than ideal and is where I'll be working next to enhance our capability.

Thanks for reading and hopefully you have found this helpful. As always, if you have any questions, feedback or comments leave them in the comments section below. Let me know if there is a better way to skin this cat.

Till next time...

Wednesday 11 November 2015

Automating Web deployments to Test Environments

Welcome to the latest JenkinsHeaven post!

Today I'd like to talk about how I set up auto-deployment of the web application, as it is an important building block in the continuous delivery pipeline I have implemented for our internal corporate test environment.

We added some nice features to the environment that are worth telling you about. Getting it running smoothly required dealing with some gotchas.

In this environment the target IIS and Jenkins are on the same machine. This is only because we were having firewall issues in a previous environment. The below approach should still work across machines as long as network permissions are in place.

Let's start by outlining what some of these nice features are and then we'll get in to the detail:

  • Everyone on the development team, including the testers, has the ability to build a Jenkins job that deploys the web application to the Test environment
  • The application IIS content directory is completely expunged every time the deployment job is executed
  • Redeploying with WebPublish and MSBuild
  • The minor assembly version number (Properties/AssemblyInfo.cs) is incremented and automatically committed back to TFS source control every time the deployment job is executed

I'll be referring to scripts that I have open-sourced here. This is where I will continue to add new scripts and make improvements.

1. Everyone can deploy

I used Jenkins' own database to create user accounts and simple passwords for each team member, and then used the Project-based Matrix Authorization Strategy (part of the Matrix Authorization Strategy plugin) to control access (Overall Read permission at the global level).

I then created a job called "DeployToTestEnv" and gave users Read and Build permission in the job config.

While this does mean that the testers still need to ask the developers whether the code is stable enough to deploy each and every time they think about deploying, this minor pestering is far outweighed by the benefit of being able to DEPLOY AT WILL!

In a perfect world, the DeployToTestEnv job would only ever run after the main job that runs all the tests on each commit had run successfully, but I was overruled and told the testers must be able to "press a button to deploy".

2. Expunging everything on every deploy

Now we are starting to get into what the job actually does. To ensure that you delete ALL the files and folders recursively from the target IIS, I use the IISController.bat script to stop the website and also stop the application pool. I found that if I didn't stop the application pool, some files and folders remained locked. As soon as I stopped both, everything could be removed with DeleteSiteFiles.bat.
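A minimal PowerShell sketch of the same stop-then-expunge sequence (the original uses batch scripts; site, pool and path names here are hypothetical):

# Stop the site and its application pool so no file handles remain locked,
# then remove the content directory recursively.
Import-Module WebAdministration
Stop-Website    -Name 'MyWebApp'
Stop-WebAppPool -Name 'MyWebAppPool'
Remove-Item 'C:\inetpub\wwwroot\MyWebApp\*' -Recurse -Force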

3. Redeploying with WebPublish and MSBuild

Redeploying is easy with the Deploy.bat script. I pass in 3 parameters: the solution file name (path relative to the workspace root), the configuration (usually Release) and the publish profile name (previously set up and connection-tested with a user that has management rights on the target IIS). The web.config file should probably update the connection string (at a minimum). Have a read of this and this.
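For reference, a sketch of what a Deploy.bat-style step might boil down to, assuming Web Deploy is driven through MSBuild's publish-profile support (solution and profile names are placeholders):

# Build and publish in one go; the publish profile holds the Web Deploy settings.
& msbuild.exe MySolution.sln `
    /p:Configuration=Release `
    /p:DeployOnBuild=true `
    /p:PublishProfile=TestEnv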

In the past I have stopped here and felt pretty good about myself. In our environment, the testers are sitting in an office away from the rest of the team. It's only 5 paces away, but that glass wall and door is a barrier. As a result, the testers were raising bugs against unfinished stories.

4. Auto-increment of minor version number

To combat this, and to know exactly what version of the application the testers are testing and the dev team is developing, we instituted automatic incrementing of the minor version number in the Properties/AssemblyInfo.cs file. This was achieved in two parts. Firstly, I downloaded and installed this and added it as a project in the solution. Secondly, I run the IncrementMinorVersion.ps1 script as a PowerShell build step in Jenkins. This script checks out the file, increments the minor version number and reliably checks the change back into TFS source control. PowerShell is cool! The effect is that the minor version number increments by 1 AFTER each deployment to test, so that the development team is immediately working on the next release.
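The real script lives in the repository linked above; a minimal sketch of the checkout/increment/checkin idea looks something like this (the AssemblyInfo path is from the post, everything else is an assumption):

# Check out AssemblyInfo.cs, bump the minor version, check the change back in.
$file = 'Properties\AssemblyInfo.cs'
& tf.exe checkout $file

$content = Get-Content $file -Raw
# Matches e.g. [assembly: AssemblyVersion("1.4.0.0")] and increments the second number.
if ($content -match 'AssemblyVersion\("(\d+)\.(\d+)\.(\d+)\.(\d+)"\)') {
    $new = 'AssemblyVersion("{0}.{1}.{2}.{3}")' -f $Matches[1],
        ([int]$Matches[2] + 1), $Matches[3], $Matches[4]
    $content = $content -replace 'AssemblyVersion\("\d+\.\d+\.\d+\.\d+"\)', $new
    Set-Content $file $content
}

& tf.exe checkin $file /comment:"Auto-increment minor version" /noprompt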

Thanks for reading and hopefully you have found this helpful. As always, if you have any questions, feedback or comments leave them in the comments section below.

Till next time...

Friday 25 September 2015

Running Jasmine JS Tests on Jenkins with PhantomJS

Welcome to the latest JenkinsHeaven post!
I'm extremely pleased to be able to write this post for you. Running Javascript tests as part of your Jenkins build with Jasmine 2.2.0 is actually, and refreshingly, pretty easy.
  1. Download and install PhantomJS (currently v2.0) from here.
  2. Copy the contents of the zip file to C:\Program Files (x86)
  3. Create an environment variable called PHANTOMJS_HOME with value C:\Program Files (x86)\phantomjs-2.0.0-windows.
  4. Add %PHANTOMJS_HOME%\bin; to the beginning of the Path environment variable. (Steps 3 and 4 can be scripted; see the sketch after this list.)
  5. At this point we want to test that PhantomJS is installed properly. Open a Command Prompt and execute phantomjs -h. You should see the PhantomJS help output. If you saw that, we're good to continue.
  6. Create a new Freestyle job in Jenkins, setting the Git Repository URL to https://github.com/detro/phantomjs-jasminexml-example
  7. Add a Windows batch command build step with the following two commands:

     del jasmineTestResults /s /q
     phantomjs test/phantomjs_jasminexml_runner.js test/test_runner.html jasmineTestResults/

     No need to include exit %%ERRORLEVEL%% as phantomjs_jasminexml_runner.js takes care of this for you internally.

  8. Add a Publish xUnit test result report post-build action
  9. In this post-build action click the Add button and select JUnit
  10. Set the JUnit Pattern to jasmineTestResults/*.xml
  11. In the Failed Tests section set all 4 Build Status Thresholds to 1
  12. Save the build configuration
  13. Run the build
  14. Pat yourself on the back, because 7 tests ran!
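As flagged in step 4, steps 3 and 4 can be scripted; a sketch, run from an elevated PowerShell prompt (the install path is the one from step 3):

# Set PHANTOMJS_HOME machine-wide and prepend its bin folder to Path.
$phantomHome = 'C:\Program Files (x86)\phantomjs-2.0.0-windows'
[Environment]::SetEnvironmentVariable('PHANTOMJS_HOME', $phantomHome, 'Machine')

# Written expanded, because values set this way are stored as plain (REG_SZ)
# strings and %PHANTOMJS_HOME% would not be expanded inside Path.
$path = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$phantomHome\bin;$path", 'Machine')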
Hopefully you'll see something like this when you drill into your job. :)



I'll leave it as an exercise for you to take a look at the example code to see how it is structured. Should be easy for you to transpose to your own project.

The key files to look at when considering how to transpose to your own project:

  • src/tv.js
  • test/test_runner.html
  • test/test_spec.js

test/phantomjs_jasminexml_runner.js does not have to be altered at all.

Till next time...

Wednesday 9 September 2015

Jenkins and Powershell Remoting on Windows

Welcome to the latest JenkinsHeaven post!

Requirement

Auto deployment of a database as part of the build to SQL Server on a remote machine running Windows 2012 R2.

As the application database has views and (at the time of writing) DBMigrations does not support views, it was decided that we would use PowerShell to execute SQL scripts against the SQL Server on the remote machine.

So Jenkins needs to execute PowerShell, and it needs to execute it on a remote machine.

Let's break it down and build it up.

Install the Powershell plugin from the update center.

First we need to prove connectivity. Let's just squirt a file from Jenkins to the remote machine using powershell.

Follow the instructions here to test powershell is working locally on the Jenkins machine.

In Part 2 here, we don't need to do all of it. Let me elaborate.

  1. The "Using SSL on the Jenkins Web Interface" section is optional. I didn't have to do this to get connectivity working.
  2. Set up TrustedHosts as per the "Configure the Jenkins Server for Remoting and Script Execution" section. I executed Set-Item WSMan:\localhost\Client\TrustedHosts -Value myserver in the PowerShell console on the remote machine, specifying the IP address of the Jenkins box for myserver
  3. Additionally, make sure you change line 10 of the PowerShell script to use your username (don't forget the domain prefix if you need it) and specify the correct name of the credential on line 6, which you would have set up in the Global Passwords step earlier. NB: Global Passwords are now under Jenkins > Configure System

Next, we need to open the firewall on the remote machine. Search for "Windows Firewall" and open Windows Firewall with Advanced Security.

For each of the following rules:

  • Windows Remote Management - Compatibility Mode (HTTP-In) - Domain
  • Windows Remote Management - Compatibility Mode (HTTP-In) - Private, Public
  • Windows Remote Management (HTTP-In) - Domain
  • Windows Remote Management (HTTP-In) - Private, Public

...do the following:

  1. Enable the rule; and
  2. Right click, Go to Properties > Scope and select "Any IP Address" option in both the Local IP Address and Remote IP Address sections.
Finally, open the PowerShell command prompt on the remote machine and, as per this, execute Enable-PSRemoting -Force.
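If you'd rather script the firewall changes, here is a sketch using the NetSecurity cmdlets available on Windows 2012 R2 (the display-group names are assumptions; verify them with Get-NetFirewallRule on your box):

# Enable the WinRM rules and widen their scope to any local/remote IP address.
foreach ($group in 'Windows Remote Management',
                   'Windows Remote Management (Compatibility)') {
    Get-NetFirewallRule -DisplayGroup $group | Enable-NetFirewallRule
    Get-NetFirewallRule -DisplayGroup $group |
        Set-NetFirewallRule -LocalAddress Any -RemoteAddress Any
}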

You should now be able to execute the "Create Text File Remotely" job on Jenkins and see the output on the remote machine.
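That job, in essence, is a single PowerShell build step along these lines (server name, account and file path are hypothetical; in practice the password comes from the Global Passwords configuration rather than being inlined):

# Create a text file on the remote machine over WinRM to prove connectivity.
$securePass = ConvertTo-SecureString 'p@ssw0rd' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('MYDOMAIN\jenkins', $securePass)

Invoke-Command -ComputerName 'remote-sql-box' -Credential $cred -ScriptBlock {
    Set-Content -Path 'C:\Temp\hello-from-jenkins.txt' -Value "Jenkins was here: $(Get-Date)"
}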

Till next time...