Cypress Edge – Now available for Windows

Supported versions

  • Microsoft Edge for Windows 10 (Canary Build)
  • Microsoft Edge for Windows 10 (Dev Build)
  • Microsoft Edge for Windows 10 (Beta Build)

Instructions for Windows

  1. Download Microsoft Edge version of choice from
  2. Make a new directory
  3. Run npm init
  4. Run npm install @you54f/cypress --save
  5. Run node_modules/.bin/cypress open --browser edgeDev to open in interactive mode, and set up Cypress's example project
  6. Run node_modules/.bin/cypress run --browser edgeDev or node_modules/.bin/cypress run --browser edgeCanary to run in command-line mode.
  7. Rejoice & please pass back some appreciation with a star on the repository! Thanks 🙂

Dynamically generate data in Cypress from CSV/XLSX

A quick walkthrough on how to use data from Excel spreadsheets or CSV files to dynamically generate multiple Cypress tests.

We are going to use a two-column table with username & password for our example, but in reality this could be any data. We have the following table in CSV & XLSX format.

username password
User1 Password1
User2 Password2
User3 Password3
User4 Password4
User5 Password5
User6 Password6
User7 Password7
User8 Password8
User9 Password9
User10 Password10

And we are going to log in to the following page

First we need to convert our XLSX file to JSON with

import { writeFileSync } from "fs";
import * as XLSX from "xlsx";

try {
  const workBook = XLSX.readFile("./testData/testData.xlsx");
  // Read the sheet named "testData" and convert its rows to JSON objects
  const jsonData = XLSX.utils.sheet_to_json(workBook.Sheets.testData);
  writeFileSync(
    "./cypress/fixtures/testData.json",
    JSON.stringify(jsonData, null, 4),
    "utf-8"
  );
} catch (e) {
  throw Error(e);
}
or CSV file to JSON with

import { readFileSync, writeFileSync } from "fs";
import { parse } from "papaparse";

try {
  const csvFile = readFileSync("./testData/testData.csv", "utf8");
  parse(csvFile, {
    header: true, // use the first row as keys for each row object
    complete: csvData =>
      writeFileSync(
        "./cypress/fixtures/testDataFromCSV.json",
        JSON.stringify(csvData.data, null, 4),
        "utf-8"
      )
  });
} catch (e) {
  throw Error(e);
}
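Either conversion produces a JSON array of row objects in cypress/fixtures; for the table above, the output starts like this (a sketch of the shape, not verbatim tool output):

```json
[
    { "username": "User1", "password": "Password1" },
    { "username": "User2", "password": "Password2" }
]
```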

In our cypress test file, we are going to

  1. Import our generated JSON file into testData
  2. Loop over each testDataRow inside the describe block, and set the data object with our username & password
  3. Set up a mocha context with a dynamically generated title, unique for each data row
  4. Write a single test inside the it block using our data attributes; this will be executed as 10 separate tests
import { login } from "../support/pageObjects/";
const testData = require("../fixtures/testData.json");

describe("Dynamically Generated Tests", () => {
  testData.forEach((testDataRow: any) => {
    const data = {
      username: testDataRow.username,
      password: testDataRow.password
    };
    context(`Generating a test for ${data.username}`, () => {
      it("should fail to login for the specified details", () => {
        // perform the login with data.username & data.password via the page object,
        // then assert on the expected error message
        login.errorMsg.contains("Your username is invalid!");
      });
    });
  });
});

Voila – Dynamically generated tests from Excel or CSV files! Enjoy

You can extend this further by

  • Manipulating the data in the test script prior to using it in your test, such as shifting a date of birth by an offset
  • Having different outcomes in your test, or running different assertions, based on a parameter in your test data file.
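As a sketch of the first idea: a small helper that shifts a date-of-birth column by an offset before the tests consume it. The shiftDateOfBirth helper and the dateOfBirth field are hypothetical; our example data only has username & password.

```javascript
// Hypothetical helper: returns a copy of the rows with each dateOfBirth
// shifted by offsetYears, so tests always use dates relative to the source data.
function shiftDateOfBirth(rows, offsetYears) {
  return rows.map(row => {
    const dob = new Date(row.dateOfBirth);
    // use UTC accessors so the result is independent of the machine's timezone
    dob.setUTCFullYear(dob.getUTCFullYear() + offsetYears);
    return { ...row, dateOfBirth: dob.toISOString().slice(0, 10) };
  });
}

const rows = [{ username: "User1", dateOfBirth: "1990-06-15" }];
console.log(shiftDateOfBirth(rows, 10)[0].dateOfBirth); // "2000-06-15"
```

The original rows are left untouched, so the same fixture can be reused with different offsets.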

A full working example can be downloaded here:-

git clone

yarn install

To convert Excel files to JSON

make convertXLStoJSON or npm run convertXLStoJSON

  • File:- testData/convertXLStoJSON.ts
  • Input:- testData/testData.xlsx
  • Output:- cypress/fixtures/testData.json

To convert CSV to JSON

make convertCSVtoJSON or yarn run convertCSVtoJSON

  • File:- testData/convertCSVtoJSON.ts
  • Input:- testData/testData.csv
  • Output:- cypress/fixtures/testDataFromCSV.json

To see the test in action

  • export CYPRESS_SUT_URL=
  • npx cypress open --env configFile=development or make test-local-gui

Open the script login.spec.ts, which will generate a test for every entry in the CSV or XLS (default) file.

If you wish to read from the CSV instead, in the file cypress/integration/login.spec.ts

Change const testData = require("../fixtures/testData.json"); to

const testData = require("../fixtures/testDataFromCSV.json");

Configuring Cypress to work with iFrames & cross-origin sites.

Currently working Browsers & Modes

  •  Chrome Headed
    •  Cypress UI
    •  Cypress CLI

There are a few considerations when automating your web application with Cypress that you may come across, which may lead you to the Cypress Web Security docs, or to trawling through raised Cypress issues for potential workarounds/solutions.

Problems you may encounter

Cypress Docs – disabling web security

  • Display insecure content
  • Navigate to any superdomain without cross-origin errors
  • Access cross-origin iframes that are embedded in your application.

Simply set chromeWebSecurity to false in your cypress.json

{
  "chromeWebSecurity": false
}

If you set it in your base cypress.json, then you will apply this to all your sites, which may not be ideal: you may only want to cater for insecure content on your dev machine, but keep secure content when testing in prod.

See how to configure Cypress per env configuration files

However, we wanted to check a journey that integrates with a third party, and came across some cross-site issues

Uncaught DOMException: Blocked a frame with origin "https://your_site_here" from accessing a cross-origin frame.

So we set chromeWebSecurity to false, and then get this error

Refused to display 'https://your_site_here' in a frame because it set 'X-Frame-Options' to 'sameorigin'.

Looks like these guys had the same issue

Cypress Issue #1763

Cypress Issue #944

So hi-ho, it’s off to the docs we go

Chromium Site Isolation Docs


We want to disable the following features

  • --disable-features=CrossSiteDocumentBlockingAlways,CrossSiteDocumentBlockingIfIsolating
  • --disable-features=IsolateOrigins,site-per-process
    • IsolateOrigins – Requires dedicated processes for a set of origins, specified as a comma-separated list.
    • site-per-process – Enforces a one-site-per-process security policy: each renderer process, for its whole lifetime, is dedicated to rendering pages for just one site.
      * Thus, pages from different sites are never in the same process.
      * A renderer process’s access rights are restricted based on its site.
      * All cross-site navigations force process swaps. <iframe>s are rendered out-of-process whenever the src= is cross-site.

So let’s add the following to cypress/plugins/index.js

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      args.push('--disable-features=CrossSiteDocumentBlockingAlways,CrossSiteDocumentBlockingIfIsolating');
      args.push('--disable-features=IsolateOrigins,site-per-process');
    }
    return args;
  });
};
We now want to drop the following headers to allow all pages to be i-framed.

  • content-security-policy
  • x-frame-options

We can use the Ignore X-Frame headers Chrome extension and load it into our Cypress instance. You can download it from and place it in your cypress/extensions folder, or you can get the source code directly here, saving the files to cypress/extensions/ignore-x-frame-headers

add the following to cypress/plugins/index.js

const path = require('path');

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      const ignoreXFrameHeadersExtension = path.join(__dirname, '../extensions/ignore-x-frame-headers');
      args.push(`--load-extension=${ignoreXFrameHeadersExtension}`);
    }
    return args;
  });
};

We can also automate the download of the extension for CI systems.

npm i chrome-ext-downloader --save-dev or yarn add chrome-ext-downloader --dev

put the following in package.json

  "scripts": {
    "download-extension": "ced gleekbfjekiniecknbkamfmkohkpodhe extensions/ignore-x-frame-headers"
  },
  "devDependencies": {
    "chrome-ext-downloader": "^1.0.4"
  }

Our final cypress/plugins/index.js file incorporating both changes, will look like below

const path = require('path');

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      const ignoreXFrameHeadersExtension = path.join(__dirname, '../extensions/ignore-x-frame-headers');
      args.push(`--load-extension=${ignoreXFrameHeadersExtension}`);
      args.push('--disable-features=CrossSiteDocumentBlockingAlways,CrossSiteDocumentBlockingIfIsolating');
      args.push('--disable-features=IsolateOrigins,site-per-process');
    }
    return args;
  });
};

Note:- Since writing this article, the extension has been deleted from the Google extension store. Although the extension itself still exists, it can no longer be downloaded with chrome-ext-downloader

Source code can be found here :-

Extension can still be downloaded from

If there is enough demand, I will republish the source-code and publish to the chrome web store, with full credits to the original author.

Jest-Pact – A Jest-adaptor to help write Pact files with ease

In previous posts, I have spoken about Pact, a wonderful set of tools designed to help you and your team develop smarter, with consumer-driven contract tests.

We use Jest at work to test our TypeScript code, so it made sense to use Jest as our testing framework, to write our Pact unit tests with.

The Jest example in Pact-JS involves a lot of setup, which resulted in a fair bit of cognitive load before a developer could start writing their contract tests.

Inspired by a post by Tim Jones, one of the maintainers of Pact-JS and a member of the Dius team who built PACT, I decided to build and release an adapter for Jest, which would abstract the pact setup away from the developer, leaving them to concentrate on the tests.


  •  Instantiates the PactOptions for you
  •  Sets up Pact mock-service before and after hooks, so you don’t have to
  •  Assigns a random port and passes it back to the user, so tests can run in parallel without port clashes

Adapter Installation

npm i jest-pact --save-dev


yarn add jest-pact --dev


pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
  // regular pact tests go here
});


Say that your API layer looks something like this:

import axios from 'axios';

const defaultBaseUrl = ""

export const api = (baseUrl = defaultBaseUrl) => ({
  getHealth: () => axios.get(`${baseUrl}/health`)
    .then(response => response.data.status),
  /* other endpoints here */
});

Then your test might look like:

import { pactWith } from 'jest-pact';
import { Matchers } from '@pact-foundation/pact';
import api from 'yourCode';

pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
  let client;

  beforeEach(() => {
    client = api(provider.mockService.baseUrl);
  });

  describe('health endpoint', () => {
    // Here we set up the interaction that the Pact
    // mock provider will expect.
    // jest-pact takes care of validating and tearing
    // down the provider for you.
    beforeEach(() =>
      provider.addInteraction({
        state: "Server is healthy",
        uponReceiving: 'A request for API health',
        willRespondWith: {
          status: 200,
          body: {
            // illustrative matcher for the response body
            status: Matchers.like('up')
          }
        },
        withRequest: {
          method: 'GET',
          path: '/health'
        }
      })
    );

    // You also test that the API returns the correct
    // response to the data layer.
    // Although Pact will ensure that the provider
    // returned the expected object, you need to test that
    // your code receives the right object.
    // This is often the same as the object that was
    // in the network response, but (as illustrated
    // here) not always.
    it('returns server health', () =>
      client.getHealth().then(health => {
        expect(health).toEqual('up');
      }));
  });
});

You can make your tests easier to read by extracting your request and responses:

/* pact.fixtures.js */
import { Matchers } from '@pact-foundation/pact';

export const healthRequest = {
  uponReceiving: 'A request for API health',
  withRequest: {
    method: 'GET',
    path: '/health'
  }
};

export const healthyResponse = {
  status: 200,
  body: {
    // illustrative matcher for the response body
    status: Matchers.like('up')
  }
};

import { pactWith } from 'jest-pact';
import { healthRequest, healthyResponse } from "./pact.fixtures";

import api from 'yourCode';

pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
  let client;

  beforeEach(() => {
    client = api(provider.mockService.baseUrl);
  });

  describe('health endpoint', () => {

    beforeEach(() =>
      provider.addInteraction({
        state: "Server is healthy",
        ...healthRequest,
        willRespondWith: healthyResponse
      })
    );

    it('returns server health', () =>
      client.getHealth().then(health => {
        expect(health).toEqual('up');
      }));
  });
});


pactWith(PactOptions, provider => {
  // regular pact tests go here
});

interface PactOptions {
  provider: string;
  consumer: string;
  port?: number; // defaults to a random port if not provided
  pactfileWriteMode?: PactFileWriteMode;
  dir?: string; // defaults to pact/pacts if not provided
}

type LogLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";
type PactFileWriteMode = "overwrite" | "update" | "merge";


  • Log files are written to /pact/logs
  • Pact files are written to /pact/pacts

Jest Watch Mode

By default, Jest will watch all your files for changes, which means it will run in an infinite loop, as your pact tests generate JSON pact files and log files.

You can get round this by using the following watchPathIgnorePatterns: ["pact/logs/*","pact/pacts/*"] in your jest.config.js


module.exports = {
  testMatch: ["**/*.test.(ts|js)", "**/*.it.(ts|js)", "**/*.pacttest.(ts|js)"],
  watchPathIgnorePatterns: ["pact/logs/*", "pact/pacts/*"]
};

You can now run your tests with jest --watch; when you change a pact file or your source code, your pact tests will run.

Examples of usage of jest-pact

See Jest-Pact-Typescript which showcases a full consumer workflow written in Typescript with Jest, using this adaptor

  •  Example pact tests
    •  AWS v4 Signed API Gateway Provider
    •  Soap API provider
    •  File upload API provider
    •  JSON API provider

Examples Installation

  • clone repository
  • Run yarn install
  • Run yarn run pact-test

Generated pacts will be output in pact/pacts. Log files will be output in pact/logs.


Slack Reporting for Cypress

I’ve been using Cypress for front-end testing for the last year, which we have been executing in our CI pipeline with CircleCI. CircleCI offers Slack notifications for builds, but it doesn’t offer the ability to customise those notifications with build metadata. So I decided to write a Slack reporter that would do the following

  • Notify a channel when tests are complete
  • Display the test run status (Passed / Failed / Build Failure), plus number of tests
  • Display VCS metadata (Branch Name / Triggering Commit & Author)
  • Display VCS pull request metadata (number and link to PR)
  • Provide a link to CI build log
  • Provide a link to a test report generated with Mochawesome
  • Provide links to screenshots / videos of failing test runs

The source code is available here :-

It has been released as a downloadable package from NPM, read below for details on how to get it, and how to use it.

As this is an add-on for Cypress, we still need a few prerequisites

1. Download the npm package direct from the registry

npm install cypress-slack-reporter --save-dev


yarn add cypress-slack-reporter --dev

2. Create a Slack incoming webhook URL at Slack Apps

3. Set up an environment variable to hold your webhook created in the last step, and save it as SLACK_WEBHOOK_URL

$ export SLACK_WEBHOOK_URL=yourWebhookUrlHere

4. Add the following to your cypress.json file

{
  "reporter": "cypress-multi-reporters",
  "reporterOptions": {
    "configFile": "reporterOpts.json"
  }
}

5. Add the following in a newly created reporterOpts.json file

{
  "reporterEnabled": "mochawesome",
  "mochawesomeReporterOptions": {
    "reportDir": "cypress/reports/mocha",
    "quiet": true,
    "overwrite": false,
    "html": false,
    "json": true
  }
}

6. Run Cypress in run mode, which will generate a mochawesome test report per spec file.

7. We now need to combine the separate mochawesome files into a single file using mochawesome-merge

$ mkdir mochareports && npx mochawesome-merge --reportDir cypress/reports/mocha > mochareports/report-$$(date +'%Y%m%d-%H%M%S').json

8. We will now generate our test report with mochawesome, using our consolidated test report

$ npx marge mochareports/*.json -f report-$$(date +'%Y%m%d-%H%M%S') -o mochareports

9. We can now run our Slack Reporter, and set any non-default options

$ npx cypress-slack-reporter --help

  Usage: index.ts [options]

    -v, --version            output the version number
    --vcs-provider [type]    VCS Provider [github|bitbucket|none] (default: "github")
    --ci-provider [type]     CI Provider [circleci|none] (default: "circleci")
    --report-dir [type]      mochawesome json & html test report directory, relative to your package.json (default: "mochareports")
    --screenshot-dir [type]  cypress screenshot directory, relative to your package.json (default: "cypress/screenshots")
    --video-dir [type]       cypress video directory, relative to your package.json (default: "cypress/videos")
    --verbose                show log output
    -h, --help               output usage information

Our generated slack reports will look like below.


Currently we support CircleCI for CI, and GitHub/BitBucket VCSs.

For other providers, please raise a GitHub issue, or pass the --ci-provider none flag to produce a simple Slack message based on the mochawesome report status.

It is possible to run the slack-reporter programmatically via a script

// tslint:disable-next-line: no-reference
/// <reference path='./node_modules/cypress/types/cypress-npm-api.d.ts'/>
import * as CypressNpmApi from "cypress";
import { slackRunner } from "cypress-slack-reporter/bin/slack/slack-alert";
// tslint:disable: no-var-requires
const marge = require("mochawesome-report-generator");
const { merge } = require("mochawesome-merge");
// tslint:enable: no-var-requires

CypressNpmApi.run({
  reporter: "cypress-multi-reporters",
  reporterOptions: {
    reporterEnabled: "mocha-junit-reporters, mochawesome",
    mochaJunitReportersReporterOptions: {
      mochaFile: "cypress/reports/junit/test_results[hash].xml",
      toConsole: false
    },
    mochawesomeReporterOptions: {
      reportDir: "cypress/reports/mocha",
      quiet: true,
      overwrite: true,
      html: false,
      json: true
    }
  }
})
  .then(async results => {
    const generatedReport = await Promise.resolve(
      generateReport({
        reportDir: "cypress/reports/mocha",
        inline: true,
        saveJson: true
      })
    );
    // tslint:disable-next-line: no-console
    console.log("Merged report available here:-", generatedReport);
    return generatedReport;
  })
  .then(generatedReport => {
    const base = process.env.PWD || ".";
    const program: any = {
      ciProvider: "circleci",
      videoDir: `${base}/cypress/videos`,
      vcsProvider: "github",
      screenshotDir: `${base}/cypress/screenshots`,
      verbose: true,
      reportDir: `${base}/cypress/reports/mocha`
    };
    const ciProvider: string = program.ciProvider;
    const vcsProvider: string = program.vcsProvider;
    const reportDirectory: string = program.reportDir;
    const videoDirectory: string = program.videoDir;
    const screenshotDirectory: string = program.screenshotDir;
    const verbose: boolean = program.verbose;
    // tslint:disable-next-line: no-console
    console.log("Constructing Slack message with the following options", {
      ciProvider,
      vcsProvider,
      reportDirectory,
      videoDirectory,
      screenshotDirectory,
      verbose
    });
    // NOTE: the argument order below is indicative; check the
    // cypress-slack-reporter README for the exact slackRunner signature
    const slack = slackRunner(
      ciProvider,
      vcsProvider,
      reportDirectory,
      videoDirectory,
      screenshotDirectory,
      verbose
    );
    // tslint:disable-next-line: no-console
    console.log("Finished slack upload");
  })
  .catch((err: any) => {
    // tslint:disable-next-line: no-console
    console.log(err);
  });

function generateReport(options: any) {
  return merge(options).then((report: any) => marge.create(report, options));
}

I have been extending the reporter to allow uploading the mochawesome report and Cypress artefacts (screenshots & videos) to an S3 bucket, and using the returned bucket links in the Slack report. It is currently working in a PR, but needs adding to the CLI before it can be merged to the master branch.

The new Chromium-based Microsoft Edge for Mac has been leaked — And it works with Cypress, and now you can test it too!

Following on from my previous blog post here about getting Cypress working with Microsoft Edge, I have released versions that you can test out yourself 🙂

An example repository here:-

  1. Download Microsoft Edge for Mac (Canary Build) for MacOS here
  2. Make a new directory
  3. Run npm init
  4. Run npm install @you54f/cypress --save
  5. Run node_modules/.bin/cypress open --browser edge to open in interactive mode, and set up Cypress's example project
  6. Run node_modules/.bin/cypress run --browser edge to run in command-line mode.
  7. Rejoice & please pass back some appreciation with a star on the repository! Thanks 🙂

The new Chromium-based Microsoft Edge for Mac has been leaked — And it works with Cypress.

So I’ve been using Cypress for a while now to test our apps. It’s an incredible testing tool, with many features developers will feel at home with, and it provides incredibly fast and detailed feedback that remote-browser tools cannot compete with.

However, there has been a bone of contention for some: the lack of cross-browser compatibility. For now, it will only work with Chrome and Electron.

Yep, no IE10/11, Firefox, Safari, Opera etc.

Best not delete your favourite Selenium based tool just yet.

However there is some light on the horizon, and from the likes of Microsoft no less.

Rumours floated around late last year that Microsoft was ditching efforts on its budding IE11 replacement, Edge, in favour of, well, Edge. Just based on Chromium this time. You can get it for Windows 10 here from Windows Insiders.

If you visit the above page on MacOS, you’ll see a button asking you to be notified; however, Twitter user WalkingCat posted up links from Microsoft’s CDN.

Microsoft Edge for Mac (Canary Build)

Microsoft Edge for Mac (Dev Build)

So I thought I would spin up Cypress and see if I could get it to work with Edge, but it choked on the folder name.

Hmmm, let’s rename the app so it doesn’t have spaces in it.

So we need to tell Cypress about Edge

It’s listed now, good start

Let’s fire up the Cypress runner in GUI mode


Let’s run all the integration tests.

As if they all passed first time. How about the CLI?

Sweet! Not bad for a first run! Now we just need to wait for Microsoft to release Chromium Edge to the masses. Hopefully a Linux flavour will be on the horizon; I will keep you posted if so!

Follow the PR to track Cypress & Microsoft Edge –

That’s all folks, thanks for reading, and feel free to follow me @ for more of my fumblings in code.

Update :- I’ve now followed up this with another blog post where I have published a beta version of Cypress with Edge support for testing purposes. See here for the blog post with a link to an example GitHub repo and installation instructions!

Securing the Pact Broker with nginx and LetsEncrypt

Dockerised Pact Broker – Secure Implementation

Background & Aim

The cool guys and girls over at Dius offer a dockerised implementation of the Pact-Broker for free! I know, amazing, right? You can get it right now here

However, out of the box, the Docker solution is not secure. There is an example SSL configuration provided by the Pact team, utilising nginx as a reverse proxy, to allow access solely via HTTPS.

I have extended this implementation to ensure we are following current industry standards for a secure nginx setup.

Additionally, we will go through the process of generating your own certificates and getting them signed by a Certificate Authority, to give confidence to your stakeholders and site visitors.

We will only be using open-source tooling because open-source ftw <3.

If you haven’t already read my post about using Pact & Swagger to complement your development workflow, you can check it out here.


Additional Notes

  • This example will use a dockerised postgres instance, as described in the main pact_broker-docker readme, just so you can run the example end-to-end.
  • If you are able to use your cloud provider to sign your certificates, then you may not need to use lets-encrypt. In my example, I am using a self-managed AWS EC2 instance, which is unable to utilise AWS Certificate Manager, as you are unable to download the generated certificates. If you are using Fargate, this is not an issue.

Initial Setup

  1. Install Docker on your instance
  2. Copy the contents of ssl_letsencrypt to your instance and rename to pact-broker
  3. Replace the following occurrences found in the *.sh & docker-compose.yml files in pact-broker & pact-broker/lets-encrypt
    • domain_name – Replace with your registered domain name
    • email_address – Replace with your email address. It should match the registered domain
    • username – Replace with the name of your user (it is assumed your folder will live in /home/username/pact-broker but you can change to suit)
  4. Rename .env.example to .env.

Get to know your environment file

The .env file contains the credentials we will pass into the docker-compose file, and ultimately to the pact-broker. More options can be added as per the documentation, but they will also require adding to your docker-compose.yml file.

The database variables are set up to talk to the postgres database loaded via docker-compose.
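An illustrative .env might look like the following. The variable names follow the pact_broker-docker documentation; treat the exact keys as an assumption and check the project README for the full list.

```
PACT_BROKER_DATABASE_USERNAME=postgres
PACT_BROKER_DATABASE_PASSWORD=ChangeMe
PACT_BROKER_DATABASE_HOST=postgres
PACT_BROKER_DATABASE_NAME=postgres
PACT_BROKER_BASIC_AUTH_USERNAME=admin
PACT_BROKER_BASIC_AUTH_PASSWORD=ChangeMe
```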


NOTE: Please do not commit your .env file to source control. If you do, consider your credentials compromised and update them straight away

Generate your signed SSL certificate with Let's Encrypt

Let's Encrypt is an open-source project which allows you to create SSL certificates and sign them against the Let's Encrypt Certificate Authority, in a bid to help make the web safer.

  1. Change into the lets-encrypt folder
  2. Run docker-compose up -d. This will load up a single page application that lets-encrypt can read from, in order to verify that the domain is owned by you.
  3. Run ./ – This will generate sample certificates for you, in lets-encrypt/out
  4. Run ./ – This will provide information about the generated certificates for you.
  5. If all the above steps ran ok, we can safely remove the out dir in lets-encrypt/out to remove our staged certificates.
  6. Run ./ – This will generate your signed certificates for you, in lets-encrypt/out
  7. Run ./ – This will provide information about the generated certificates for you.

Certificates will be output to pact-broker/letsencrypt/out/etc/letsencrypt/live//

The folder is actually sym-linked, and the actual certificates live in the archive folder.

Each generated certificate will last for three months, a further section will discuss renewals.

Generate your Diffie-Hellman param certificate

  1. Change into the lets-encrypt folder
  2. Run ./ This will take a while (5-10 minutes) so go make a brew.

Check your nginx configuration

There is a lot going on in the nginx configuration. I will touch on why each component is there, and you can elect to remove as you wish.

In this section, we are going to add headers to every response, to help mitigate cross-site scripting and clickjacking attacks

add_header X-XSS-Protection "1; mode=block";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

Remove the nginx version number from responses to avoid leaking implementation details.

server_tokens off;

In the first server block which is for HTTP requests, we do the following

  • Listen to all requests on port 80. Our server name is the name of the pact broker docker image, as defined in the docker-compose.yml
listen 80 default_server;
server_name broker;
  • Only allow GET & HEAD methods if accessed via port 80. Add in any request methods you wish to allow. I prefer to whitelist, rather than blacklist.
if ( $request_method !~ ^(GET|HEAD)$ ) {
return 405;

Redirect all HTTP requests to HTTPS. We drop any request parameters that were provided, to avoid parameter injection in our redirect to HTTPS.

return 301 https://$host;

The second server block is for our HTTPS requests.

  • Listen on port 443 and enable ssl
listen 443 ssl;
server_name broker;
  • Our certificates are loaded in to the docker-container via the docker-compose.yml volumes section, on the following paths.
ssl_certificate "/etc/nginx/ssl/certs/fullchain.pem";
ssl_certificate_key "/etc/nginx/ssl/certs/privkey.pem";
ssl_dhparam "/etc/nginx/ssl/dhparam/dhparams.pem";
  • Enable SSL protocols. TLSv1 is insecure and shouldn’t be used, and TLSv1.1 is weak; for compliance reasons, neither should be used.
ssl_protocols TLSv1.2 TLSv1.3;
  • Only enable known strong SSL ciphers. It is a balancing act between using strong ciphers and compatibility: a site scoring 100% on a cipher test would not be compatible with all devices. The current set scores 95% on the SSL Labs security test.
  • Let’s also tell nginx to use this list
ssl_prefer_server_ciphers on;
  • ssl_ecdh_curve provides a nice default for nginx, as not all OpenSSL implementations choose curves well
  • Session tickets don’t provide forward secrecy, so switch them off
  • Limit the SSL buffer size (the default is 16k)
  • Maintain SSL sessions for 10 minutes
  • Switch off gzip compression, as it can be vulnerable. Enable it if needed.
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_buffer_size 4k;
ssl_session_cache shared:SSL:10m;
gzip off;

Add Strict Transport Security headers

add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
  • I am only enabling the following methods on HTTPS requests.
if ( $request_method !~ ^(POST|PUT|PATCH|GET|HEAD|DELETE)$ ) {
return 405;
  • Whilst implementing webhooks, I noted that URL-based tokens are visible to both read-write and read-only users of the pact-broker, so we are blocking access to the /webhooks URL. This will also block /webhooks/**
  • This shows how you can provide granular control of traffic in nginx; you could, for example, allow only POSTs with an if statement.
error_page 418 = @blockAccess;

location /webhooks {
  return 418;
}

location @blockAccess {
  deny all;
}

The following block is used to proxy all requests received through nginx to the pact broker.

  • proxy_set_header entries are used to ensure the redirect URLs are correct in the HAL browser, and additionally enforce our secure headers.
  • proxy_hide_header entries avoid leaking details of our pact_broker & passenger versions.
  • proxy_pass sends requests received on nginx through to the broker.
location / {

  # Setting headers for redirects
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-Scheme "https";
  proxy_set_header X-Forwarded-Port "443";
  proxy_set_header X-Forwarded-Ssl "on";
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
  proxy_set_header X-XSS-Protection "1; mode=block";
  proxy_set_header X-Frame-Options DENY;
  proxy_set_header X-Content-Type-Options nosniff;

  # Hide return headers to avoid leaking implementation details
  proxy_hide_header X-Powered-By;
  proxy_hide_header X-Pact-Broker-Version;

  # Perform the proxy pass to our site
  proxy_pass http://broker:80;
}

Get to know your docker-compose file

Each docker container is connected to a specified network

networks:
  - docker-network

Standard postgres configuration.

postgres:
  image: postgres
  healthcheck:
    test: psql postgres --command "select 1" -U postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_DB: postgres
  networks:
    - docker-network

The pact broker configuration, with basic auth enabled.

  • Variables stored in the .env file are read by docker-compose on starting the containers
  • They are read into the docker-compose file with variables prefixed with $
  • You can add additional supported pact parameters, either directly in here, or in your env file.

broker_app:
  container_name: 'pact-broker'
  image: dius/pact-broker:latest
  links:
    - postgres
  networks:
    - docker-network

The configuration for nginx.

  • We link the Pact Broker container, called broker_app, but reference it as broker, which is used as our server name in the nginx configuration.
  • The first volume link loads in our nginx.conf file
  • The next three volumes point at the out directory of lets-encrypt.
  • The last volume loads in the example site we used for certification; it will be used for renewing our certificates, which we will touch on after running our example.

nginx:
  container_name: 'pact-nginx'
  image: nginx:alpine
  links:
    - broker_app:broker
  volumes:
    - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    - ./letsencrypt/out/etc/letsencrypt/live//fullchain.pem:/etc/nginx/ssl/certs/fullchain.pem
    - ./letsencrypt/out/etc/letsencrypt/live//privkey.pem:/etc/nginx/ssl/certs/privkey.pem
    - ./letsencrypt/out/etc/letsencrypt/live//chain.pem:/etc/nginx/ssl/certs/chain.pem
    - ./letsencrypt/dhparam/dhparams.pem:/etc/nginx/ssl/dhparam/dhparams.pem
    - ./letsencrypt/out/renewal:/data/letsencrypt
  ports:
    - "80:80"
    - "8443:443"
  networks:
    - docker-network
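Zooming out, the overall shape of the docker-compose.yml is roughly the following (the service keys and the network driver are assumptions; the real file carries the full settings described above):

```yaml
# Rough skeleton only - not a complete, runnable compose file
version: '2'
services:
  postgres:
    image: postgres          # the database
  broker_app:
    image: dius/pact-broker:latest  # reads credentials from .env
  nginx:
    image: nginx:alpine      # TLS termination, proxies to broker_app
networks:
  docker-network:
    driver: bridge
```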

Running our example

If you have not already generated your certificates, please do so now

  1. Change into the lets-encrypt folder
  2. Run docker-compose up -d. This will load up a single page application that lets-encrypt can read from, in order to verify that the domain is owned by you.
  3. Run ./ – This will generate sample certificates for you, in lets-encrypt/out
  4. Run ./ – This will provide information about the generated certificates for you.
  5. If all the above steps ran ok, we can safely remove the out dir in lets-encrypt/out to remove our staged certificates.
  6. Run ./ – This will generate your signed certificates for you, in lets-encrypt/out
  7. Run ./ – This will provide information about the generated certificates for you.
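If you want to eyeball a certificate yourself, openssl can do the same job as the info script. Here it is run against a throwaway self-signed certificate so the commands work anywhere; in practice you would point it at the files under lets-encrypt/out:

```shell
# Generate a disposable self-signed certificate purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 90 -subj "/CN=example.com"

# Print the subject and validity dates, as the info script would
openssl x509 -in cert.pem -noout -subject -dates
```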

We can now run our secure broker

  1. Modify the docker-compose.yml file as required.
  2. Run docker-compose up to get a running Pact Broker and a clean Postgres database

Testing your setup

curl -v http://localhost
# This will redirect to https

curl -v http://localhost/matrix
# This will redirect to https root, not matrix

curl -v https://localhost/matrix
# This will redirect to https matrix page
# Note we don't provide the flag -k (insecure) as the website is certified

curl -v http://localhost/webhooks
curl -v https://localhost/webhooks
# This will return a 418 error

Renewing your certificates

We generated certificates with LetsEncrypt, however they will expire after 3 months. We have aimed to minimise disruption by incorporating the renewal process into our configuration, so we will just need to run a script to generate them and bounce our app.

  1. Ensure you are in the root folder, in our example the pact-broker folder
  2. Run ./ – This will do a dry run of the renewal process, or inform you that you don’t need to renew yet.
  3. Run ./ – This will run the renewal process, generate your new certificates, and restart your docker instance

Certificates will be output to pact-broker/letsencrypt/out/etc/letsencrypt/live//

Note that the folder is the same as for our old certificates, so no change to our docker-compose file is needed. This is because this location is actually a symlink; the actual certificates live in the archive folder.
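The symlink behaviour can be illustrated with a toy example (the file names below are made up; certbot's real layout uses live/ and archive/ directories under /etc/letsencrypt):

```shell
# Create an "archive" file and a symlink pointing at it, as certbot does
mkdir -p demo/archive demo/live
touch demo/archive/fullchain1.pem
ln -sf ../archive/fullchain1.pem demo/live/fullchain.pem

# The path we mount stays the same; the target behind it changes on renewal
readlink demo/live/fullchain.pem
# prints: ../archive/fullchain1.pem
```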

Replace the dockerised postgres DB with a proper instance

You will need to make some minor changes to utilise a non-dockerised Postgres instance.

Update the following environment variables in your .env file


and comment out, or remove the following lines from your docker-compose.yml

# postgres:
#   image: postgres
#   healthcheck:
#     test: psql postgres --command "select 1" -U postgres
#   ports:
#     - "5432:5432"
#   environment:
#     POSTGRES_USER: postgres
#     POSTGRES_DB: postgres

  image: dius/pact-broker
  # depends_on:
  #   - postgres

General Pact Broker configuration and usage

Documentation for the Pact Broker application itself can be found in the Pact Broker Wiki


See the Troubleshooting page on the wiki.

“Just because you’re paranoid doesn’t mean they aren’t after you.”

Some simple command-line tricks borrowed from the land of DevOps to help you analyse logs and gain useful insight.

I’m under attack! Or rather, my EC2 instance is (a virtual machine running Ubuntu, hosted in AWS).

It’s not a mega worry for me, as it is just a sandbox for testing/development of home projects. However it is the perfect opportunity for me to demonstrate some techniques you can use to extract useful information from logs.

My virtual machine has been running for around three months with port 22, which is used for SSH, left publicly accessible on the internet. The logs live in /var/log.

We are going to use ls to list the contents of the directory & cat to output our file contents directly to the terminal.

$ ls /var/log/secure*

$ cat /var/log/secure*

Mar 10 03:40:27 ip-***-**-**-** sshd[29874]: Invalid user test from port 56702

Oh crap, I thought. Naughty, naughty h4x0rs.

I knew there were a lot, but not how many, so let’s count how many failed login attempts we’ve had.

cat /var/log/secure* outputs the content of each of the 5 files directly to the terminal (the * is a wildcard and will pattern-match any file called secure with any suffix)

| this is called a pipe; it passes the output of the command on its left as input to the command on its right.

grep -e 'Invalid user' This is a pattern matcher; it will look for every occurrence of the words Invalid user and output the entire log line, so we can trim our search to only failed login attempts

wc -l wc counts words, lines and bytes in a document; the -l flag restricts it to counting lines (a single line for each failed login attempt)
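The whole pipeline can be tried end-to-end on a tiny sample log; the lines below are made up to mimic the real format:

```shell
# Fake a few auth-log lines so the pipeline can be run anywhere
printf '%s\n' \
  'Mar 10 03:39:35 host sshd[29842]: Invalid user test from 1.2.3.4 port 59039' \
  'Mar 10 03:39:36 host sshd[29844]: Invalid user admin from 1.2.3.4 port 41233' \
  'Mar 10 03:39:40 host sshd[29850]: Accepted publickey for deploy from 5.6.7.8 port 22' \
  > sample-secure.log

# Only the failed attempts survive the grep; wc -l counts them
cat sample-secure.log | grep -e 'Invalid user' | wc -l
```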

$ cat /var/log/secure* 
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: reverse mapping checking getaddrinfo for [] failed - POSSIBLE BREAK-IN ATTEMPT!
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Invalid user test from port 56702
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: input_userauth_request: invalid user test [preauth]
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Received disconnect from port 56702:11: Bye Bye [preauth]
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Disconnected from port 56702 [preauth]

$ cat /var/log/secure* | grep -e 'Invalid user'
Mar 10 03:39:35 ip-***-**--**-** sshd[29842]: Invalid user test from port 59039
Mar 10 03:39:36 ip-***-**--**-** sshd[29844]: Invalid user admin from port 41233
Mar 10 03:39:42 ip-***-**--**-** sshd[29850]: Invalid user ism from port 37902
Mar 10 03:39:43 ip-***-**--**-** sshd[29854]: Invalid user admin from port 39222
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Invalid user test from port 56702

$ cat /var/log/secure* | grep -e 'Invalid user' | wc -l

Wow, 16,288 attempts.

We can use a combination of three commands to extract a piece of information from each log line and de-duplicate it, so we can find out how many different usernames, IP addresses & ports they tried.

awk '{print $8}' This will print only the 8th word in each line, which is the username in our case. The example below shows how the string is split.

Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Invalid user test from port 56702

$1 = Mar
$2 = 10
$3 = 03:40:27
$4 = ip-***-**--**-**
$5 = sshd[29874]:
$6 = Invalid
$7 = user
$8 = test
$9 = from
$10 =
$11 = port
$12 = 56702
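A quick way to check the field numbers is to run awk over a single sample line (the hostname and IP below are made up, standing in for the redacted ones):

```shell
line='Mar 10 03:40:27 host sshd[29874]: Invalid user test from 1.2.3.4 port 56702'
echo "$line" | awk '{print $8}'    # username  -> test
echo "$line" | awk '{print $10}'   # source IP -> 1.2.3.4
echo "$line" | awk '{print $12}'   # port      -> 56702
```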

sort This will arrange our list in alphabetical order.

uniq This will remove duplicates, but it requires a pre-sorted list, which is why we run the output through sort first.

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' 

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' | sort

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' | sort | uniq


After we have our sorted list, we can use wc -l again to count the lines. There were 2,816 distinct usernames and 5,252 different IPs trying 11,300 different ports.

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' | sort | uniq | wc -l

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $10}' | sort | uniq | wc -l

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $12}' | sort | uniq | wc -l


Let’s get some insight into the data. We can find out which were the most common usernames, how many times a particular IP address hit us, and which ports were most commonly hit.

uniq -c The -c flag will count the number of occurrences

sort -nr The -n flag sorts numerically and -r reverses it, giving the highest counts first

less This will allow you to read the large output in your own time, rather than watching the matrix flash in front of your eyes.

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' | sort | uniq -c | sort -nr

894 admin
681 test
261 postgres
244 oracle
238 user
215 ubuntu
196 nagios
186 guest
182 ftpuser
131 deploy
116 pi
115 git
109 ubnt
104 teamspeak
103 support
100 mysql
95 minecraft
92 tomcat
$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $10}' | sort | uniq -c | sort -nr

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $12}' | sort | uniq -c | sort -nr

9 1920
7 54170
6 58932
6 56054
6 55524
6 53154
6 49414
6 48000
6 47278
6 44800
6 39266
6 37284
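The counting trick is easy to sanity-check on a hand-made list:

```shell
# Six entries, three distinct names: count occurrences, most frequent first
printf '%s\n' admin test admin root admin test \
  | sort | uniq -c | sort -nr
#   3 admin
#   2 test
#   1 root
```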

Tinkering with the touchbar.

When I started a new job a few months ago, I was given a MacBook Pro 15″ with the infamous touchbar.

I’ve eventually stopped lamenting the loss of a physical escape key, and have employed some tips & tricks to make it a bit more useful.

No touchbar? touché my friend

If you don’t have a touchbar, but do have a Mac, you can use touché to emulate one on your screen and still use these tricks (bar Touch ID)

sudo at your fingertips

$ sudo sublime /etc/pam.d/sudo

Add the following line

auth sufficient pam_tid.so

And you should be left with something like this

 # sudo: auth account password session
auth       sufficient     pam_tid.so
auth       sufficient     pam_smartcard.so
auth       required       pam_opendirectory.so
account    required       pam_permit.so
password   required       pam_deny.so
session    required       pam_permit.so

Exit and then open your terminal again, attempt to sudo, and voilà: sudo at your fingertip.

$ sudo touch


I’ve always liked knowing the temperatures of my CPU/GPU/RAM and my fan speeds, a habit stemming from overclocking and water-cooling my PCs, but mainly I wanted to quieten my fans without melting the work laptop.

It is ridiculously loud when it spins up a set of Docker containers or a VM and it just doesn’t need to be, so I use MacsFanControl to control the fan speeds, and iStats to keep an eye on some stats.

$ sudo gem install iStats
$ istats all
Total fans in system:   2
CPU temp: 43.13°C ▁▂▃▅▆▇
Battery health: Good
Fan 0 speed: 3461 RPM ▁▂▃▅▆▇
Fan 1 speed: 3502 RPM ▁▂▃▅▆▇
Cycle count: 66 ▁▂▃▅▆▇ 6.6%
Max cycles: 1000
Current charge: 1927 mAh ▁▂▃▅▆▇ 28%
Maximum charge: 7025 mAh ▁▂▃▅▆▇ 95.8%
Design capacity: 7336 mAh
Battery temp: 30.8°C

Sweet, now let’s see if we can get them on the touchbar.

Apple lets us modify the touchbar to a degree, but not enough to add custom icons and scripts.

We can use BetterTouchTool, but it’s not free, and I am loving open-source software, so I managed to find My Touchbar, My Rules. You can download it with Homebrew.

$ brew cask install mtmr

Once installed you can find it in your Applications folder, run it and your touchbar will run the default config.

You can also do a 3 finger swipe to adjust brightness or a 2 finger swipe to adjust volume.

Let’s have a look at the config

$ sublime ~/Library/Application\ Support/MTMR/items.json

It is a JSON config file, defining each button. You can customise it with a list of predefined button types listed on their homepage, but you can also write AppleScript or your own scripts and associate them with buttons.

My config is available here as a GitHub Gist and can be seen below

It was inspired by the following plugin for BetterTouchTool

iTerm2 Touchbar integration

If you don’t have iTerm, download it with Homebrew.

$ brew cask install iterm2

You can view the iTerm2 docs for the touchbar here.


With ZSH and a nifty plugin called zsh-iterm-touchbar, we can get our git info and run our npm run scripts in project folders.

If you aren’t already using zsh, then install it with Homebrew

brew install zsh

Install OhMyZSH

sh -c "$(curl -fsSL"

Clone the repo in your ZSH directory

$ cd ${ZSH_CUSTOM:-$ZSH/custom}/plugins
$ git clone

Then add the plugin into your .zshrc

$ sudo sublime ~/.zshrc
plugins=(... zsh-iterm-touchbar)

Restart your terminal and you should see

In a git enabled folder

Showing run scripts from a package.json

List of options

  • F1 – Current directory 👉
  • F2 – Current git branch, press to display all branches and switch between them 🎋
  • F3 – Current git repo status 🔥 / 🙌
    • + — uncommitted changes in the index;
    • ! — unstaged changes;
    • ? — untracked changes;
    • $ — stashed changes;
    •  — unpulled commits;
    •  — unpushed commits.
  • F4 – Push to origin branch (git push origin [branch]) ✉️
  • F5 – Display npm-run or yarn-run scripts from package.json

More touch-bar resources

An open-source list tracking touchbar projects

Whilst we’re at it

I am using a few zsh plugins which make my life so much easier.


A full list of the out-of-the-box supported plugins

You can find more to install