Pact facts – An interactive history lesson

Did you know Pact is nearly 10 years old?

As the de-facto leader in contract testing, the Pact ecosystem has grown to be vast. Just take a look below:

[image: the Pact ecosystem]

However today, I am going to take you on a little journey of how it came to be, and show you what is to come.

TL;DR

> A lot happens in 10 years. We’ve seen it all here at Pact: the proliferation of microservices; an ever-increasing set of protocols such as Protobuf and GraphQL; transports such as gRPC, WebSockets and MQTT; event-driven architectures and data pipelines; and emerging standards such as OpenAPI, AsyncAPI and CloudEvents.

> As we launch our Pact Plugin Framework, bringing new possibilities to the Pact ecosystem, I’d like to invite you to try an interactive history lesson of Pact, from past to present and beyond!

> Pact and the Pact Plugin Framework unlock the possibility of testing multiple transport and content types. You will see Pact used for gRPC, Protobuf and CSV-based messages. I hope it feeds your imagination of the possibilities; it certainly has for me!


If it piques your interest, you should sign up for our upcoming webinar to hear more about our exciting news and what it means for you and the software development community.

The birth of Pact Ruby

Pact was originally written by a development team at realestate.com.au in 2013, born out of trying to solve the problem of how to do integration testing for their new Ruby microservices architecture. They threw together a codebase that was to become the first Pact implementation.

git add . && git commit -m 'Gem skeleton' && git push by James Fraser


Ron Holshausen’s (then at DiUS; still one of the present-day maintainers of Pact and a co-founder of Pactflow) first commit came shortly after.


A few months later, Beth Skurrie (then at DiUS; also still one of the present-day maintainers of Pact and a co-founder of Pactflow) joined one of the teams that was working with the Pact authors’ team.

She had recently seen a talk by J.B. Rainsberger entitled "Integration tests are a scam", which promoted the concept of "collaboration" and "contract" tests, so she was immediately interested when she was introduced to Pact.

J.B. has since softened his message, as have we. I think we all mellow as we get older 🙂

Beth’s first commit in Pact Ruby


After trying (as most people do) to convince the authors that the provider should be the contract writer, she was soon convinced by Brent Snook, one of the original authors of Pact, of the value of consumer driven contracts. At this stage, she realised that what was missing in the current implementation was the ability to run the same request under different conditions, and "provider states" were born.
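To make that concrete, here is an illustrative sketch (not a real pact file; field names follow the v2 format, and v3 later replaced providerState with a providerStates array) of two interactions that replay the same request under different provider states, so the provider can set up the right data before verifying each one:

{
  "interactions": [
    {
      "description": "a request for user 42",
      "providerState": "user 42 exists",
      "request": { "method": "GET", "path": "/users/42" },
      "response": { "status": 200 }
    },
    {
      "description": "a request for user 42",
      "providerState": "user 42 does not exist",
      "request": { "method": "GET", "path": "/users/42" },
      "response": { "status": 404 }
    }
  ]
}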

Viva la Pact Broker


What the heck is a Pact Broker anyway, Saf?

The Pact Broker, like Pact itself, was written to solve our own problem: coordinating pact versions between projects.

It is an application that allows you to release customer value quickly and confidently by deploying your services independently, avoiding the bottleneck of integration tests, by introducing the Pact Matrix.

It looks a little like this

[image: the Pact Matrix]

By testing the Pact Matrix, you can be confident to deploy any service at any time, because your standalone CI tests have told you whether or not you are backwards compatible – no “certification environment” needed. And when there are multiple services in a context, this approach scales linearly, not exponentially like the certification environment approach.
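In day-to-day use, you ask the Broker that question from your pipeline. A rough sketch with the pact-broker CLI (the pacticipant name, version and URL are illustrative, and this assumes you tag your production deployments with prod):

pact-broker publish ./pacts --consumer-app-version $GIT_SHA --broker-base-url https://broker.example.com
pact-broker can-i-deploy --pacticipant my-consumer --version $GIT_SHA --to prod --broker-base-url https://broker.example.com

The second command interrogates the Matrix and fails the build if the version you are about to release has not been verified against what is already in production.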

Preaching the message

Soon after, Beth decided that the Pact idea was the best thing since sliced bread, and she hasn’t stopped yakking on about it since. Hear Beth, Jon Eaves from REA and Evan Bottcher from ThoughtWorks speak at YOW! 2014 in this YouTube video.

Want a bit more of Beth? We told you she couldn’t stop yakking.

Ron began spreading the message as well; you can read a blog post from 2014 here.

The birth of Pact JVM

Pact spread around the codebases in the wider program of work at realestate.com.au, until it hit its first Java microservice. realestate.com.au had many JVM projects, so a group of DiUS consultants (including Ron Holshausen again) started the pact-jvm project on a hack day.


Ron raised his first issue, https://github.com/pact-foundation/pact-jvm/issues/31, which led to his first PR


You can watch a talk from Ron about Pact and Pact JVM here.

Like all grown-up frameworks, processes are needed, and so the Pact Specification was born.

Beth penned the first Pact test cases, which came to be Pact Specification v1.0.0


It was at this stage that the authors realised that the Rubyisms in the format would have to be replaced by a language-agnostic format, and the idea of the v2 Pact Specification arose on Mar 27, 2014, though it would take just over a year before it became reality.


Soon it became obvious that JavaScript UIs would benefit greatly from using Pact with their backend APIs.

After tossing around the idea of implementing Pact yet again in another language, a decision was made to wrap the Ruby implementation (which was packaged as a standalone executable) to avoid the maintenance burden and the potential for implementation mismatches. This became the pattern used for most of the following Pact implementations: each language implemented a Pact DSL and a mock service/verifier client, and called out to the Ruby mock service process/verifier in the background. The original Ruby JSON syntax was often used between the native clients and the mock service, as it was simpler to implement; the mock service took care of writing the actual pact in the v2 format.

The birth of Pact JS

Three versions of Pact-JS have existed. Fuying created the first commit of DiUS/pact-consumer-js-dsl, and a familiar face, Beth, popped along for her first commit.

A few days apart, DiUS/pactjs0 was created, with the first commit by Jeff Cann. Ron dropped his first commit, ultimately deprecating the library a little while later.

Enter Matt Fellows, dropping his first commit. A man of many talents, Matt is still one of the present-day maintainers of Pact, as well as a co-founder of Pactflow.

It’s funny: JavaScript libraries are akin to buses, you wait for ages and then three turn up at once 🤯.

Enter the still-current library, Pact-JS, with its first commit by Tarcio Saraiva.

A few months later, Pact-JS became the sole library going forward.

This multi-language capability gave us the ability to start building cross-platform contract-testing suites, as demonstrated below with JSON/HTTP interactions in laser focus.

[image: cross-platform contract testing with JSON/HTTP]

You can try out HTTP-based Pact in our interactive tutorial here, in either Java or JavaScript.

Pact proliferates – Lead by example

> Since the implementation of the v2 format, newer features have been added, and the v3 and v4 formats add support for handling multiple provider states, messaging, and ‘generators’.

One of the strengths of Pact is its specification, allowing anybody to create a new language binding in an interoperable way. Whilst this has been great at unifying compatibility, the sprawl of languages makes it hard to add significant new features/behaviour into the framework quickly (e.g. GraphQL or Protobuf support).
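For a flavour of what the specification pins down, here is a cut-down, hand-written sketch of a v2-style pact document. The matchingRules entry is what frees verification from exact-value matching; this one only requires that status has the same type as the example value:

{
  "consumer": { "name": "MyConsumer" },
  "provider": { "name": "MyProvider" },
  "interactions": [
    {
      "description": "a request for API health",
      "request": { "method": "GET", "path": "/health" },
      "response": {
        "status": 200,
        "body": { "status": "up" },
        "matchingRules": {
          "$.body.status": { "match": "type" }
        }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}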

Wrapping the Ruby implementation allowed new languages to implement Pact quickly, however, it had its downsides. The standalone package worked by bundling the entire Ruby runtime with the codebase using Travelling Ruby, so it was large (~9MB). The native libraries also had to deal with the mock service process management, which could be fiddly on different platforms. It also made it difficult to run consumer tests in parallel, as each mock service process could only handle one thread at a time. The Ruby implementation was also lagging behind in feature development compared to the JVM, as Beth was spending more time on the Pact Broker.

To provide a single Pact implementation that could be used by all the required languages, the decision was made to create a reference implementation in Rust that could be wrapped by each client language using FFI. The distributable package is orders of magnitude smaller, makes it easier to run tests in parallel, and avoids the process-management issues. We have been slowly moving to this Rust core, which solves many of the challenges that bundling Ruby presented.

It is worth noting that the "shared core" approach has largely been a successful exercise in this regard. There are many data points, but the implementation of WIP/pending pacts was released (elapsed, not effort) in just a few weeks for the libraries that wrapped Ruby. In most cases, an update of the Ruby "binaries", mapping flags from the language-specific API to dispatch to the underlying Ruby process, a README update and a release were all that was required. In many cases, new functionality is still published with an update to the Ruby binary, which has been automated through a script.

Beth often refers to the Ruby Goldberg machine, in a nod to Rube Goldberg.

[image: the Pact Ruby Goldberg machine]

We would love your engineering support in bringing efficiencies to the CI/CD processes used in our open source projects, or your artistic skills if someone fancies drawing a Pact Rube Goldberg machine. Please note the one below is a pet project and a work in progress, but it does show off message testing in various Pact languages (Java, JS, .NET, PHP, Python & Ruby).

[image: message testing across Pact languages]

Pact Plugin Philosophy

Being able to mix and match protocol, transport and interaction mode would be helpful in expanding the use cases.

Further, being able to add custom contract testing behaviour for bespoke use cases would be helpful in situations where we can’t justify the effort to build into the framework itself (custom protocols in banking such as AS2805 come to mind).

To give some sense of magnitude to the challenge, the table below shows some of the Pact deficiencies across popular microservice deployments, as of a couple of years ago:

https://user-images.githubusercontent.com/53900/103729694-1e7e1400-5035-11eb-8d4e-641939791552.png

So the Pact plugin ecosystem was born: a way to allow new transport types, content matchers/generators and more to be easily added to the Pact framework, without needing to wait for the core maintainers to roll them out. You can create your own, for public, private or commercial consumption!


Enough blurb, show me da code

Whilst it may be quite technical for some, others will relish the possibilities this unlocks. If you want something or see a use case, but aren’t quite sure how to make it a reality, try out our demos and give us a shout via 🔗 Canny, our feature request board, or 🔗 Slack.

To prove how easy it was, and as a nice little nod back to the grandmother of Pact, Ruby, your very own devo avo put his money where his mouth is and built his own.

Try out our pact plug-in framework here

> This will let you see Pact and the Pact Plugin Framework test multiple transport and content types. You will see Pact used for gRPC, Protobuf and CSV-based messages. I hope it feeds your imagination of the possibilities; it certainly has for me!

And to anchor it back to a picture you probably know from our Pact docs, plugins just sit in the middle and help extend the capabilities.

[image: Pact overview diagram, with plugins in the middle]

Choose Possibilities, Choose plugins, Choose Pact!

A thank you to those who got us here

Standing on the shoulders of giants

How contributing back to the technology community can pay you back in kind.

So I, like many of us in the computer industry, silently or otherwise, suffer from Imposter Syndrome.

As a tester with some technical skills learned well before my uni years, I felt I wasn’t good enough to write production-level code, like the developers.

I came across problems in my day-to-day work, and rather than ask for help from colleagues who might out me as a fraud, I would often run to the internet and seek advice from Dr Google.

He would offer me a course of Stack Overflow, countless blog repos and GitHub repositories chock full of goodies just waiting to be discovered.

As I progressed through my career, the places where we thrived were the places that openly fostered learning and trying new things: stepping outside of your comfort zone, being allowed to fail, to make mistakes and to improve.

It began to make me think: how could I help others in the community? What would I write?

The imposter syndrome snuck in again. Who would read your posts? They would all laugh at you. You would totally choke in a presentation. You’ll be fired.

I’ve got a mortgage and I like old cars, so I totally need my job.

So the years went on, and I didn’t write anything publicly about tech. It was really weird, because those who know me will know that I am happy to talk about anything.

In my younger years, I was heavily into forums; the first was extreme PC overclocking. I would strap things like refrigeration units to my processors, run various tests and post screenshots and pictures.

As I got older and began to purchase and modify old cars, I would document my adventures, and as I gained knowledge around various subjects, or completed a particular thing, I would write a knowledge-base post for others to follow and contribute to.

It was an amazing experience, I met some of my closest friends who will be with me for the rest of my life, some I will meet in another life.

People know me in the car scene not because they’ve met me, but because the information I put out there helped them in some way.

Sure, lots of people disagreed with my posts; some did it in other ways, some helped contribute improvements and suggestions, but it never caused me an issue. No one said: Saf, you don’t belong here, you don’t know anything about cars. (That bit is true, I still don’t, but I try 🙂)

So I started to contribute to the testing community. I signed up with the Ministry of Testing, and not long after they had a competition to win a trip to TestBash in New York. I had never been to America; my curiosity was piqued.

The entrance fee? One post on why you want to attend TestBash, and one blog post about your adventures at the event.

So I rolled up my sleeves and wrote my first tech post that was going to be judged, by testing peers who really had the authority to say: you are rubbish, you shall not pass.

I WON >.< Ermergerd!

My employer offered to fund the flight as part of our individual tech budget, so I found myself on a flight to America.

I also started using some open-source software for the first time, as I had been tasked with building a testing framework for an API blueprint we could roll out across teams. I posted this tweet from the plane.

So I had to write a blog post now; I had won the competition and couldn’t possibly flake out.

https://thefriendlytester.co.uk/2015/11/yousaf-nabi-ermahgerd-testbash-new-york.html

I moved on to a new role, and was involved in a greenfield transformation project.

I used Karate, WireMock & Docker to set up a consumer-driven contract testing workflow.

This involved using WireMock alongside Docker to produce mock services, which were used to agree consumer-driven contracts between service producers & consumers.
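For anyone unfamiliar with WireMock, a stub mapping is just a JSON file served by the mock; something along these lines (the endpoint and payload are made up for illustration) formed the agreed contract for each mock service:

{
  "request": {
    "method": "GET",
    "url": "/accounts/123"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "id": 123, "status": "active" }
  }
}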

Mocks were then built to allow autonomous development between squads, and avoid blockers in delivery.

Tests were written against the mock services and would then be run against the developed service, with downstream/upstream services mocked out, to perform component integration testing; early integration testing ran against each PR, initiated via TeamCity.

A shell script would write the test results to each GitHub PR, along with Slack notifications, providing all members of the squad full and early visibility of test coverage and progress.

This proved to be efficient and invaluable; however, some of the day-to-day constraints on exploratory test data proved to be an issue in our test automation, namely around data provisioning.

It was complex, and it seemed difficult for me to sell the concept wider than my smaller engineering teams, who totally got it.

We had one major legacy provider, with whom we had a reactive relationship, whereby changes would just happen without us knowing.

I felt quite powerless to make the kind of organisational changes in thinking about quality.

I went on a stress control course, it was absolutely amazing.

I felt like Manny from Black Books after eating the little book of calm.

They told me to concentrate on the things I could control and not worry about the things I couldn’t, so I sacked off my job and went to play with old cars for a living.

After a few months had elapsed, I realised I missed being around people, and missed the buzz. I got a call from a former boss, who was now at InfinityWorks, who said: come along and see how you get on.

That was a massive change for me, moving into a consultancy role, and it quickly became apparent how amazing it would be for me, as someone who had been test- and quality-focused, to be able to get involved in every aspect of delivery.

I am eternally grateful for the freedom I was provided here, as it has allowed me to reach heights that honestly would never have been possible in my traditional roles.

On my first account, I ran a whiteboard session about quality and testing with our engineers and one of them mentioned Pact.

As I began to look into it, I realised it encapsulated the work I had done in my previous role around a CDC testing framework, and I loved it.

It didn’t quite do what I wanted it to do, however; as it was open source, I was free to fork it and make whatever changes I wanted. I raised so many issues and fixes that I got recognised in a tech blog by one of IW’s top engineers about open-source work at InfinityWorks; read the post here.

This was such a massive boost and properly kicked my imposter syndrome into touch.

I joined the Pact slack channel http://slack.pact.io/ which was a lovely home with lots of people all talking about consumer driven contract testing, and the challenges when moving to distributed event based systems. I had lots of awesome discussions, and met lots of new people, and funnily enough met a fair few people I had worked with in the past.

Ministry of Testing have their own Slack group too (join here: https://www.ministryoftesting.com/slack_invite), where you can ask any question and find great people always willing to help and talk about all things test. Just don’t forget about the XY problem.

Our client had a big debate over whether to stick with Selenium or move to Cypress. I had never heard of Cypress before, so I ran some side-by-side evaluations and thought it was pretty slick. I wouldn’t advocate running out and switching all your scripts over, but I loved their focus on developer experience, and on not creating a black-box UI testing tool.

Their documentation is incredibly good: they have fantastic recipes, blog posts and GitHub examples. So I thought I would pay something back to the Cypress community too.

I wrote some blog posts on how to address a couple of issues and workarounds for Cypress; see here for one.

This came back to give me big props at work, as a customer on another client account who was struggling had come across my blog post, and then got to have me drop in on a video call to help him out.

I helped the Cypress team get Edge working when it switched to Chromium, and helped beta test Firefox, because multi-browser support was a big sticking point for a lot of people. I personally don’t think it’s an issue, and I advocate doing most of your checking activities without spinning up your browser; see this post for reasons why.

Try not to take things personally, both when using open source software and when maintaining it. There are usually many things going on in people’s lives, and it only takes a few extra moments to be courteous. Your post might help another person, and it’s always nice to get appreciation for something you’ve created and may have long forgotten about. Even if it doesn’t, it will remind you of something you did a few years back, which reduces our cognitive load and can help reduce stress.

I thought I would pay back in kind, so I started creating some tools for Cypress and Pact (you can check these out via https://npm-stat.com).

Don’t forget to link up with people in your local community too.

You can check places like https://www.meetup.com/ or you can join a specialist community such as https://www.ministryoftesting.com/ who hold regional meetups all over the globe.

So I’ve just checked the NPM stats today, and combined there are just over 24.5 million downloads of my packages. Kind of weird that I was scared to show my code to people for fear of being found out; my code is out there, it’s not very good, but it’s doing something, and I’m still in a job, so phew!

It’s not just me at InfinityWorks who is passionate about quality and testing. We have implemented Pact on several client accounts, and we’ve run training sessions with clients, our consultants and our academy superstars. This allowed us to be listed as Pactflow partners, which brings the potential for commercial success. This wasn’t my aim when I started off making contributions, but it is a mega win for my efforts and those of the rest of our great engineers.

So that all brings me, I suppose, to the point of this post.

I spent a load of my spare time giving back little bits to the community, because there had been so many times I have received help myself and been hugely grateful.

That time has a cost, so instead of working on cars, or spending evenings with the family, I’ve been beavering away at a computer.

Karma works in mysterious ways and every single thing I have done in the community, has resulted in positive outcomes, for me individually, for others, for my company, for other companies.

I got offered the dream job: to work with a company aligned with my desire to create beautiful developer experiences, so that teams can safely deploy changes, and spend more time ensuring they are building the right thing rather than keeping the lights on.

The company is Pactflow and I will be taking on a Developer Advocate / Community Shepherd role (which is quite a nice label to help my tech identity crisis).

I’m so excited about the future and can’t wait to work with Pact’s awesome pool of open source maintainers, contributors and users and the Pactflow team.

Cypress Edge – Now available for Windows

Supported versions

  • Microsoft Edge for Windows 10 (Canary Build)
  • Microsoft Edge for Windows 10 (Dev Build)
  • Microsoft Edge for Windows 10 (Beta Build)

Instructions for Windows

  1. Download Microsoft Edge version of choice from https://www.microsoftedgeinsider.com/en-us/
  2. Make a new directory
  3. Run set CYPRESS_INSTALL_BINARY=https://github.com/YOU54F/cypress/releases/download/v3.5.0/cypress_win.zip
  4. Run npm init
  5. Run npm install @you54f/cypress --save
  6. Run node_modules/.bin/cypress open --browser edgeDev to open in interactive mode, and set up Cypress.io's example project
  7. Run node_modules/.bin/cypress run --browser edgeDev or node_modules/.bin/cypress run --browser edgeCanary to run in command line mode.
  8. Rejoice & please pass back some appreciation with a star on the repository! Thanks 🙂

Dynamically generate data in Cypress from CSV/XLSX

A quick walkthrough on how to use data from Excel spreadsheets or CSV files, in order to dynamically generate multiple Cypress tests.

We are going to use a two-column table with username & password for our example, but in reality this could be any data. We have the following table in CSV & XLSX format.

username    password
User1       Password1
User2       Password2
User3       Password3
User4       Password4
User5       Password5
User6       Password6
User7       Password7
User8       Password8
User9       Password9
User10      Password10

And we are going to log in to the following page:

https://the-internet.herokuapp.com/login

First we need to convert our XLSX file to JSON with https://github.com/SheetJS/js-xlsx

import { writeFileSync } from "fs";
import * as XLSX from "xlsx";

try {
  // Read the workbook and convert the "testData" sheet to an array of row objects
  const workBook = XLSX.readFile("./testData/testData.xlsx");
  const jsonData = XLSX.utils.sheet_to_json(workBook.Sheets.testData);
  // Write the rows out as a Cypress fixture
  writeFileSync(
    "./cypress/fixtures/testData.json",
    JSON.stringify(jsonData, null, 4),
    "utf-8"
  );
} catch (e) {
  throw Error(e);
}

or CSV file to JSON with https://www.papaparse.com/

import { readFileSync, writeFileSync } from "fs";
import { parse } from "papaparse";

try {
  // Read the CSV and parse it synchronously, treating the first row as headers
  const csvFile = readFileSync("./testData/testData.csv", "utf8");
  const csvResults = parse(csvFile, { header: true }).data;
  // Write the parsed rows out as a Cypress fixture
  writeFileSync(
    "./cypress/fixtures/testDataFromCSV.json",
    JSON.stringify(csvResults, null, 4),
    "utf-8"
  );
} catch (e) {
  throw Error(e);
}

In our cypress test file, we are going to

  1. Import our generated JSON file into testData
  2. Loop over each testDataRow, inside the describe block, and set the data object with our username & password
  3. Set up a mocha context with a dynamically generated title, unique for each data row
  4. Write a single test inside the it block using our data attributes; this will be executed as 10 separate tests
import { login } from "../support/pageObjects/login.page";
const testData = require("../fixtures/testData.json");
describe("Dynamically Generated Tests", () => {
  testData.forEach((testDataRow: any) => {
    const data = {
      username: testDataRow.username,
      password: testDataRow.password
    };
    context(`Generating a test for ${data.username}`, () => {
      it("should fail to login for the specified details", () => {
        login.visit();
        login.username.type(data.username);
        login.password.type(`${data.password}{enter}`);
        login.errorMsg.contains("Your username is invalid!");
        login.logOutButton.should("not.exist");
      });
    });
  });
});
Voila – Dynamically generated tests from Excel or CSV files! Enjoy

You can extend this further by

  • Manipulating the data in the test script prior to using it in your test, such as shifting a date of birth by an offset (sketched below)
  • Having different outcomes in your test, or running different assertions, based on a parameter in your test data file.
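As a rough illustration of the first idea, here is a minimal sketch (assuming a hypothetical dateOfBirth column in the fixture, in ISO format) that shifts each date of birth by an offset before the test uses it:

// Hypothetical helper: shift an ISO date string (e.g. "1990-01-31") by N years
const shiftDateOfBirth = (isoDate: string, offsetYears: number): string => {
  const date = new Date(isoDate);
  date.setFullYear(date.getFullYear() + offsetYears);
  return date.toISOString().split("T")[0];
};

// e.g. inside the forEach, before building the `data` object:
// const dob = shiftDateOfBirth(testDataRow.dateOfBirth, 18);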

A full working example can be downloaded here:- https://github.com/YOU54F/cypress-dynamic-data

git clone git@github.com:YOU54F/cypress-dynamic-data.git

yarn install

To convert Excel files to JSON

make convertXLStoJSON or npm run convertXLStoJSON

  • File:- testData/convertXLStoJSON.ts
  • Input:- testData/testData.xlsx
  • Output:- cypress/fixtures/testData.json

To convert CSV to JSON

make convertCSVtoJSON or yarn run convertCSVtoJSON

  • File:- testData/convertCSVtoJSON.ts
  • Input:- testData/testData.csv
  • Output:- cypress/fixtures/testDataFromCSV.json

To see the test in action

  • export CYPRESS_SUT_URL=https://the-internet.herokuapp.com
  • npx cypress open --env configFile=development or make test-local-gui

Open the script login.spec.ts which will generate a test for every entry in the CSV or XLS (default) file.

If you wish to read from the CSV, in the file cypress/integration/login.spec.ts

Change const testData = require("../fixtures/testData.json"); to

const testData = require("../fixtures/testDataFromCSV.json");

Configuring Cypress to work with iFrames & cross-origin sites.

Currently working Browsers & Modes

  •  Chrome Headed
    •  Cypress UI
    •  Cypress CLI

There are a few considerations when automating your web application with Cypress that you may come across, and which may lead you to the Cypress Web Security docs or to trawling through raised Cypress issues for potential workarounds/solutions.

Problems you may encounter

Cypress Docs – disabling web security

The docs describe how you can:

  • Display insecure content
  • Navigate to any superdomain without cross-origin errors
  • Access cross-origin iframes that are embedded in your application

simply by setting chromeWebSecurity to false in your cypress.json:

{
  "chromeWebSecurity": false
}

If you set it in your base cypress.json, you will apply this to all your sites, which may not be ideal: you may only want to cater for insecure content on your dev machine, but keep content secure when testing in prod.

See how to configure Cypress per env configuration files
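One common shape for this, sketched under the assumption that your per-environment files live at cypress/config/<name>.json, is to resolve the file in your plugins file from the configFile value passed on the command line (as with --env configFile=development later in this post):

// cypress/plugins/index.js (sketch)
const path = require('path');

module.exports = (on, config) => {
  // --env configFile=development selects cypress/config/development.json
  const file = config.env.configFile || 'development';
  // Returning an object from the plugins function overrides the resolved config
  return require(path.resolve('cypress', 'config', `${file}.json`));
};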

However, we wanted to check a journey that integrates with a third party, and came across some cross-site issues:

Uncaught DOMException: Blocked a frame with origin "https://your_site_here" from accessing a cross-origin frame.

So we set chromeWebSecurity to false, and then get this error:

Refused to display 'https://your_site_here' in a frame because it set 'X-Frame-Options' to 'sameorigin'.

Looks like these guys had the same issue

Cypress Issue #1763

Cypress Issue #944

So hi-ho, it’s off to the docs we go:

Chromium Site Isolation Docs

chromium-command-line-switches

We want to disable the following features

  • --disable-features=CrossSiteDocumentBlockingAlways,CrossSiteDocumentBlockingIfIsolating
  • --disable-features=IsolateOrigins,site-per-process
    • IsolateOrigins- Require dedicated processes for a set of origins, specified as a comma-separated list.
    • site-per-process – Enforces a one-site-per-process security policy: Each renderer process, for its whole lifetime, is dedicated to rendering pages for just one site.
      * Thus, pages from different sites are never in the same process.
      * A renderer process’s access rights are restricted based on its site.
      * All cross-site navigations force process swaps. <iframe>s are rendered out-of-process whenever the src= is cross-site.

So let’s add the following to cypress/plugins/index.js:

const path = require('path');

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      args.push("--disable-features=CrossSiteDocumentBlockingIfIsolating,CrossSiteDocumentBlockingAlways,IsolateOrigins,site-per-process");
    }
    return args;
  });
};

We now want to drop the following headers, to allow all pages to be iframed:

  • content-security-policy
  • x-frame-options

We can use the Ignore X-Frame headers Chrome extension and load it into our Cypress instance. You can download it from https://chrome-extension-downloader.com/ and place it in your cypress/extensions folder, or you can get the source code directly from https://gist.github.com/dergachev/e216b25d9a144914eae2, saving the files to cypress/extensions/ignore-x-frame-headers.

Then add the following to cypress/plugins/index.js:

const path = require('path');

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      const ignoreXFrameHeadersExtension = path.join(__dirname, '../extensions/ignore-x-frame-headers');
      // Push the flag itself (the original nested args.push(args.push(...))
      // would also push the array length by mistake)
      args.push(`--load-extension=${ignoreXFrameHeadersExtension}`);
    }
    return args;
  });
};

We can also automate the download of the extension for CI systems.

npm i chrome-ext-downloader --save-dev or yarn add chrome-ext-downloader --dev

put the following in package.json

{
  "scripts": {
    "download-extension": "ced gleekbfjekiniecknbkamfmkohkpodhe extensions/ignore-x-frame-headers"
  },
  "dependencies": {
    "chrome-ext-downloader": "^1.0.4"
  }
}

Our final cypress/plugins/index.js file, incorporating both changes, will look like this:

const path = require('path');

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      const ignoreXFrameHeadersExtension = path.join(__dirname, '../extensions/ignore-x-frame-headers');
      args.push(`--load-extension=${ignoreXFrameHeadersExtension}`);
      args.push("--disable-features=CrossSiteDocumentBlockingIfIsolating,CrossSiteDocumentBlockingAlways,IsolateOrigins,site-per-process");
    }
    return args;
  });
};

Note:- Since writing this article, the extension has been deleted from the Google extension store; although the extension itself still exists, it can no longer be downloaded with chrome-ext-downloader.

Source code can be found here :- https://gist.github.com/dergachev/e216b25d9a144914eae2

Extension can still be downloaded from https://www.crx4chrome.com/extensions/gleekbfjekiniecknbkamfmkohkpodhe/

If there is enough demand, I will republish the source-code and publish to the chrome web store, with full credits to the original author.

Jest-Pact – A Jest-adaptor to help write Pact files with ease

In previous posts, I have spoken about Pact.io. A wonderful set of tools, designed to help you and your team develop smarter, with consumer-driven contract tests.

We use Jest at work to test our TypeScript code, so it made sense to use Jest as our testing framework, to write our Pact unit tests with.

The Jest example in Pact-JS involves a lot of setup, which resulted in a fair bit of cognitive load before a developer could start writing their contract tests.

Inspired by a post by Tim Jones, one of the maintainers of Pact-JS and a member of the DiUS team who built Pact, I decided to build and release an adapter for Jest, which would abstract the Pact setup away from the developer, leaving them to concentrate on the tests.

Features

  •  Instantiates the PactOptions for you
  •  Sets up the Pact mock service before and after hooks, so you don’t have to
  •  Assigns random ports and passes the port back to the user, so tests can run in parallel without port clashes

Adapter Installation

npm i jest-pact --save-dev

OR

yarn add jest-pact --dev

Usage

pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
  // regular pact tests go here
});

Example

Say that your API layer looks something like this:

import axios from 'axios';

const defaultBaseUrl = "http://your-api.example.com"

export const api = (baseUrl = defaultBaseUrl) => ({
     getHealth: () => axios.get(`${baseUrl}/health`)
                    .then(response => response.data.status)
    /* other endpoints here */
})

Then your test might look like:

import { pactWith } from 'jest-pact';
import { Matchers } from '@pact-foundation/pact';
import { api } from 'yourCode';

pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
  let client;
  
  beforeEach(() => {
    client = api(provider.mockService.baseUrl)
  });

  describe('health endpoint', () => {
    // Here we set up the interaction that the Pact
    // mock provider will expect.
    //
    // jest-pact takes care of validating and tearing 
    // down the provider for you. 
    beforeEach(() =>
      provider.addInteraction({
        state: "Server is healthy",
        uponReceiving: 'A request for API health',
        willRespondWith: {
          status: 200,
          body: {
            status: Matchers.like('up'),
          },
        },
        withRequest: {
          method: 'GET',
          path: '/health',
        },
      })
    );
    
    // You also test that the API returns the correct 
    // response to the data layer. 
    //
    // Although Pact will ensure that the provider
    // returned the expected object, you need to test that
    // your code receives the right object.
    //
    // This is often the same as the object that was 
    // in the network response, but (as illustrated 
    // here) not always.
    it('returns server health', () =>
      client.getHealth().then(health => {
        expect(health).toEqual('up');
      }));
  });
});

You can make your tests easier to read by extracting your request and responses:

/* pact.fixtures.js */
import { Matchers } from '@pact-foundation/pact';

export const healthRequest = {
  uponReceiving: 'A request for API health',
  withRequest: {
    method: 'GET',
    path: '/health',
  },
};

export const healthyResponse = {
  status: 200,
  body: {
    status: Matchers.like('up'),
  },
};

/* the test file, e.g. health.pact.test.js */
import { pactWith } from 'jest-pact';
import { healthRequest, healthyResponse } from "./pact.fixtures";

import { api } from 'yourCode';

pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
  let client;
  
  beforeEach(() => {
    client = api(provider.mockService.baseUrl)
  });

  describe('health endpoint', () => {

    beforeEach(() =>
      provider.addInteraction({
        state: "Server is healthy",
        ...healthRequest,
        willRespondWith: healthyResponse
      })
    );
    
    it('returns server health', () =>
      client.getHealth().then(health => {
        expect(health).toEqual('up');
      }));
  });
});

Configuration

pactWith(PactOptions, provider => {
  // regular pact tests go here
});

interface PactOptions {
  provider: string;
  consumer: string;
  port?: number; // defaults to a random port if not provided
  pactfileWriteMode?: PactFileWriteMode;
  dir?: string; // defaults to pact/pacts if not provided
}

type LogLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";
type PactFileWriteMode = "overwrite" | "update" | "merge";
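Putting those options to use, a setup that overrides the defaults might look like this (the values are illustrative):

pactWith(
  {
    consumer: 'MyConsumer',
    provider: 'MyProvider',
    port: 8989,             // fixed port instead of a random one
    dir: 'consumer/pacts',  // where the pact file is written
    pactfileWriteMode: 'update',
  },
  provider => {
    // regular pact tests go here
  }
);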

Defaults

  • Log files are written to /pact/logs
  • Pact files are written to /pact/pacts

Jest Watch Mode

By default Jest will watch all your files for changes, which means it will run in an infinite loop, as your pact tests generate JSON pact files and log files.

You can get round this by using the following watchPathIgnorePatterns: ["pact/logs/*","pact/pacts/*"] in your jest.config.js

Example

module.exports = {
  testMatch: ["**/*.test.(ts|js)", "**/*.it.(ts|js)", "**/*.pacttest.(ts|js)"],
  watchPathIgnorePatterns: ["pact/logs/*", "pact/pacts/*"]
};

You can now run your tests with jest --watch; when you change a pact file or your source code, your pact tests will run.

Examples of usage of jest-pact

See Jest-Pact-Typescript, which showcases a full consumer workflow written in TypeScript with Jest, using this adaptor.

  •  Example pact tests
    •  AWS v4 Signed API Gateway Provider
    •  Soap API provider
    •  File upload API provider
    •  JSON API provider

Examples Installation

  • clone repository git@github.com:YOU54F/jest-pact-typescript.git
  • Run yarn install
  • Run yarn run pact-test

Generated pacts will be output to pact/pacts. Log files will be output to pact/logs.

Credits

Slack Reporting for Cypress.io

I’ve been using Cypress for front-end testing for the last year, and we have been executing it in our CI pipeline with CircleCI. CircleCI offers Slack notifications for builds, but it doesn’t offer the ability to customise those notifications with build metadata. So I decided to write a Slack reporter that would do the following:

  • Notify a channel when tests are complete
  • Display the test run status (Passed / Failed / Build Failure), plus number of tests
  • Display VCS metadata (Branch Name / Triggering Commit & Author)
  • Display VCS pull request metadata (number and link to PR)
  • Provide a link to CI build log
  • Provide a link to a test report generated with Mochawesome
  • Provide links to screenshots / videos of failing test runs

The source code is available here :- https://github.com/YOU54F/cypress-slack-reporter

It has been released as a downloadable package from NPM, read below for details on how to get it, and how to use it.

As this is an add-on for Cypress, we still need a few prerequisites:

1. Download the npm package direct from the registry

npm install cypress-slack-reporter --save-dev

or

yarn add cypress-slack-reporter --dev

2. Create a Slack incoming webhook URL at Slack Apps

3. Set up an environment variable to hold your webhook URL, created in the last step, and save it as SLACK_WEBHOOK_URL

$ export SLACK_WEBHOOK_URL=yourWebhookUrlHere

4. Add the following in your cypress.json file

{
  ...
  "reporter": "cypress-multi-reporters",
  "reporterOptions": {
    "configFile": "reporterOpts.json"
  }
}

5. Add the following in a newly created reporterOpts.json file

{
  "reporterEnabled": "mochawesome",
  "mochawesomeReporterOptions": {
    "reportDir": "cypress/reports/mocha",
    "quiet": true,
    "overwrite": false,
    "html": false,
    "json": true
  }
}

6. Run Cypress in run mode, which will generate a mochawesome test report per spec file.

7. We now need to combine the separate mochawesome files into a single file, using mochawesome-merge

$ mkdir mochareports && npx mochawesome-merge --reportDir cypress/reports/mocha > mochareports/report-$(date +'%Y%m%d-%H%M%S').json

8. We will now generate our test report with mochawesome, using our consolidated test report

$ npx marge mochareports/*.json -f report-$(date +'%Y%m%d-%H%M%S') -o mochareports

9. We can now run our Slack Reporter, and set any non-default options

$ npx cypress-slack-reporter --help

  Usage: index.ts [options]

  Options:
    -v, --version            output the version number
    --vcs-provider [type]    VCS Provider [github|bitbucket|none] (default: "github")
    --ci-provider [type]     CI Provider [circleci|none] (default: "circleci")
    --report-dir [type]      mochawesome json & html test report directory, relative to your package.json (default: "mochareports")
    --screenshot-dir [type]  cypress screenshot directory, relative to your package.json (default: "cypress/screenshots")
    --video-dir [type]       cypress video directory, relative to your package.json (default: "cypress/videos")
    --verbose                show log output
    -h, --help               output usage information

Our generated slack reports will look like below.

[image: example Slack alert]

Currently we support CircleCI as the CI provider, and GitHub/Bitbucket as VCS providers.

For other providers, please raise a GitHub issue, or pass the --ci-provider none flag to get a simple Slack message based on the mochawesome report status.

It is possible to run the slack-reporter programmatically via a script:

// tslint:disable-next-line: no-reference
/// <reference path='./node_modules/cypress/types/cypress-npm-api.d.ts'/>
import * as CypressNpmApi from "cypress";
import {slackRunner}from "cypress-slack-reporter/bin/slack/slack-alert";
// tslint:disable: no-var-requires
const marge = require("mochawesome-report-generator");
const { merge } = require("mochawesome-merge");
// tslint:disable: no-var-requires

CypressNpmApi.run({
  reporter: "cypress-multi-reporters",
  reporterOptions: {
    reporterEnabled: "mocha-junit-reporters, mochawesome",
    mochaJunitReportersReporterOptions: {
      mochaFile: "cypress/reports/junit/test_results[hash].xml",
      toConsole: false
    },
    mochawesomeReporterOptions: {
      reportDir: "cypress/reports/mocha",
      quiet: true,
      overwrite: true,
      html: false,
      json: true
    }
  }
})
  .then(async results => {
    const generatedReport =  await Promise.resolve(generateReport({
      reportDir: "cypress/reports/mocha",
      inline: true,
      saveJson: true,
    }))
    // tslint:disable-next-line: no-console
    console.log("Merged report available here:-",generatedReport);
    return generatedReport
  })
  .then(generatedReport => {
    const base = process.env.PWD || ".";
    const program: any = {
      ciProvider: "circleci",
      videoDir: `${base}/cypress/videos`,
      vcsProvider: "github",
      screenshotDir: `${base}/cypress/screenshots`,
      verbose: true,
      reportDir: `${base}/cypress/reports/mocha`
    };
    const ciProvider: string = program.ciProvider;
    const vcsProvider: string = program.vcsProvider;
    const reportDirectory: string = program.reportDir;
    const videoDirectory: string = program.videoDir;
    const screenshotDirectory: string = program.screenshotDir;
    const verbose: boolean = program.verbose;
    // tslint:disable-next-line: no-console
    console.log("Constructing Slack message with the following options", {
      ciProvider,
      vcsProvider,
      reportDirectory,
      videoDirectory,
      screenshotDirectory,
      verbose
    });
    const slack = slackRunner(
      ciProvider,
      vcsProvider,
      reportDirectory,
      videoDirectory,
      screenshotDirectory,
      verbose
    );
     // tslint:disable-next-line: no-console
     console.log("Finished slack upload")

  })
  .catch((err: any) => {
    // tslint:disable-next-line: no-console
    console.log(err);
  });

function generateReport(options: any) {
  return merge(options).then((report: any) =>
    marge.create(report, options)
  );
}

I have been extending the reporter to allow uploading the mochawesome report and Cypress artefacts (screenshots & videos) to an S3 bucket, and using the returned bucket links in the Slack report. It is currently working in a PR, but needs adding to the CLI before it can be merged to the master branch.

The new Chromium-based Microsoft Edge for Mac has been leaked — And it works with Cypress, and now you can test it too!

Following on from my previous blog post here about getting Cypress working with Microsoft Edge, I have released versions that you can test out yourself 🙂

An example repository here:- https://github.com/YOU54F/cypress-edge

  1. Download Microsoft Edge for Mac (Canary Build) for MacOS here
  2. Make a new directory
  3. Run export CYPRESS_INSTALL_BINARY=https://github.com/YOU54F/cypress/releases/download/v3.2.0-edge.1/cypress-3.2.0-edge.1.zip
  4. Run npm init
  5. Run npm install @you54f/cypress --save
  6. Run node_modules/.bin/cypress open --browser edge to open in interactive mode, and set up Cypress.io's example project
  7. Run node_modules/.bin/cypress run --browser edge to run in command line mode.
  8. Rejoice & please pass back some appreciation with a star on the repository! Thanks 🙂

The new Chromium-based Microsoft Edge for Mac has been leaked — And it works with Cypress.

So I’ve been using Cypress for a while now to test our apps. It’s an incredible testing tool, with many features developers will feel at home with, providing incredibly fast and detailed feedback that remote-browser tools cannot compete with.

However, there has been a bone of contention for some: the lack of cross-browser compatibility. For now, it will only work with Chrome and Electron.

Yep, no IE10/11, Firefox, Safari, Opera etc.

Best not delete your favourite Selenium based tool just yet.

However, there is some light on the horizon, and from the likes of Microsoft no less.

Rumours floated around late last year that Microsoft were ditching efforts on their budding IE11 replacement, Edge, in favour of, well, Edge. Just based on Chromium this time. You can get it for Windows 10 here from Windows Insiders.

If you visit the above page on macOS, you’ll see a button asking you to be notified; however, Twitter user WalkingCat posted up links from Microsoft’s CDN.

Microsoft Edge for Mac (Canary Build)

Microsoft Edge for Mac (Dev Build)

So I thought I would spin up Cypress and see if I could get it to work with Edge, but it choked on the folder name.

Hmmm, let’s rename the app so it doesn’t have spaces in it.

So we need to tell Cypress about Edge.

It’s listed now, good start.

Let’s fire up the Cypress runner in GUI mode.

Result!!!

Let’s run all the integration tests.

As if they all passed first time. How about the CLI?

Sweet! Not bad for a first run! Now we just need to wait for Microsoft to release Chromium Edge to the masses. Hopefully a Linux flavour will be on the horizon; I will keep you posted if so!

Follow the PR to track Cypress & Microsoft Edge – https://github.com/cypress-io/cypress/pull/4203

That’s all folks, thanks for reading, and feel free to follow me at https://github.com/YOU54F for more of my fumblings in code.

Update :- I’ve now followed up this with another blog post where I have published a beta version of Cypress with Edge support for testing purposes. See here for the blog post with a link to an example GitHub repo and installation instructions!

Securing the Pact Broker with nginx and LetsEncrypt

Dockerised Pact Broker – Secure Implementation

Background & Aim

The cool guys and girls over at DiUS offer a dockerised implementation of the Pact Broker for free! I know, amazing, right? You can get it right now here.

However out of the box, the Docker solution is not secure. There is an example SSL configuration, utilising nginx as a reverse proxy, to allow access solely via HTTPS, provided by the PACT team.

I have extended this implementation to ensure we are following current industry standards for a secure nginx setup.

Additionally, we will go through the process of generating your own certificates and signing them with a Certificate Authority, to give confidence to your stakeholders and site visitors.

We will only be using open-source tooling because open-source ftw <3.

If you haven’t already read my post about using Pact & Swagger to complement your development workflow, you can check it out here.

Prerequisites

Additional Notes

  • This example will use a dockerised postgres instance, as described in the main pact_broker-docker readme, just so you can run the example end-to-end.
  • If you are able to use your cloud provider to sign your certificates, you may not need to use lets-encrypt. In my example, I am using a self-managed AWS EC2 instance, which cannot utilise AWS Certificate Manager, as you are unable to download the generated certificates. If you are using Fargate, this is not an issue.

Initial Setup

  1. Install Docker on your instance
  2. Copy the contents of ssl_letsencrypt to your instance and rename to pact-broker
  3. Replace the following occurrences found in the *.sh & docker-compose.yml files in pact-broker & pact-broker/lets-encrypt
    • domain_name – Replace with your registered domain name
    • email_address – Replace with your email address. It should match the registered domain
    • username – Replace with the name of your user (it is assumed your folder will live in /home/username/pact-broker but you can change to suit)
  4. Rename .env.example to .env.

Get to know your environment file

The .env file contains the credentials we will pass into the docker-compose file, and ultimately to the pact-broker. More options can be added as per the Pact.io documentation, but they will also require adding into your docker-compose.yml file.

The database variables are set up to talk to the postgres database loaded via docker-compose.

PACT_BROKER_DATABASE_USERNAME=postgres
PACT_BROKER_DATABASE_PASSWORD=postgres
PACT_BROKER_DATABASE_HOST=postgres
PACT_BROKER_DATABASE_NAME=postgres
PACT_BROKER_BASIC_AUTH_USERNAME=readwrite
PACT_BROKER_BASIC_AUTH_PASSWORD=readwrite
PACT_BROKER_BASIC_AUTH_READ_ONLY_USERNAME=readonly
PACT_BROKER_BASIC_AUTH_READ_ONLY_PASSWORD=readonly
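As an example of adding one: if you wanted to drive the broker log level from the environment file too (it is set directly in docker-compose.yml later in this post), you could add the variable here and map it through; the same pattern applies to any other supported broker setting:

# .env
PACT_BROKER_LOG_LEVEL=WARN

# docker-compose.yml, under broker_app -> environment
PACT_BROKER_LOG_LEVEL: $PACT_BROKER_LOG_LEVEL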

NOTE: Please do not commit your .env file to source control. If you do, consider your credentials compromised and update them straight away.

Generate your signed SSL certificate with Lets-Encrypt

Lets-Encrypt is an open-source project which allows you to create SSL certificates and sign them against the Lets-Encrypt Certificate Authority, in a bid to help make the web safer.

  1. Change into the lets-encrypt folder
  2. Run docker-compose up -d. This will load up a single page application that lets-encrypt can read from, in order to verify that the domain is owned by you.
  3. Run ./makecertsstaging.sh – This will generate sample certificates for you, in lets-encrypt/out
  4. Run ./makecertsinfostaging.sh – This will provide information about the generated certificates for you.
  5. If all the above steps ran ok, we can safely remove the out dir in lets-encrypt/out to remove our staged certificates.
  6. Run ./makecerts.sh – This will generate your signed certificates for you, in lets-encrypt/out
  7. Run ./makecertsinfolive.sh – This will provide information about the generated certificates for you.

Certificates will be output to pact-broker/letsencrypt/out/etc/letsencrypt/live//

The folder is actually sym-linked; the actual certificates live in the archive folder.

Each generated certificate lasts for three months; a later section discusses renewals.

Generate your Diffie-Hellman param certificate

  1. Change into the lets-encrypt folder
  2. Run ./gen_dhparam.sh. This will take a while (5-10 minutes) so go make a brew.

Check your nginx configuration

There is a lot going on in the nginx configuration. I will touch on why each component is there, and you can elect to remove as you wish.

In this section, we are going to add headers to every request, to avoid cross-site scripting attacks

add_header X-XSS-Protection "1; mode=block";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

Remove the nginx version number from responses to avoid leaking implementation details.

server_tokens off;

In the first server block, which is for HTTP requests, we do the following:

  • Listen to all requests on port 80. Our server name is the name of the pact broker docker image, as defined in the docker-compose.yml:

listen 80 default_server;
server_name broker;

  • Only allow GET and HEAD methods if accessed via port 80. Add in any request methods you wish to allow; I prefer to whitelist rather than blacklist:

if ( $request_method !~ ^(GET|HEAD)$ ) {
    return 405;
}

Redirect all HTTP requests to HTTPS. We drop any request parameters that were provided, to avoid parameter injection in our redirect to HTTPS:

return 301 https://$host;

The second server block is for our HTTPS requests.

  • Listen on port 443 and enable SSL:

listen 443 ssl;
server_name broker;

  • Our certificates are loaded into the docker container via the docker-compose.yml volumes section, on the following paths:

ssl_certificate "/etc/nginx/ssl/certs/fullchain.pem";
ssl_certificate_key "/etc/nginx/ssl/certs/privkey.pem";
ssl_dhparam "/etc/nginx/ssl/dhparam/dhparams.pem";
  • Enable SSL protocols. TLSv1 is insecure and shouldn’t be used, and TLSv1.1 is weak; for compliance reasons, neither should be used:

ssl_protocols TLSv1.2 TLSv1.3;
  • Only enable known strong SSL ciphers. It is a balancing act between using strong ciphers and compatibility: a site scoring 100% on a cipher test would not be compatible with all devices. The current set scores 95% on the SSL Labs security test.
  • Let’s also tell nginx to use this list:

ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
ssl_prefer_server_ciphers on;
  • ecdh provides a nice default for nginx, as not all OpenSSL implementations do it well
  • Session tickets don’t provide forward secrecy
  • Limit the SSL buffer size (default 16k iirc)
  • Maintain SSL connections for 10 minutes
  • Switch off gzip compression as it can be vulnerable; enable if needed:

ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_buffer_size 4k;
ssl_session_cache shared:SSL:10m;
gzip off;

Add Strict Transport Security headers:

add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

  • I am only enabling the following methods on HTTPS requests:

if ( $request_method !~ ^(POST|PUT|PATCH|GET|HEAD|DELETE)$ ) {
    return 405;
}
  • Whilst implementing webhooks, I noted that URL-based tokens are visible to both rw/ro users of the pact-broker, so we are blocking access to the /webhooks URL. This will also block /webhooks/**
  • This shows how you can provide granular control of traffic in nginx; you could allow POSTs only with an if statement:

error_page 418 = @blockAccess;

location /webhooks {
    return 418;
}

location @blockAccess {
    deny all;
}

The following block is used to proxy all requests received through nginx to the pact broker.

  • proxy_set_header entries ensure the redirect URLs are correct in the HAL browser, and additionally enforce our secure headers.
  • proxy_hide_header entries avoid leaking details of our pact_broker & passenger versions.
  • proxy_pass sends requests received by nginx through to the broker.
location / {

    # Setting headers for redirects
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Scheme "https";
    proxy_set_header X-Forwarded-Port "443";
    proxy_set_header X-Forwarded-Ssl "on";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    proxy_set_header X-XSS-Protection "1; mode=block";
    proxy_set_header X-Frame-Options DENY;
    proxy_set_header X-Content-Type-Options nosniff;

    # Hide return headers to avoid leaking implementation details
    proxy_hide_header X-Powered-By;
    proxy_hide_header X-Pact-Broker-Version;

    # Perform the proxy pass to our site
    proxy_pass http://broker:80;
}

Get to know your docker-compose file

Each docker container is connected to a specified network:

networks:
  - docker-network

Standard postgres configuration.

postgres:
  image: postgres
  healthcheck:
    test: psql postgres --command "select 1" -U postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: password
    POSTGRES_DB: postgres
  networks:
    - docker-network

The pact broker configuration with basic auth enabled.

  • Variables stored in the .env file are read by docker-compose on starting the containers
  • They are read into the docker-compose file as variables prefixed with $
  • You can add additional supported Pact parameters, either directly in here, or in your env file.
broker_app:
  container_name: 'pact-broker'
  image: dius/pact-broker:latest
  links:
    - postgres
  environment:
    PACT_BROKER_DATABASE_USERNAME: $PACT_BROKER_DATABASE_USERNAME
    PACT_BROKER_DATABASE_PASSWORD: $PACT_BROKER_DATABASE_PASSWORD
    PACT_BROKER_DATABASE_HOST: $PACT_BROKER_DATABASE_HOST
    PACT_BROKER_DATABASE_NAME: $PACT_BROKER_DATABASE_NAME
    PACT_BROKER_BASIC_AUTH_USERNAME: $PACT_BROKER_BASIC_AUTH_USERNAME
    PACT_BROKER_BASIC_AUTH_PASSWORD: $PACT_BROKER_BASIC_AUTH_PASSWORD
    PACT_BROKER_BASIC_AUTH_READ_ONLY_USERNAME: $PACT_BROKER_BASIC_AUTH_READ_ONLY_USERNAME
    PACT_BROKER_BASIC_AUTH_READ_ONLY_PASSWORD: $PACT_BROKER_BASIC_AUTH_READ_ONLY_PASSWORD
    PACT_BROKER_LOG_LEVEL: WARN
  networks:
    - docker-network

The configuration for nginx.

  • We link the pact broker container, called broker_app, but reference it as broker, which is used as our server name in the nginx configuration.
  • The first volume link loads in our nginx.conf file
  • The next three volumes point at the out directory of lets-encrypt.
  • The last volume loads in the example site we used for certification; it will be used for renewing our certificates, which we will touch on after running our example.
nginx:
  container_name: 'pact-nginx'
  image: nginx:alpine
  links:
    - broker_app:broker
  volumes:
    - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    - ./letsencrypt/out/etc/letsencrypt/live//fullchain.pem:/etc/nginx/ssl/certs/fullchain.pem
    - ./letsencrypt/out/etc/letsencrypt/live//privkey.pem:/etc/nginx/ssl/certs/privkey.pem
    - ./letsencrypt/out/etc/letsencrypt/live//chain.pem:/etc/nginx/ssl/certs/chain.pem
    - ./letsencrypt/dhparam/dhparams.pem:/etc/nginx/ssl/dhparam/dhparams.pem
    - ./letsencrypt/out/renewal:/data/letsencrypt
  ports:
    - "80:80"
    - "8443:443"
  networks:
    - docker-network

Running our example

If you have not already generated your certificates, please do so now

  1. Change into the lets-encrypt folder
  2. Run docker-compose up -d. This will load up a single page application that lets-encrypt can read from, in order to verify that the domain is owned by you.
  3. Run ./makecertsstaging.sh – This will generate sample certificates for you, in lets-encrypt/out
  4. Run ./makecertsinfostaging.sh – This will provide information about the generated certificates for you.
  5. If all the above steps ran ok, we can safely remove the out dir in lets-encrypt/out to remove our staged certificates.
  6. Run ./makecerts.sh – This will generate your signed certificates for you, in lets-encrypt/out
  7. Run ./makecertsinfolive.sh – This will provide information about the generated certificates for you.

We can now run our secure broker

  1. Modify the docker-compose.yml file as required.
  2. Run docker-compose up to get a running Pact Broker and a clean Postgres database

Testing your setup

curl -v http://localhost
# This will redirect to https

curl -v http://localhost/matrix
# This will redirect to https root, not matrix

curl -v https://localhost/matrix
# This will redirect to https matrix page
# Note we don't provide the flag -k (insecure) as the website is certified

curl -v http://localhost/webhooks
curl -v https://localhost/webhooks
# This will return a 418 error

Renewing your certificates

We generated certificates with LetsEncrypt; however, they will expire after 3 months. We have aimed to minimise disruption by incorporating the renewal process into our configuration, so we just need to run a script to generate new certificates and bounce our app.

  1. Ensure you are in the root folder, in our example the pact-broker folder
  2. Run ./renewcerts_staging.sh – This will do a dry run of the renewal process, or inform you that you don’t need to renew yet.
  3. Run ./renewcerts.sh – This will run the renewal process, generate your new certificates and restart your docker instance

Certificates will be output to pact-broker/letsencrypt/out/etc/letsencrypt/live//

Note, the folder is the same as for our old certificates, so there is no change to our docker-compose file. This is because the location is sym-linked; the actual certificates live in the archive folder.
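Renewal still has to be remembered, so you may want to schedule it. A crontab sketch, assuming the folder layout from earlier (/home/username/pact-broker) and that the script is a no-op when nothing is due for renewal:

# Attempt renewal every Monday at 03:00; logs appended for later inspection
0 3 * * 1 cd /home/username/pact-broker && ./renewcerts.sh >> renewcerts.log 2>&1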

Replace the dockerised postgres DB with a proper instance

You will need to make some minor changes to utilise a non-dockerised Postgres instance.

Update the following environment variables in your .env file

PACT_BROKER_DATABASE_USERNAME=postgres
PACT_BROKER_DATABASE_PASSWORD=postgres
PACT_BROKER_DATABASE_HOST=postgres
PACT_BROKER_DATABASE_NAME=postgres

and comment out, or remove the following lines from your docker-compose.yml

# postgres:
#   image: postgres
#   healthcheck:
#     test: psql postgres --command "select 1" -U postgres
#   ports:
#     - "5432:5432"
#   environment:
#     POSTGRES_USER: postgres
#     POSTGRES_PASSWORD: password
#     POSTGRES_DB: postgres

broker_app:
  image: dius/pact-broker
  links:
    # - postgres
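For illustration, pointed at an externally managed instance, the updated .env values might look something like this (the host and credentials are placeholders):

PACT_BROKER_DATABASE_USERNAME=pact_user
PACT_BROKER_DATABASE_PASSWORD=a-strong-password
PACT_BROKER_DATABASE_HOST=your-db-host.example.com
PACT_BROKER_DATABASE_NAME=pact_broker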

General Pact Broker configuration and usage

Documentation for the Pact Broker application itself can be found in the Pact Broker Wiki

Troubleshooting

See the Troubleshooting page on the wiki.