Evolving to a Resilient API Future: Lessons from the Past, Tools for the Future

If you would prefer a visual / narrated version of this blog post, feel free to watch my talk at WWDC25

APIs are the digital backbone of modern software, powering everything from online banking and retail to AI-powered automation and connected vehicles. They form the foundation of digital transformation strategies, but as their ubiquity grows, so does the impact of their failure. In this post, we’ll explore the historical pitfalls of API design, the challenges posed by modern microservice architectures, and how contract testing offers a path toward resilient, scalable, and future-ready APIs. 

APIs: The critical infrastructure of the digital age 

APIs aren’t just developer tools anymore – they’re mission-critical infrastructure. Whether it’s unlocking a car via a mobile app or facilitating multi-billion dollar e-commerce transactions, APIs enable seamless digital interactions. But as reliance on APIs deepens, so do the consequences of poorly designed, insecure, or untested endpoints. 

What we’ve learned from 20+ years of API failures 

Let’s explore a few high-profile examples of API-related failures: 

1. Zombie APIs and legacy debt 

APIs are only secure and manageable when they’re visible. A legacy app at T-Mobile, using outdated encryption, was exploited – highlighting the dangers of forgotten endpoints. 

https://www.bleepingcomputer.com/news/security/t-mobile-hacked-to-steal-data-of-37-million-accounts-in-api-data-breach/

2. Insecure design 

Kia discovered the hard way that unsecured API endpoints could allow unauthorized remote access to vehicles. Poor upfront security design led to APIs being misused with minimal information. 

https://vicone.com/blog/now-patched-kia-vulnerabilities-could-have-allowed-remote-control-using-only-a-license-plate-number

3. Knowledge loss and poor documentation 

Twitter (now X) faced major internal disruptions when technical staff were let go without preserving essential knowledge. One internal change took down parts of the platform – proof that tribal knowledge is not a scalable strategy. 

https://www.theverge.com/2023/3/6/23627875/twitter-outage-how-it-happened-engineer-api-shut-down

The growing complexity of API landscapes 

Modern software development has evolved dramatically: 

  • From monoliths to microservices 
  • From waterfall to agile/DevOps 
  • From on-prem to cloud-native, distributed systems 

Each of these transitions introduces new failure points. With microservices, for example, a single application may be split across hundreds of services. This leads to “microservice sprawl,” where system complexity becomes unmanageable. 

A visual metaphor often used is the “microservices death star” – an explosion of interdependent services, each introducing potential failure and maintenance overhead. The following image shows the service graphs of several popular companies roughly a decade ago, when these architectures were still in their infancy. Even then they were incredibly complex systems, and they will only have grown since.

Key stats (State of Software Quality – API 2023): 

https://smartbear.com/state-of-software-quality/api

  • 61% of API growth stems from microservices 
  • 81% of companies operate in multi-protocol environments 
  • 57% use three or more protocols (REST, GraphQL, Kafka, etc.) 

According to the State of Software Quality – API report (2023), there are a number of obstacles to producing APIs, the most common of which is a lack of time. The second largest contributor is a lack of people, followed by a lack of API design skills.

Design skill gaps may contribute to an over-proliferation of microservices. This is problematic: the same report lists having too many APIs or microservices as the sixth biggest obstacle to producing APIs, leading to “microservice sprawl”.

https://smartbear.com/state-of-software-quality/api/challenges/#

The testing dilemma in a microservice world 

Testing in modern systems is harder than ever. A quote from a real-world API team illustrates this: 

“We’ve got GraphQL, REST, Kafka, EventBridge… Testing inside a highly volatile set of integrated environments is extremely challenging. We need faster feedback and more reliability.” 

So, why does traditional testing fall short? 

  • End-to-end testing is flaky, slow, and brittle 
  • Integration environments are expensive and complex to maintain 
  • Unit tests are fast but lack confidence in integration behavior 

The testing pyramid: Balancing speed and confidence 

To balance cost, confidence, and coverage, the testing pyramid offers guidance: 

  • Base: Unit tests – fast and cheap, but low confidence in integration behaviour 
  • Middle: Integration tests – moderate cost, higher confidence in integrations 
  • Top: End-to-end tests – highest confidence, but fragile and slow 

We might choose to shift our integration testing concerns down into unit tests, but mocks based on guesses bring a false sense of confidence for developers.

The role of API specifications 

API specifications like OpenAPI and AsyncAPI have become the standard for documenting APIs: 

  • OpenAPI for RESTful services 
  • AsyncAPI for event-driven architectures 

These specifications serve as machine- and human-readable contracts. They enable automation in SDK generation, testing, governance and many other aspects of the software development lifecycle. 
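As a concrete illustration, a minimal OpenAPI description of a single endpoint might look like the fragment below; the service name and fields are hypothetical:

```yaml
openapi: 3.0.3
info:
  title: User Service        # illustrative service
  version: 1.0.0
paths:
  /users/{id}:
    get:
      summary: Fetch a single user
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  name:
                    type: string
```

Even a fragment this small is enough to drive mock servers, SDK generation and linting.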

But they come with pitfalls: 

Challenges: 

  • Provider drift: When the implementation diverges from the spec. 
  • Consumer drift: When changes to the spec break real-world consumers. 
  • Versioning chaos: Without insight into actual usage, it’s hard to manage versions effectively. 
  • Lack of governance: Teams building APIs independently, without central standards, leads to inconsistency. 

Tools like SmartBear Spectral (by Stoplight) and Vacuum help enforce governance rules using custom linters for OpenAPI/AsyncAPI specifications. 
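For instance, a small Spectral ruleset can extend the built-in OpenAPI rules and add custom governance checks; the `must-have-contact` rule below is a hypothetical example of such a custom rule:

```yaml
# .spectral.yaml – a sketch of a governance ruleset
extends: ["spectral:oas"]
rules:
  operation-description: error    # promote a built-in rule to an error
  must-have-contact:
    description: Every API must declare an owning team contact
    given: $.info
    severity: error
    then:
      field: contact
      function: truthy
```

Running `spectral lint openapi.yaml` against such a ruleset in CI keeps every team's specs aligned with central standards.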

Enter contract testing: Verifying real integration needs 

Contract testing solves a central problem in API development: verifying that consumers and providers agree on how to interact – without needing to spin up full integration environments.

What is contract testing? 

Contract testing validates: 

  • The consumer’s expectations of the API 
  • The provider’s ability to meet those expectations 

Rather than mocking with guesses, contract tests are based on real interaction contracts.
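A generated contract is plain JSON describing each interaction. A simplified example (the consumer, provider and payload are illustrative) looks roughly like this:

```json
{
  "consumer": { "name": "web-app" },
  "provider": { "name": "user-service" },
  "interactions": [
    {
      "description": "a request for user 42",
      "request": {
        "method": "GET",
        "path": "/users/42"
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": 42, "name": "Alice" }
      }
    }
  ],
  "metadata": {
    "pactSpecification": { "version": "2.0.0" }
  }
}
```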

The Pact workflow: 

  1. Consumers write tests → Pact generates contracts (as JSON) 
  2. Contracts are published to a broker (e.g., PactFlow) 
  3. Providers verify these contracts against real code 
  4. Safe deployments are validated via can-i-deploy tooling 

Contract testing helps shift validation left in the development lifecycle, enabling: 

  • Safer refactoring 
  • Independent deployments 
  • Fewer integration surprises 
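The provider-verification step can be pictured with a dependency-free sketch. This is not the real Pact API, just the shape of the idea: replay each interaction recorded in a contract against the provider and check the response matches the expectation.

```javascript
// A minimal, illustrative contract: one recorded consumer expectation.
const contract = {
  consumer: "web-app",
  provider: "user-service",
  interactions: [
    {
      description: "a request for user 42",
      request: { method: "GET", path: "/users/42" },
      response: { status: 200, body: { id: 42, name: "Alice" } },
    },
  ],
};

// Stand-in for the real provider; in practice this is your API under test.
function providerHandler(request) {
  if (request.method === "GET" && request.path === "/users/42") {
    return { status: 200, body: { id: 42, name: "Alice" } };
  }
  return { status: 404, body: {} };
}

// Verify every interaction: the provider must meet each consumer expectation.
function verifyContract(pact, handler) {
  return pact.interactions.map((interaction) => {
    const actual = handler(interaction.request);
    const expected = interaction.response;
    const ok =
      actual.status === expected.status &&
      JSON.stringify(actual.body) === JSON.stringify(expected.body);
    return { description: interaction.description, ok };
  });
}

console.log(verifyContract(contract, providerHandler));
```

Real Pact tooling does this against a running provider over HTTP, with matching rules rather than exact equality, but the contract-in, pass/fail-out flow is the same.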

From code to design-first: Bidirectional contract testing 

If your team uses OpenAPI or AsyncAPI specs, Pact now supports bidirectional contract testing. This means: 

  • You can verify the API spec against real-world usage 
  • You reduce onboarding time for teams using design-first approaches 

Pact contracts must be both concrete (behavioral) and valid subsets of your spec. This helps eliminate provider drift while maintaining human-readable documentation. 
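The “valid subset” idea can be sketched as follows. This toy check only compares paths, methods and status codes – real bidirectional contract testing also validates request and response bodies against the spec's schemas:

```javascript
// An OpenAPI-style spec fragment (illustrative).
const spec = {
  paths: {
    "/users/{id}": {
      get: { responses: { 200: {}, 404: {} } },
    },
  },
};

// Match a concrete path like /users/42 against a template like /users/{id}.
function pathMatches(template, concrete) {
  const t = template.split("/");
  const c = concrete.split("/");
  if (t.length !== c.length) return false;
  return t.every((seg, i) => seg.startsWith("{") || seg === c[i]);
}

// A pact interaction is covered if its path, method and status appear in the spec.
function interactionCoveredBySpec(spec, interaction) {
  for (const [template, ops] of Object.entries(spec.paths)) {
    if (!pathMatches(template, interaction.request.path)) continue;
    const op = ops[interaction.request.method.toLowerCase()];
    if (!op) return false;
    return interaction.response.status in op.responses;
  }
  return false;
}

const interaction = {
  request: { method: "GET", path: "/users/42" },
  response: { status: 200 },
};

console.log(interactionCoveredBySpec(spec, interaction)); // true
```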

This now extends into SmartBear API Hub, giving API architects and consumers the ability to edit API specifications and determine impacted consumers directly in their editor. 

What’s next? AI, agentic consumers, and Arazzo 

AI-generated contract tests 

To lower the barrier of entry, AI-generated contract testing can scaffold tests based on: 

  • OpenAPI specs 
  • Code annotations 
  • Patterns learned from 10+ years of contract testing best practices 

Maximising the value of end-to-end testing, whilst addressing pitfalls 

When we consider our test pyramid, we get the most confidence from activities performed higher up the stack. End-to-end tests ensure the core journeys in our system work as expected, but they require a large coordination effort to set up. 

These activities are normally driven through a browser, which is very costly in terms of time and flakiness due to the remote nature of browser interaction. 

Subcutaneous (under-the-skin) testing allows us to drive user-facing journeys, traditionally performed by the browser, by instead using the API calls that bind them. 

  • ✅ More reliable tests 
  • ✅ High confidence levels 
  • ❌ No leading specification to document these API-focused workflows 
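As a sketch of the subcutaneous style, the journey below is driven by API calls rather than browser clicks. The `api` object is hypothetical, a stand-in for an HTTP client against your real service:

```javascript
// Stand-in endpoints; in a real test these would be fetch()/HTTP calls.
const api = {
  async login(username, password) {
    if (username === "User1" && password === "Password1") {
      return { status: 200, token: "fake-token" };
    }
    return { status: 401 };
  },
  async getAccount(token) {
    return token === "fake-token"
      ? { status: 200, account: { owner: "User1" } }
      : { status: 403 };
  },
};

// The "log in, then view account" journey a browser test would cover,
// expressed as the API calls that bind it.
async function loginJourney() {
  const session = await api.login("User1", "Password1");
  if (session.status !== 200) throw new Error("login failed");
  const account = await api.getAccount(session.token);
  if (account.status !== 200) throw new Error("account fetch failed");
  return account.account.owner;
}

loginJourney().then((owner) => console.log(owner)); // logs "User1"
```

The journey is the same one a browser test would exercise, without the cost and flakiness of driving a UI.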

Agentic workflows & Arazzo 

With AI agents poised to become major API consumers, workflow-level testing will be essential. The new Arazzo specification (by the OpenAPI Initiative) defines workflows and user journeys in deterministic, testable ways. 

Example use case: 

  • An agent books travel by composing multiple APIs (search, booking, payment) 
  • Arazzo describes this as a sequence of API calls with clear data flow 

This future-focused tooling enables: 

  • Visual flow documentation 
  • Testable orchestrations across APIs 
  • Safe automation in AI-driven environments 
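As a hedged sketch (the operation IDs, outputs and file paths are illustrative, and the syntax is simplified from the Arazzo 1.0 specification), an Arazzo document for the travel-booking flow above might look roughly like this:

```yaml
arazzo: 1.0.0
info:
  title: Book a trip
  version: 1.0.0
sourceDescriptions:
  - name: travelApi
    url: ./openapi/travel.yaml   # illustrative spec location
    type: openapi
workflows:
  - workflowId: bookTrip
    steps:
      - stepId: searchFlights
        operationId: searchFlights
        successCriteria:
          - condition: $statusCode == 200
        outputs:
          flightId: $response.body#/flights/0/id
      - stepId: bookFlight
        operationId: createBooking
        parameters:
          - name: flightId
            in: query
            value: $steps.searchFlights.outputs.flightId
        successCriteria:
          - condition: $statusCode == 201
        outputs:
          bookingId: $response.body#/id
      - stepId: pay
        operationId: payForBooking
        parameters:
          - name: bookingId
            in: path
            value: $steps.bookFlight.outputs.bookingId
        successCriteria:
          - condition: $statusCode == 200
```

Each step's outputs feed the next step's parameters, making the data flow between APIs explicit and testable.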

Final thoughts: Resilient APIs require intentional design and testing 

To evolve your APIs for the future, consider this roadmap: 

  • Use OpenAPI/AsyncAPI to standardize and document 
  • Enforce rules with tools like Spectral 
  • Apply contract testing to decouple integration testing 
  • Use Pact (or equivalent) to enable safe deployments 
  • Prepare for AI consumers and document your E2E journeys with Arazzo workflows 

With thoughtful governance, scalable testing, and a commitment to consumer visibility, your APIs can evolve gracefully over time, allowing your organisation the freedom to adapt to ever-changing markets and demands. 

Pact facts – An interactive history lesson

Did you know Pact is nearly 10 years old?

As the de facto leader in contract testing, the ecosystem has grown to be vast; just take a look below.

image

However today, I am going to take you on a little journey of how it came to be, and show you what is to come.

TL;DR

> A lot happens in 10 years. We’ve seen it all here at Pact: the proliferation of microservices; ever-increasing protocols like Protobufs and GraphQL; transports such as gRPC, WebSockets and MQTT; event-driven architectures and data pipelines; and emerging standards such as OpenAPI, AsyncAPI and CloudEvents.

> As we launch our Pact Plugin Framework, bringing new possibilities to the Pact ecosystem, I’d like to invite you to try an interactive history lesson of Pact, from past, to present and beyond!

> Pact and the Pact Plugin Framework will unlock the possibility of testing multiple transport and content types. You will see Pact used for gRPC, Protobuf and CSV based messages. I hope it feeds your imagination of the possibilities; it certainly has for me!

image

If it piques your interest, you should sign up for our upcoming webinar to hear more about our exciting news and what it means for you and the software development community.

The birth of Pact Ruby

Pact was originally written by a development team at realestate.com.au in 2013, born out of trying to solve the problem of how to do integration testing for their new Ruby microservices architecture. They threw together a codebase that was to become the first Pact implementation.

git add . && git commit -m 'Gem skeleton' && git push by James Fraser

Screenshot 2022-11-16 at 14 16 59

Ron Holshausen’s (at the time at DiUS; still one of the present-day maintainers of Pact and co-founder of Pactflow) first commit came shortly after.

Screenshot 2022-11-16 at 14 27 47

A few months later, Beth Skurrie (then at DiUS, still one of the present-day maintainers of Pact and co-founder of Pactflow) joined one of the teams that was working with the Pact authors’ team.

She had recently seen a talk by J.B.Rainsberger entitled "Integration tests are a scam", which promoted the concept of "collaboration" and "contract" tests, so she was immediately interested when she was introduced to Pact.

J. B. has since softened his message, as have we. I think we all mellow as we get older 🙂

Beth’s first commit in Pact Ruby

Screenshot 2022-11-16 at 14 33 02

After trying (as most people do) to convince the authors that the provider should be the contract writer, she was soon convinced by Brent Snook, one of the original authors of Pact, of the value of consumer driven contracts. At this stage, she realised that what was missing in the current implementation was the ability to run the same request under different conditions, and "provider states" were born.

Viva la Pact Broker

Screenshot 2022-11-16 at 14 28 48

What the heck is a Pact Broker anyway, Saf?

The Pact Broker, like Pact itself, was written to solve our own problem: coordinating pact versions between projects.

It is an application that allows you to release customer value quickly and confidently by deploying your services independently and avoiding the bottleneck of integration tests, introducing the Pact Matrix.

It looks a little like this

image

By testing the Pact Matrix, you can be confident to deploy any service at any time, because your standalone CI tests have told you whether or not you are backwards compatible – no “certification environment” needed. And when there are multiple services in a context, this approach scales linearly, not exponentially like the certification environment approach.

Preaching the message

Soon after, Beth decided that the Pact idea was the best thing since sliced bread, and she hasn’t stopped yakking on about it since. Hear Beth, Jon Eaves from REA and Evan Bottcher from ThoughtWorks speak at YOW! 2014 in this YouTube video.

Want a bit more of Beth? We told you she couldn’t stop yakking.

Ron began spreading the message too – read a blog post from 2014 here.

The birth of Pact JVM

Pact spread around the codebases in the wider program of work at realestate.com.au, until it hit its first Java microservice. realestate.com.au had many JVM projects, so a group of DiUS consultants (including Ron Holshausen again) started the pact-jvm project on a hack day.

Screenshot 2022-11-16 at 14 37 32

Ron raised his first issue, https://github.com/pact-foundation/pact-jvm/issues/31, which led to his first PR

Screenshot 2022-11-16 at 14 43 27

You can watch a talk from Ron here talking about pact and Pact JVM

Like all grown up frameworks, processes are needed, and a Pact Specification was born.

Beth penned the first Pact test cases, which came to be Pact Specification v1.0.0

Screenshot 2022-11-16 at 14 45 42

It was at this stage that the authors realised that the Rubyisms in the format were going to have to be replaced by a non-language-specific format, and the idea of the v2 pact specification arose on Mar 27, 2014, though it would take a while (just over a year) before it became reality.

Screenshot 2022-11-16 at 14 53 18

Soon it became obvious that Javascript UIs would benefit greatly from using Pact with their backend APIs.

After tossing around the idea of implementing Pact yet again in another language, a decision was made to wrap the Ruby implementation (which was packaged as a standalone executable) to avoid the maintenance burden and potential of implementation mismatches. This became the pattern that was used for most of the following Pact implementations. Each language implemented a Pact DSL and mock service/verifier client, and called out to the Ruby mock service process/verifier in the background. The original Ruby JSON syntax was often used between the native clients and the mock service, as it was simpler to implement, however, the mock service took care of writing the actual pact in the v2 format.

The birth of Pact JS

Three versions of Pact-JS have existed. Fuying created the first commit of DiUS/pact-consumer-js-dsl, and a familiar face, Beth, popped along for her first commit.

A few days later, DiUS/pactjs0 was created, with the first commit by Jeff Cann. Ron dropped his first commit, ultimately deprecating it a little while later.

Enter Matt Fellows, dropping his first commit. A man of many talents, Matt is still one of the present day maintainers of Pact, as well as co-founding Pactflow.

It’s funny: JavaScript is akin to the bus service, you wait for ages and then three turn up at once 🤯.

Enter the still-current library, Pact-JS. Its first commit was by Tarcio Saraiva.

A few months later, Pact-JS became the sole library going forward.

This multi-language capability gave us the ability to start building cross-platform contract testing suites, as demonstrated below with JSON/HTTP interactions in laser focus

image

You can try out HTTP-based Pact in our interactive tutorial here, in either Java or JavaScript.

Pact proliferates – Lead by example

> Since the implementation of the v2 format, newer features have been added, and the v3 and v4 formats add support for handing multiple provider states, messaging, and ‘generators’.

One of the strengths of Pact is its specification, allowing anybody to create a new language binding in an interoperable way. Whilst this has been great at unifying compatibility, the sprawl of languages makes it hard to add significant new features/behaviour into the framework quickly (e.g. GraphQL or Protobuf support).

Wrapping the Ruby implementation allowed new languages to implement Pact quickly, however, it had its downsides. The standalone package worked by bundling the entire Ruby runtime with the codebase using Travelling Ruby, so it was large (~9MB). The native libraries also had to deal with the mock service process management, which could be fiddly on different platforms. It also made it difficult to run consumer tests in parallel, as each mock service process could only handle one thread at a time. The Ruby implementation was also lagging behind in feature development compared to the JVM, as Beth was spending more time on the Pact Broker.

To provide a single Pact implementation that could be used by all the required languages, the decision was made to create a reference implementation in Rust that could be wrapped by each client language using FFI. The distributable package is orders of magnitude smaller, and it is easier to run tests in parallel and avoid the process-management issues. We have been slowly moving to this Rust core, which solves many of the challenges that bundling Ruby presented.

It is worth noting that the "shared core" approach has largely been a successful exercise in this regard. There are many data points, but the implementation of WIP/Pending pacts was released (elapsed, not effort) in just a few weeks for the libraries that wrapped Ruby. In most cases, an update of the Ruby "binaries", mapping flags from the language specific API to dispatch to the underlying Ruby process, a README update and a release was all that was required. In many cases, new functionality is still published with an update to the Ruby binary, which has been automated through a script.

Beth often refers to the Ruby Goldberg machine, in a nod to Rube Goldberg.

image

We would love your engineering support in bringing efficiencies to the CI/CD processes used in our open source projects – or your artistic skills, if someone fancies drawing a Pact Rube Goldberg machine. Please note this one is a pet-project work in progress, but it does show off message testing in various Pact languages (Java, JS, .NET, PHP, Python & Ruby).

image

Pact Plugin Philosophy

Being able to mix and match protocol, transport and interaction mode would be helpful in expanding the use cases.

Further, being able to add custom contract testing behaviour for bespoke use cases would be helpful in situations where we can’t justify the effort to build into the framework itself (custom protocols in banking such as AS2805 come to mind).

To give some sense of magnitude to the challenge, this table showed some of the Pact deficiencies across popular microservice deployments as of a couple of years ago

https://user-images.githubusercontent.com/53900/103729694-1e7e1400-5035-11eb-8d4e-641939791552.png

So the Pact plugin ecosystem was born: a way to allow new transport types, content matchers/generators and more to be easily added to the Pact framework, without needing to wait for the core maintainers to roll it out. You can create your own, for public, private or commercial consumption!

image

Enough blurb, show me da code

Whilst it may be quite technical for some, others will relish the possibilities this will unlock. If you want something or see a use case, but aren’t quite sure how to make it a reality, try out our demos and give us a shout via 🔗 Canny, our feature request board, or 🔗 Slack.

To prove how easy it was, and as a nice little nod back to the grandmother of Pact, Ruby, your very own devo avo put his money where his mouth is and built his own.

Try out our pact plug-in framework here

> This will allow you to see Pact and the Pact Plugin Framework test multiple transport and content types. You will see Pact used for gRPC, Protobuf and CSV based messages. I hope it feeds your imagination of the possibilities; it certainly has for me!

And to anchor it back to a picture you probably know from our Pact docs, plugins just sit in the middle and help extend the capabilities.

image

Choose Possibilities, Choose plugins, Choose Pact!

A thank you to those who got us here

Standing on the shoulders of giants

How contributing back to the technology community can pay you back in kind.

So I, like many of us in the computer industry, silently or otherwise, suffer from Imposter Syndrome.

As a tester with some technical skills learned well before my Uni years, I felt I wasn’t good enough to write production level code, like the developers.

I came across problems in my day-to-day work, and rather than ask for help from colleagues who might out me as a fraud, I would often run to the internet and seek advice from Dr Google.

He would offer me a course of Stack Overflow, countless blog repos and GitHub repositories chock full of goodies just waiting to be discovered.

As I progressed through my career, the places where I thrived were the places that openly fostered learning and trying new things: stepping outside of your comfort zone, being allowed to fail, to make mistakes and to improve.

It began to make me think: how could I help others in the community, and what would I write?

The imposter syndrome snuck in again. Who would read your posts? They would all laugh at you. You would totally choke in a presentation. You’ll be fired.

I’ve got a mortgage and I like old cars, so I totally need my job.

So the years went on, and I didn’t write anything publicly about tech. It was really weird because those who know me, will know that I am happy to talk about anything.

In my younger years, I was heavily into forums; the first was extreme PC overclocking. I would strap things like refrigeration units to my processors, run various tests and post screenshots and pictures.

As I got older and began to purchase and modify old cars, I would document my adventures, and as I gained knowledge around various subjects, or completed a particular thing, I would write a knowledge base post for others to follow and contribute to.

It was an amazing experience, I met some of my closest friends who will be with me for the rest of my life, some I will meet in another life.

People know me in the car scene not because they’ve met me, but because the information I put out there helped them in some way.

Sure lots of people disagreed with my posts, some did it in other ways, some helped contribute to make improvements and suggestions, but it never caused me an issue. No-one said Saf, you don’t belong here, you don’t know anything about cars. (That bit is true, I still don’t, but I try 🙂 )

So I started to contribute to the testing community. I signed up with the Ministry of Testing, and not long after they had a competition to win a trip to TestBash in New York. I had never been to America; my curiosity was piqued.

The entrance fee? One post on why you want to attend TestBash, and one blog post about your adventures at the event.

So I rolled up my sleeves, and wrote my first tech post that was going to get judged, by testing peers who really have the authority to say you are rubbish, you shall not pass.

I WON >.< Ermergerd!

My employer offered to fund the flight as part of our individual tech budget, so I found myself on a flight to America.

I also started using some open source software for the first time, as I had been tasked with building a testing framework for an API blueprint we could use to roll out across teams. I posted this tweet from the plane

So I had to write a blog post now; I had won the competition and couldn’t possibly flake out.

https://thefriendlytester.co.uk/2015/11/yousaf-nabi-ermahgerd-testbash-new-york.html

I moved on to a new role, and was involved in a greenfield transformation project.

I used Karate, WireMock & Docker to set up a consumer-driven contract testing workflow.

This involved using WireMock alongside Docker to produce mock services, which were used to agree consumer-driven contracts between service producers & consumers.

Mocks were then built to allow autonomous development between squads, and avoid blockers in delivery.

Tests were written against the mock services and run against the developed service, with downstream/upstream services mocked out, to perform component integration testing; early integration testing was run against each PR, initiated via TeamCity.

A shell script would write the test results to each GitHub PR, along with Slack notifications, providing all members of the squad full and early visibility of test coverage and progress.

This proved to be efficient and invaluable; however, some day-to-day constraints around exploratory test data, namely data provisioning, proved to be an issue in our test automation.

It was complex, and it seemed difficult for me to sell the concept wider than my smaller engineering teams, who totally got it.

We had one major legacy provider, and we had a reactive relationship, whereby changes would just happen without us knowing.

I felt quite powerless to make the kind of organisational changes in thinking about quality.

I went on a stress control course, it was absolutely amazing.

I felt like Manny from Black Books after eating the little book of calm.

They told me to concentrate on the things I could control and not worry about the things I couldn’t, so I sacked off my job and went to play with old cars for a living.

So after a few months had elapsed, I realised I missed being around people, and missed the buzz. I got a call from a former boss, who was now at InfinityWorks, who said come along and see how you get on.

That was a massive change for me, moving into a consultancy role, and it escalated quickly: as someone who had been test and quality focused, it was amazing to be able to actually get involved in every aspect of delivery.

I am eternally grateful for the freedom I was provided here, as it has allowed me to reach heights that honestly would never have been possible in my traditional roles.

On my first account, I ran a whiteboard session about quality and testing with our engineers and one of them mentioned Pact.

As I began to look into it, I realised it encapsulated the work I had done in my previous role around a CDC testing framework, and I loved it.

It didn’t do quite what I wanted it to do, however; as it was open source, I was free to fork it and make whatever changes I wanted. I raised so many issues and fixes that I got recognised in a tech blog by one of IW’s top engineers about open source work at InfinityWorks – read the post here.

This was such a massive boost and properly kicked my imposter syndrome into touch.

I joined the Pact slack channel http://slack.pact.io/ which was a lovely home with lots of people all talking about consumer driven contract testing, and the challenges when moving to distributed event based systems. I had lots of awesome discussions, and met lots of new people, and funnily enough met a fair few people I had worked with in the past.

Ministry of Testing have their own slack group too (Join here — https://www.ministryoftesting.com/slack_invite) , where you can ask any question, or find great people always willing to help and talk about all things test. Just don’t forget about the XY problem

One of our clients had a big debate over whether to stick with Selenium or move to Cypress. I had never heard of it before, so I ran some side-by-side evaluations and thought it was pretty slick. I wouldn’t advocate running out and switching all your scripts over, but I loved their focus on developer experience, and on not creating a black-box UI testing tool.

Their documentation is incredibly good – they have fantastic recipes, blog posts and GitHub examples – so I thought I would pay something back to the Cypress community too.

I wrote some blog posts on how to address a couple of issues and workarounds for Cypress – see here for one.

This came back to earn me big props at work, as a customer on another client account who was struggling had come across my blog post, and then got to have me drop in on a video call to help him out.

I helped the Cypress team get Edge working when it switched to Chromium, and helped beta test Firefox, because multi-browser support was a big sticking point for a lot of people. I personally don’t think it’s an issue, and I advocate doing most of your checking activities without spinning up your browser – see this post for reasons why.

Try not to take things personally, both when using open source software, or maintaining it. There are usually many things going on in peoples lives and it only takes a few extra moments to be courteous. Your post might help another person and it’s always nice to get appreciation for something you’ve created, and may have even long forgotten about. Even if it doesn’t it will remind yourself of something you did a few years back, which reduces our cognitive load, and can help reduce stress.

I thought I would pay back in kind, so I started creating some tools for Cypress and Pact (you can check these out via https://npm-stat.com).

Don’t forget to link up with people in your local community too.

You can check places like https://www.meetup.com/ or you can join a specialist community such as https://www.ministryoftesting.com/ who hold regional meetups all over the globe.

So I’ve just checked the NPM stats today, and combined there are just over 24.5 million downloads of my packages. Kind of weird that I was scared to show my code to people for fear of being found out – my code is out there, it’s not very good, but it’s doing something, and I’m still in a job, so phew!

It’s not just me at InfinityWorks who is passionate about quality and testing: we have implemented Pact on several client accounts, and we’ve run training sessions with clients, our consultants and our academy superstars. This allowed us to be listed as Pactflow partners, which brings the potential for commercial success. That wasn’t my aim when I started making contributions, but it is a mega win for my efforts, and those of the rest of our great engineers.

So that all brings me, I suppose, to the point of this post.

I spent a load of my spare time giving little bits back to the community, because there had been so many times I had received help myself and been hugely grateful.

That time has a cost, so instead of working on cars, or spending evenings with the family, I’ve been beavering away at a computer.

Karma works in mysterious ways and every single thing I have done in the community, has resulted in positive outcomes, for me individually, for others, for my company, for other companies.

I got offered the dream job, to work with a company, who are aligned in my wants to create beautiful developer experiences, so that teams can safely deploy changes, and spend more time ensuring they are building the right thing, rather than spending time keeping the lights on.

The company is Pactflow and I will be taking on a Developer Advocate / Community Shepherd role (which is quite a nice label to help my tech identity crisis).

I’m so excited about the future and can’t wait to work with Pact’s awesome pool of open source maintainers, contributors and users and the Pactflow team.

Cypress Edge – Now available for Windows

Supported versions

  • Microsoft Edge for Windows 10 (Canary Build)
  • Microsoft Edge for Windows 10 (Dev Build)
  • Microsoft Edge for Windows 10 (Beta Build)

Instructions for Windows

  1. Download Microsoft Edge version of choice from https://www.microsoftedgeinsider.com/en-us/
  2. Make a new directory
  3. Run set CYPRESS_INSTALL_BINARY=https://github.com/YOU54F/cypress/releases/download/v3.5.0/cypress_win.zip
  4. Run npm init
  5. Run npm install @you54f/cypress --save
  6. Run node_modules/.bin/cypress open --browser edgeDev to open in interactive mode, and setup Cypress.io‘s example project
  7. Run node_modules/.bin/cypress run --browser edgeDev or node_modules/.bin/cypress run --browser edgeCanary to run in command line mode.
  8. Rejoice & please pass back some appreciation with a star on the repository! Thanks 🙂

Dynamically generate data in Cypress from CSV/XLSX

A quick walkthrough on how to use data from Excel spreadsheets or CSV files, in order to dynamically generate multiple Cypress tests.

We are going to use a two-column table with username & password for our example, but in reality this could be any data. We have the following table in CSV & XLSX format.

username password
User1 Password1
User2 Password2
User3 Password3
User4 Password4
User5 Password5
User6 Password6
User7 Password7
User8 Password8
User9 Password9
User10 Password10

And we are going to log in to the following page

https://the-internet.herokuapp.com/login

First we need to convert our XLSX file to JSON with https://github.com/SheetJS/js-xlsx

import { writeFileSync } from "fs";
import * as XLSX from "xlsx";

try {
  // Read the workbook and convert the "testData" sheet to an array of row objects
  const workBook = XLSX.readFile("./testData/testData.xlsx");
  const jsonData = XLSX.utils.sheet_to_json(workBook.Sheets.testData);
  // Write the rows out as a Cypress fixture
  writeFileSync(
    "./cypress/fixtures/testData.json",
    JSON.stringify(jsonData, null, 4),
    "utf-8"
  );
} catch (e) {
  throw new Error(e);
}

or CSV file to JSON with https://www.papaparse.com/

import { readFileSync, writeFileSync } from "fs";
import { parse } from "papaparse";

try {
  // Parsing a string is synchronous, so we can read the rows straight off the result,
  // treating the first row as headers
  const csvFile = readFileSync("./testData/testData.csv", "utf8");
  const csvResults = parse(csvFile, { header: true }).data;
  writeFileSync(
    "./cypress/fixtures/testDataFromCSV.json",
    JSON.stringify(csvResults, null, 4),
    "utf-8"
  );
} catch (e) {
  throw new Error(e);
}

In our Cypress test file, we are going to:

  1. Import our generated JSON file into testData
  2. Loop over each testDataRow inside the describe block, and set the data object with our username & password
  3. Set up a mocha context with a dynamically generated title, unique for each data row
  4. Write a single test inside the it block using our data attributes; it will be executed as 10 separate tests
import { login } from "../support/pageObjects/login.page";
const testData = require("../fixtures/testData.json");
describe("Dynamically Generated Tests", () => {
  testData.forEach((testDataRow: any) => {
    const data = {
      username: testDataRow.username,
      password: testDataRow.password
    };
    context(`Generating a test for ${data.username}`, () => {
      it("should fail to login for the specified details", () => {
        login.visit();
        login.username.type(data.username);
        login.password.type(`${data.password}{enter}`);
        login.errorMsg.contains("Your username is invalid!");
        login.logOutButton.should("not.exist");
      });
    });
  });
});
Voila – Dynamically generated tests from Excel or CSV files! Enjoy

You can extend this further by

  • Manipulating the data in the test script prior to using it in your test, such as shifting a date of birth by an offset
  • Having different outcomes in your test, or running different assertions, based on a parameter in your test data file.
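As a sketch of the first idea – assuming your data file had a hypothetical dateOfBirth column in YYYY-MM-DD format (the sample data above doesn't have one) – you could shift each row before feeding it to the test:

```javascript
// Hypothetical helper – shifts a row's dateOfBirth by a number of years.
// Assumes a dateOfBirth column in YYYY-MM-DD format.
function shiftDateOfBirth(testDataRow, offsetYears) {
  const dob = new Date(testDataRow.dateOfBirth);
  // Use UTC methods so the result is not affected by the local timezone
  dob.setUTCFullYear(dob.getUTCFullYear() + offsetYears);
  return { ...testDataRow, dateOfBirth: dob.toISOString().slice(0, 10) };
}

// e.g. inside the forEach, before building the data object:
// const row = shiftDateOfBirth(testDataRow, 10);
```

Because the shift happens at test-generation time, each generated test still gets its own deterministic title and data.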

A full working example can be downloaded here:- https://github.com/YOU54F/cypress-dynamic-data

git clone git@github.com:YOU54F/cypress-dynamic-data.git

yarn install

To convert Excel files to JSON

make convertXLStoJSON or npm run convertXLStoJSON

  • File:- testData/convertXLStoJSON.ts
  • Input:- testData/testData.xlsx
  • Output:- cypress/fixtures/testData.json

To convert CSV to JSON

make convertCSVtoJSON or yarn run convertCSVtoJSON

  • File:- testData/convertCSVtoJSON.ts
  • Input:- testData/testData.csv
  • Output:- cypress/fixtures/testDataFromCSV.json

To see the test in action

  • export CYPRESS_SUT_URL=https://the-internet.herokuapp.com
  • npx cypress open --env configFile=development or make test-local-gui

Open the script login.spec.ts which will generate a test for every entry in the CSV or XLS (default) file.

If you wish to read from the CSV, in the file cypress/integration/login.spec.ts

Change const testData = require("../fixtures/testData.json"); to

const testData = require("../fixtures/testDataFromCSV.json");

Configuring Cypress to work with iFrames & cross-origin sites.

Currently working Browsers & Modes

  •  Chrome Headed
    •  Cypress UI
    •  Cypress CLI

There are a few considerations when automating your web application with Cypress that you may come across, which may lead you to the Cypress Web Security docs, or to trawling through raised Cypress issues for potential workarounds/solutions.

Problems you may encounter

Cypress Docs – disabling web security

  • Display insecure content
  • Navigate to any superdomain without cross-origin errors
  • Access cross-origin iframes that are embedded in your application

All of the above can be enabled simply by setting chromeWebSecurity to false in your cypress.json:

{
  "chromeWebSecurity": false
}

If you set it in your base cypress.json, you will apply it to every site you test, which may not be ideal: you may want to allow insecure content on your dev machine, but keep security enabled when testing in prod.

See how to configure Cypress per env configuration files

However, we wanted to check a journey that integrates with a 3rd party, and came across some cross-site issues

Uncaught DOMException: Blocked a frame with origin "https://your_site_here" from accessing a cross-origin frame.

So we set chromeWebSecurity to false, and then get this error

Refused to display 'https://your_site_here' in a frame because it set 'X-Frame-Options' to 'sameorigin'.

Looks like these guys had the same issue

Cypress Issue #1763

Cypress Issue #944

So hi-ho, it’s off to docs we go

Chromium Site Isolation Docs

chromium-command-line-switches

We want to disable the following features

  • --disable-features=CrossSiteDocumentBlockingAlways,CrossSiteDocumentBlockingIfIsolating
  • --disable-features=IsolateOrigins,site-per-process
    • IsolateOrigins – require dedicated processes for a set of origins, specified as a comma-separated list.
    • site-per-process – enforces a one-site-per-process security policy: each renderer process, for its whole lifetime, is dedicated to rendering pages for just one site.
      * Thus, pages from different sites are never in the same process.
      * A renderer process's access rights are restricted based on its site.
      * All cross-site navigations force process swaps. <iframe>s are rendered out-of-process whenever the src= is cross-site.

So let's add the following to cypress/plugins/index.js

const path = require('path');

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      args.push("--disable-features=CrossSiteDocumentBlockingIfIsolating,CrossSiteDocumentBlockingAlways,IsolateOrigins,site-per-process");
    }
    return args;
  });
};

We now want to drop the following headers, to allow all pages to be iframed:

  • ‘content-security-policy’,
  • ‘x-frame-options

We can use the Ignore X-Frame headers Chrome extension and load it into our Cypress instance. Download it from https://chrome-extension-downloader.com/ and place it in your cypress/extensions folder, or get the source code directly from https://gist.github.com/dergachev/e216b25d9a144914eae2, saving the files to cypress/extensions/ignore-x-frame-headers

Then add the following to cypress/plugins/index.js

const path = require('path');

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      const ignoreXFrameHeadersExtension = path.join(__dirname, '../extensions/ignore-x-frame-headers');
      args.push(`--load-extension=${ignoreXFrameHeadersExtension}`);
    }
    return args;
  });
};

We can also automate the download of the extension for CI systems.

npm i chrome-ext-downloader --save-dev or yarn add chrome-ext-downloader --dev

put the following in package.json

{
  "scripts": {
    "download-extension": "ced gleekbfjekiniecknbkamfmkohkpodhe extensions/ignore-x-frame-headers"
  },
  "devDependencies": {
    "chrome-ext-downloader": "^1.0.4"
  }
}

Our final cypress/plugins/index.js file, incorporating both changes, will look like this:

const path = require('path');

module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    console.log(config, browser, args);
    if (browser.name === 'chrome') {
      const ignoreXFrameHeadersExtension = path.join(__dirname, '../extensions/ignore-x-frame-headers');
      args.push(`--load-extension=${ignoreXFrameHeadersExtension}`);
      args.push("--disable-features=CrossSiteDocumentBlockingIfIsolating,CrossSiteDocumentBlockingAlways,IsolateOrigins,site-per-process");
    }
    return args;
  });
};

Note:- Since writing this article, the extension has been deleted from the Google extension store, which means that although the extension itself still exists, it can no longer be downloaded with chrome-ext-downloader.

Source code can be found here :- https://gist.github.com/dergachev/e216b25d9a144914eae2

Extension can still be downloaded from https://www.crx4chrome.com/extensions/gleekbfjekiniecknbkamfmkohkpodhe/

If there is enough demand, I will republish the source-code and publish to the chrome web store, with full credits to the original author.

Jest-Pact – A Jest-adaptor to help write Pact files with ease

In previous posts, I have spoken about Pact.io, a wonderful set of tools designed to help you and your team develop smarter, with consumer-driven contract tests.

We use Jest at work to test our TypeScript code, so it made sense to use Jest as our testing framework, to write our Pact unit tests with.

The Jest example in Pact-JS involves a lot of setup, which resulted in a fair bit of cognitive load before a developer could start writing their contract tests.

Inspired by a post by Tim Jones, one of the maintainers of Pact-JS and a member of the DiUS team who built Pact, I decided to build and release an adapter for Jest, which abstracts the Pact setup away from the developer, leaving them to concentrate on the tests.

Features

  •  Instantiates the PactOptions for you
  •  Sets up the Pact mock service before and after hooks, so you don't have to
  •  Assigns a random port and passes it back to you, so tests can run in parallel without port clashes

Adapter Installation

npm i jest-pact --save-dev

OR

yarn add jest-pact --dev

Usage

pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
    // regular pact tests go here
});

Example

Say that your API layer looks something like this:

import axios from 'axios';

const defaultBaseUrl = "http://your-api.example.com"

export const api = (baseUrl = defaultBaseUrl) => ({
     health: () => axios.get(`${baseUrl}/health`)
                    .then(response => response.data.status)
    /* other endpoints here */
})

Then your test might look like:

import { pactWith } from 'jest-pact';
import { Matchers } from '@pact-foundation/pact';
import { api } from 'yourCode';

pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
  let client;
  
  beforeEach(() => {
    client = api(provider.mockService.baseUrl)
  });

  describe('health endpoint', () => {
    // Here we set up the interaction that the Pact
    // mock provider will expect.
    //
    // jest-pact takes care of validating and tearing 
    // down the provider for you. 
    beforeEach(() =>
      provider.addInteraction({
        state: "Server is healthy",
        uponReceiving: 'A request for API health',
        willRespondWith: {
          status: 200,
          body: {
            status: Matchers.like('up'),
          },
        },
        withRequest: {
          method: 'GET',
          path: '/health',
        },
      })
    );
    
    // You also test that the API returns the correct 
    // response to the data layer. 
    //
    // Although Pact will ensure that the provider
    // returned the expected object, you need to test that
    // your code receives the right object.
    //
    // This is often the same as the object that was 
    // in the network response, but (as illustrated 
    // here) not always.
    it('returns server health', () =>
      client.health().then(health => {
        expect(health).toEqual('up');
      }));
  });
});

You can make your tests easier to read by extracting your request and responses:

/* pact.fixtures.js */
import { Matchers } from '@pact-foundation/pact';

export const healthRequest = {
  uponReceiving: 'A request for API health',
  withRequest: {
    method: 'GET',
    path: '/health',
  },
};

export const healthyResponse = {
  status: 200,
  body: {
    status: Matchers.like('up'),
  },
};

/* the spec file */
import { pactWith } from 'jest-pact';
import { healthRequest, healthyResponse } from "./pact.fixtures";

import { api } from 'yourCode';

pactWith({ consumer: 'MyConsumer', provider: 'MyProvider' }, provider => {
  let client;
  
  beforeEach(() => {
    client = api(provider.mockService.baseUrl)
  });

  describe('health endpoint', () => {

    beforeEach(() =>
      provider.addInteraction({
        state: "Server is healthy",
        ...healthRequest,
        willRespondWith: healthyResponse
      })
    );
    
    it('returns server health', () =>
      client.health().then(health => {
        expect(health).toEqual('up');
      }));
  });
});

Configuration

pactWith(PactOptions, provider => {
    // regular pact tests go here
});

interface PactOptions {
  provider: string;
  consumer: string;
  port?: number; // defaults to a random port if not provided
  pactfileWriteMode?: PactFileWriteMode;
  dir?: string; // defaults to pact/pacts if not provided
}

type LogLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";
type PactFileWriteMode = "overwrite" | "update" | "merge";

Defaults

  • Log files are written to /pact/logs
  • Pact files are written to /pact/pacts

Jest Watch Mode

By default Jest will watch all your files for changes, which means it will run in an infinite loop, as your pact tests generate JSON pact files and log files.

You can get round this by adding watchPathIgnorePatterns: ["pact/logs/*","pact/pacts/*"] to your jest.config.js.

Example

module.exports = {
  testMatch: ["**/*.test.(ts|js)", "**/*.it.(ts|js)", "**/*.pacttest.(ts|js)"],
  watchPathIgnorePatterns: ["pact/logs/*", "pact/pacts/*"]
};

You can now run your tests with jest --watch, and when you change a pact file or your source code, your pact tests will run.

Examples of usage of jest-pact

See Jest-Pact-Typescript, which showcases a full consumer workflow written in TypeScript with Jest, using this adapter

  •  Example pact tests
    •  AWS v4 Signed API Gateway Provider
    •  Soap API provider
    •  File upload API provider
    •  JSON API provider

Examples Installation

  • clone repository git@github.com:YOU54F/jest-pact-typescript.git
  • Run yarn install
  • Run yarn run pact-test

Generated pacts will be output in pact/pacts. Log files will be output in pact/logs.


Slack Reporting for Cypress.io

I've been using Cypress for front-end testing for the last year, executing it in our CI pipeline with CircleCI. CircleCI offers Slack notifications for builds, but it doesn't offer the ability to customise those notifications with build metadata. So I decided to write a Slack reporter that would do the following:

  • Notify a channel when tests are complete
  • Display the test run status (Passed / Failed / Build Failure), plus number of tests
  • Display VCS metadata (Branch Name / Triggering Commit & Author)
  • Display VCS pull request metadata (number and link to the PR)
  • Provide a link to CI build log
  • Provide a link to a test report generated with Mochawesome
  • Provide links to screenshots / videos of failing test runs

The source code is available here :- https://github.com/YOU54F/cypress-slack-reporter

It has been released as a downloadable package on NPM; read on for details of how to get it and how to use it.

As this is an add-on for Cypress, we still need a few pre-requisites

1. Download the npm package direct from the registry

npm install cypress-slack-reporter --save-dev

or

yarn add cypress-slack-reporter --dev

2. Create a Slack incoming webhook URL at Slack Apps

3. Set up an environment variable to hold the webhook created in the last step, and save it as SLACK_WEBHOOK_URL

$ export SLACK_WEBHOOK_URL=yourWebhookUrlHere

4. Add the following in your cypress.json file

{
  ...
  "reporter": "cypress-multi-reporters",
  "reporterOptions": {
    "configFile": "reporterOpts.json"
  }
}

5. Add the following in a newly created reporterOpts.json file

{
  "reporterEnabled": "mochawesome",
  "mochawesomeReporterOptions": {
    "reportDir": "cypress/reports/mocha",
    "quiet": true,
    "overwrite": false,
    "html": false,
    "json": true
  }
}

6. Run cypress in run mode, which will generate a mochawesome test report, per spec file.

7. We now need to combine the separate mochawesome files into a single file, using mochawesome-merge

$ mkdir mochareports && npx mochawesome-merge --reportDir cypress/reports/mocha > mochareports/report-$(date +'%Y%m%d-%H%M%S').json

8. We will now generate our test report with mochawesome, using our consolidated test report

$ npx marge mochareports/*.json -f report-$(date +'%Y%m%d-%H%M%S') -o mochareports

9. We can now run our Slack Reporter, and set any non-default options

$ npx cypress-slack-reporter --help

  Usage: index.ts [options]

  Options:
    -v, --version            output the version number
    --vcs-provider [type]    VCS Provider [github|bitbucket|none] (default: "github")
    --ci-provider [type]     CI Provider [circleci|none] (default: "circleci")
    --report-dir [type]      mochawesome json & html test report directory, relative to your package.json (default: "mochareports")
    --screenshot-dir [type]  cypress screenshot directory, relative to your package.json (default: "cypress/screenshots")
    --video-dir [type]       cypress video directory, relative to your package.json (default: "cypress/videos")
    --verbose                show log output
    -h, --help               output usage information

Our generated Slack reports will contain all of the metadata above, with links through to the build log, test report, and any failure artefacts.

Currently we support CircleCI for CI, and GitHub/Bitbucket VCSs.

For other providers, please raise a GitHub issue or pass --ci-provider none provider flag to provide a simple slack message based on the mochawesome report status.

It is possible to run the slack-reporter programmatically via a script:

// tslint:disable-next-line: no-reference
/// <reference path='./node_modules/cypress/types/cypress-npm-api.d.ts'/>
import * as CypressNpmApi from "cypress";
import {slackRunner}from "cypress-slack-reporter/bin/slack/slack-alert";
// tslint:disable: no-var-requires
const marge = require("mochawesome-report-generator");
const { merge } = require("mochawesome-merge");
// tslint:disable: no-var-requires

CypressNpmApi.run({
  reporter: "cypress-multi-reporters",
  reporterOptions: {
    reporterEnabled: "mocha-junit-reporter, mochawesome",
    mochaJunitReporterReporterOptions: {
      mochaFile: "cypress/reports/junit/test_results[hash].xml",
      toConsole: false
    },
    mochawesomeReporterOptions: {
      reportDir: "cypress/reports/mocha",
      quiet: true,
      overwrite: true,
      html: false,
      json: true
    }
  }
})
  .then(async results => {
    const generatedReport = await generateReport({
      reportDir: "cypress/reports/mocha",
      inline: true,
      saveJson: true,
    });
    // tslint:disable-next-line: no-console
    console.log("Merged report available here:-",generatedReport);
    return generatedReport
  })
  .then(generatedReport => {
    const base = process.env.PWD || ".";
    const program: any = {
      ciProvider: "circleci",
      videoDir: `${base}/cypress/videos`,
      vcsProvider: "github",
      screenshotDir: `${base}/cypress/screenshots`,
      verbose: true,
      reportDir: `${base}/cypress/reports/mocha`
    };
    const ciProvider: string = program.ciProvider;
    const vcsProvider: string = program.vcsProvider;
    const reportDirectory: string = program.reportDir;
    const videoDirectory: string = program.videoDir;
    const screenshotDirectory: string = program.screenshotDir;
    const verbose: boolean = program.verbose;
    // tslint:disable-next-line: no-console
    console.log("Constructing Slack message with the following options", {
      ciProvider,
      vcsProvider,
      reportDirectory,
      videoDirectory,
      screenshotDirectory,
      verbose
    });
    const slack = slackRunner(
      ciProvider,
      vcsProvider,
      reportDirectory,
      videoDirectory,
      screenshotDirectory,
      verbose
    );
     // tslint:disable-next-line: no-console
     console.log("Finished slack upload")

  })
  .catch((err: any) => {
    // tslint:disable-next-line: no-console
    console.log(err);
  });

function generateReport(options: any) {
  return merge(options).then((report: any) =>
    marge.create(report, options)
  );
}

I have been extending the reporter to allow uploading the mochawesome report and Cypress artefacts (screenshots & videos) to an S3 bucket, using the returned bucket links in the Slack report. It is currently working in a PR, but needs adding to the CLI before it can be merged to the master branch.

The new Chromium-based Microsoft Edge for Mac has been leaked — And it works with Cypress, and now you can test it too!

Following on from my previous blog post about getting Cypress working with Microsoft Edge, I have released versions that you can test out yourself 🙂

An example repository here:- https://github.com/YOU54F/cypress-edge

  1. Download Microsoft Edge for Mac (Canary Build) for MacOS here
  2. Make a new directory
  3. Run export CYPRESS_INSTALL_BINARY=https://github.com/YOU54F/cypress/releases/download/v3.2.0-edge.1/cypress-3.2.0-edge.1.zip
  4. Run npm init
  5. Run npm install @you54f/cypress --save
  6. Run node_modules/.bin/cypress open --browser edge to open in interactive mode, and setup Cypress.io‘s example project
  7. Run node_modules/.bin/cypress run --browser edge to run in command line mode.
  8. Rejoice & please pass back some appreciation with a star on the repository! Thanks 🙂

The new Chromium-based Microsoft Edge for Mac has been leaked — And it works with Cypress.

So I've been using Cypress for a while now to test our apps. It's an incredible testing tool, with many features developers will feel at home with, providing incredibly fast and detailed feedback that remote-browser tools cannot compete with.

However, there has been a bone of contention for some: the lack of cross-browser compatibility. For now, it will only work with Chrome and Electron.

Yep, no IE10/11, Firefox, Safari, Opera etc.

Best not delete your favourite Selenium based tool just yet.

However there is some light on the horizon, and from the likes of Microsoft no less.

Rumours floated around late last year that Microsoft was ditching efforts on its budding IE11 replacement Edge in favour of, well, Edge – just based on Chromium this time. You can get it for Windows 10 from Windows Insiders.

If you visit the above page on macOS, you'll see a button asking you to be notified; however, Twitter user WalkingCat posted up links from Microsoft's CDN.

Microsoft Edge for Mac (Canary Build)

Microsoft Edge for Mac (Dev Build)

So I thought I would spin up Cypress and see if I could get it to work with Edge, but it choked on the folder name.

Hmmm, let's rename the app so it doesn't have spaces in it.

So we need to tell Cypress about Edge

It's listed now, good start

Let's fire up the Cypress runner in GUI mode

Result!!!

Let’s run all the integration tests.

As if they all passed first time. How about the CLI?

Sweet! Not bad for a first run! Now we just need to wait for Microsoft to release Chromium Edge to the masses. Hopefully a Linux flavour will be on the horizon – I will keep you posted if so!

Follow the PR to track Cypress & Microsoft Edge – https://github.com/cypress-io/cypress/pull/4203

That's all folks, thanks for reading, and feel free to follow me @ https://github.com/YOU54F for more of my fumblings in code.

Update:- I've now followed this up with another blog post, where I have published a beta version of Cypress with Edge support for testing purposes. See here for the blog post, with a link to an example GitHub repo and installation instructions!