The new Chromium-based Microsoft Edge for Mac has been leaked — And it works with Cypress, and now you can test it too!

Following on from my previous blog post here about getting Cypress working with Microsoft Edge, I have released versions that you can test out yourself 🙂

An example repository here:- https://github.com/YOU54F/cypress-edge

  1. Download Microsoft Edge for Mac (Canary Build) here
  2. Make a new directory
  3. Run export CYPRESS_INSTALL_BINARY=https://github.com/YOU54F/cypress/releases/download/v3.2.0-edge.1/cypress-3.2.0-edge.1.zip
  4. Run npm init
  5. Run npm install @you54f/cypress --save
  6. Run node_modules/.bin/cypress open --browser edge to open in interactive mode, and set up Cypress.io's example project
  7. Run node_modules/.bin/cypress run --browser edge to run in command line mode.
  8. Rejoice & please pass back some appreciation with a star on the repository! Thanks 🙂

The new Chromium-based Microsoft Edge for Mac has been leaked — And it works with Cypress.

So I've been using Cypress for a while now to test our apps. It's an incredible testing tool, with many features developers will feel at home with, and it provides incredibly fast, detailed feedback that remote-browser tools cannot compete with.

However, there has been a bone of contention for some: the lack of cross-browser compatibility. For now, it only works with Chrome and Electron.

Yep, no IE10/11, Firefox, Safari, Opera etc.

Best not to delete your favourite Selenium-based tool just yet.

However, there is some light on the horizon, and from the likes of Microsoft, no less.

Rumours floated around late last year that Microsoft was ditching efforts on its budding IE11 replacement, Edge, in favour of... well, Edge. Just based on Chromium this time. You can get it for Windows 10 here from Windows Insiders.

If you visit the above page on macOS, you'll see a button asking you to be notified. However, Twitter user WalkingCat posted up links from Microsoft's CDN.

Microsoft Edge for Mac (Canary Build)

Microsoft Edge for Mac (Dev Build)

So I thought I would spin up Cypress and see if I could get it to work with Edge, but it choked on the folder name.

Hmmm, let's rename the app so it doesn't have spaces in it.
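Something like the following should do the trick (the exact app name may differ depending on the build you downloaded):

$ mv "/Applications/Microsoft Edge Canary.app" "/Applications/MicrosoftEdgeCanary.app"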

So we need to tell Cypress about Edge.

It's listed now, good start.

Let's fire up the Cypress runner in GUI mode.

Result!!!

Let’s run all the integration tests.

As if they all passed first time. How about the CLI?

Sweet! Not bad for a first run! Now we just need to wait for Microsoft to release Chromium Edge to the masses. Hopefully a Linux flavour is on the horizon; I will keep you posted if so!

Follow the PR to track Cypress & Microsoft Edge – https://github.com/cypress-io/cypress/pull/4203

That's all folks, thanks for reading, and feel free to follow me @ https://github.com/YOU54F for more of my fumblings in code.

Update :- I’ve now followed up this with another blog post where I have published a beta version of Cypress with Edge support for testing purposes. See here for the blog post with a link to an example GitHub repo and installation instructions!

Securing the Pact Broker with nginx and LetsEncrypt

Dockerised Pact Broker – Secure Implementation

Background & Aim

The cool guys and girls over at DiUS offer a dockerised implementation of the Pact Broker for free! I know, amazing, right? You can get it right now here

However, out of the box the Docker solution is not secure. The Pact team provides an example SSL configuration, utilising nginx as a reverse proxy, to allow access solely via HTTPS.

I have extended this implementation to ensure we are following current industry standards for a secure nginx setup.

Additionally, we will go through the process of generating your own certificates and signing them with a Certificate Authority, to give confidence to your stakeholders and site visitors.

We will only be using open-source tooling because open-source ftw <3.

If you haven't already read my post about using Pact & Swagger to complement your development workflow, you can check it out here.

Prerequisites

Additional Notes

  • This example will use a dockerised postgres instance, as described in the main pact_broker-docker readme, just so you can run the example end-to-end.
  • If you are able to use your cloud provider to sign your certificates, then you may not need to use Let's Encrypt. In my example, I am using a self-managed AWS EC2 instance, which cannot utilise AWS Certificate Manager, as you are unable to download the generated certificates. If you are using Fargate, this is not an issue.

Initial Setup

  1. Install Docker on your instance
  2. Copy the contents of ssl_letsencrypt to your instance and rename to pact-broker
  3. Replace the following occurrences found in the *.sh & docker-compose.yml files in pact-broker & pact-broker/lets-encrypt
    • domain_name – Replace with your registered domain name
    • email_address – Replace with your email address. It should match the registered domain
    • username – Replace with the name of your user (it is assumed your folder will live in /home/username/pact-broker but you can change to suit)
  4. Rename .env.example to .env.

Get to know your environment file

The .env file contains the credentials we will pass into the docker-compose file and ultimately to the pact-broker. More options can be added as per Pact.io documentation, but will also require adding into your docker-compose.yml file.

The database variables are setup to talk to the postgres database loaded via docker-compose.

PACT_BROKER_DATABASE_USERNAME=postgres
PACT_BROKER_DATABASE_PASSWORD=postgres
PACT_BROKER_DATABASE_HOST=postgres
PACT_BROKER_DATABASE_NAME=postgres
PACT_BROKER_BASIC_AUTH_USERNAME=readwrite
PACT_BROKER_BASIC_AUTH_PASSWORD=readwrite
PACT_BROKER_BASIC_AUTH_READ_ONLY_USERNAME=readonly
PACT_BROKER_BASIC_AUTH_READ_ONLY_PASSWORD=readonly

NOTE: Please do not commit your .env file to source control. If you do, consider your credentials compromised and update them straight away.
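For example, you can tell git to ignore it (assuming your .env lives at the repository root):

$ echo ".env" >> .gitignore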

Generate your Signed SSL certificate with Let's Encrypt

Let's Encrypt is an open-source project which allows you to create SSL certificates and sign them against the Let's Encrypt Certificate Authority, in a bid to help make the web safer.

  1. Change into the lets-encrypt folder
  2. Run docker-compose up -d. This will load up a single page application that lets-encrypt can read from, in order to verify that the domain is owned by you.
  3. Run ./makecertsstaging.sh – This will generate sample certificates for you, in lets-encrypt/out
  4. Run ./makecertsinfostaging.sh – This will provide information about the generated certificates for you.
  5. If all the above steps ran ok, we can safely remove the out dir in lets-encrypt/out to remove our staged certificates.
  6. Run ./makecerts.sh – This will generate your signed certificates for you, in lets-encrypt/out
  7. Run ./makecertsinfolive.sh – This will provide information about the generated certificates for you.

Certificates will be output to pact-broker/letsencrypt/out/etc/letsencrypt/live/<domain_name>/

The folder is actually symlinked; the actual certificates live in the archive folder.

Each generated certificate will last for three months; a later section will discuss renewals.

Generate your Diffie-Hellman parameters

  1. Change into the lets-encrypt folder
  2. Run ./gen_dhparam.sh. This will take a while (5-10 minutes), so go make a brew.
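Under the hood, generating Diffie-Hellman parameters is a one-liner with openssl; a sketch of what the script likely wraps (the 4096-bit group size is what makes it slow):

$ openssl dhparam -out dhparams.pem 4096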

Check your nginx configuration

There is a lot going on in the nginx configuration. I will touch on why each component is there, and you can elect to remove as you wish.

In this section, we are going to add headers to every response, to help protect against cross-site scripting and clickjacking attacks

add_header X-XSS-Protection "1; mode=block";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

Remove the nginx version number from responses to avoid leaking implementation details.

server_tokens off;

In the first server block, which is for HTTP requests, we do the following

  • Listen for all requests on port 80. Our server name is the name of the pact broker docker container, as defined in the docker-compose.yml
listen 80 default_server;
server_name broker;
  • Only allow GET and HEAD methods if accessed via port 80. Add in any request methods you wish to allow. I prefer to whitelist, rather than blacklist.
if ( $request_method !~ ^(GET|HEAD)$ ) {
    return 405;
}

Redirect all HTTP requests to HTTPS. We drop any request parameters that were provided, to avoid parameter injection in our redirect to HTTPS.

return 301 https://$host;

The second server block is for our HTTPS requests.

  • Listen on port 443 and enable ssl
listen 443 ssl;
server_name broker;
  • Our certificates are loaded in to the docker-container via the docker-compose.yml volumes section, on the following paths.
ssl_certificate "/etc/nginx/ssl/certs/fullchain.pem";
ssl_certificate_key "/etc/nginx/ssl/certs/privkey.pem";
ssl_dhparam "/etc/nginx/ssl/dhparam/dhparams.pem";
  • Enable only modern TLS protocols. TLSv1 is insecure and TLSv1.1 is weak; for compliance reasons, neither should be used.
ssl_protocols TLSv1.2 TLSv1.3;
  • Only enable known strong SSL ciphers. It is a balancing act between using strong ciphers and compatibility: a site scoring 100% on a cipher test would not be compatible with all devices. The current set scores 95% on the SSL Labs security test.
  • Let’s also tell nginx to use this list
ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
ssl_prefer_server_ciphers on;
  • ecdh provides a nice default for nginx, as not all OpenSSL implementations do it well
  • session tickets don't provide forward secrecy, so we disable them
  • Limit the SSL buffer size (the default is 16k)
  • Maintain SSL sessions for 10 minutes
  • Switch off gzip compression, as it can be vulnerable over TLS (e.g. to the BREACH attack). Enable it if needed.
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_buffer_size 4k;
ssl_session_cache shared:SSL:10m;
gzip off;

Add Strict Transport Security headers

add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
  • I am only enabling the following methods on HTTPS requests.
if ( $request_method !~ ^(POST|PUT|PATCH|GET|HEAD|DELETE)$ ) {
    return 405;
}
  • Whilst implementing webhooks, I noted that URL-based tokens are visible to both read-write and read-only users of the pact-broker, so we block access to the /webhooks URL. This will also block /webhooks/**
  • This shows how you can provide granular control of traffic in nginx; you could, for example, allow only POSTs with an if statement.
error_page 418 = @blockAccess;

location /webhooks {
    return 418;
}
location @blockAccess {
    deny all;
}

The following block is used to proxy all requests received by nginx through to the pact broker.

  • proxy_set_header directives are used to ensure the redirect URLs are correct in the HAL browser, and additionally enforce our secure headers.
  • proxy_hide_header directives avoid leaking details of our pact_broker & Passenger versions.
  • proxy_pass sends requests received by nginx through to the broker.
location / {

    # Setting headers for redirects
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Scheme "https";
    proxy_set_header X-Forwarded-Port "443";
    proxy_set_header X-Forwarded-Ssl "on";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    proxy_set_header X-XSS-Protection "1; mode=block";
    proxy_set_header X-Frame-Options DENY;
    proxy_set_header X-Content-Type-Options nosniff;

    # Hide return headers to avoid leaking implementation details
    proxy_hide_header X-Powered-By;
    proxy_hide_header X-Pact-Broker-Version;

    # Perform the proxy pass to our site
    proxy_pass http://broker:80;
}

Get to know your docker-compose file

Each docker container is connected by a specified network

networks:
  - docker-network

Standard postgres configuration.

postgres:
  image: postgres
  healthcheck:
    test: psql postgres --command "select 1" -U postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: password
    POSTGRES_DB: postgres
  networks:
    - docker-network

The pact broker configuration with basic auth enabled.

  • Variables stored in the .env file are read by docker-compose when starting the containers
  • They are read into the docker-compose file via variables prefixed with $
  • You can add additional supported Pact parameters, either directly in here, or in your env file.
broker_app:
  container_name: 'pact-broker'
  image: dius/pact-broker:latest
  links:
    - postgres
  environment:
    PACT_BROKER_DATABASE_USERNAME: $PACT_BROKER_DATABASE_USERNAME
    PACT_BROKER_DATABASE_PASSWORD: $PACT_BROKER_DATABASE_PASSWORD
    PACT_BROKER_DATABASE_HOST: $PACT_BROKER_DATABASE_HOST
    PACT_BROKER_DATABASE_NAME: $PACT_BROKER_DATABASE_NAME
    PACT_BROKER_BASIC_AUTH_USERNAME: $PACT_BROKER_BASIC_AUTH_USERNAME
    PACT_BROKER_BASIC_AUTH_PASSWORD: $PACT_BROKER_BASIC_AUTH_PASSWORD
    PACT_BROKER_BASIC_AUTH_READ_ONLY_USERNAME: $PACT_BROKER_BASIC_AUTH_READ_ONLY_USERNAME
    PACT_BROKER_BASIC_AUTH_READ_ONLY_PASSWORD: $PACT_BROKER_BASIC_AUTH_READ_ONLY_PASSWORD
    PACT_BROKER_LOG_LEVEL: WARN
  networks:
    - docker-network

The configuration for nginx.

  • We link the pact broker container, called broker_app, but reference it as broker, which is used as our server_name in the nginx configuration.
  • The first volume link loads in our nginx.conf file
  • The next three volumes point at the out directory of lets-encrypt.
  • The last volume loads in the example site we used for certification; it will be used for renewing our certificates, which we will touch on after running our example.
nginx:
  container_name: 'pact-nginx'
  image: nginx:alpine
  links:
    - broker_app:broker
  volumes:
    - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    - ./letsencrypt/out/etc/letsencrypt/live/<domain_name>/fullchain.pem:/etc/nginx/ssl/certs/fullchain.pem
    - ./letsencrypt/out/etc/letsencrypt/live/<domain_name>/privkey.pem:/etc/nginx/ssl/certs/privkey.pem
    - ./letsencrypt/out/etc/letsencrypt/live/<domain_name>/chain.pem:/etc/nginx/ssl/certs/chain.pem
    - ./letsencrypt/dhparam/dhparams.pem:/etc/nginx/ssl/dhparam/dhparams.pem
    - ./letsencrypt/out/renewal:/data/letsencrypt
  ports:
    - "80:80"
    - "8443:443"
  networks:
    - docker-network

Running our example

If you have not already generated your certificates, please do so now

Follow steps 1–7 from the "Generate your Signed SSL certificate with Let's Encrypt" section above.

We can now run our secure broker

  1. Modify the docker-compose.yml file as required.
  2. Run docker-compose up to get a running Pact Broker and a clean Postgres database

Testing your setup

curl -v http://localhost
# This will redirect to https

curl -v http://localhost/matrix
# This will redirect to https root, not matrix

curl -v https://localhost/matrix
# This will redirect to https matrix page
# Note we don't provide the flag -k (insecure) as the website is certified

curl -v http://localhost/webhooks
curl -v https://localhost/webhooks
# This will return a 418 error

Renewing your certificates

We generated certificates with Let's Encrypt; however, they will expire after 3 months. We have aimed to minimise disruption by incorporating the renewal process into our configuration, so we just need to run a script to generate new certificates and bounce our app.

  1. Ensure you are in the root folder, in our example the pact-broker folder
  2. Run ./renewcerts_staging.sh – This will do a dry run of the renewal process, or inform you that you don't need to renew yet.
  3. Run ./renewcerts.sh – This will run the renewal process, generate your new certificates and restart your docker instance

Certificates will be output to pact-broker/letsencrypt/out/etc/letsencrypt/live/<domain_name>/

Note, the folder is the same as for our old certificates, so no change to our docker-compose file is needed. This is because the location is actually symlinked, and the actual certificates live in the archive folder.
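If you would rather not remember to run this by hand, a cron entry on the host can run the renewal script periodically; an illustrative crontab line (adjust the path and schedule to suit):

0 3 1 * * /home/username/pact-broker/renewcerts.sh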

Replace the dockerised postgres DB with a proper instance

You will need to make some minor changes to utilise a non-dockerised Postgres instance.

Update the following environment variables in your .env file with the connection details of your own Postgres instance

PACT_BROKER_DATABASE_USERNAME=postgres
PACT_BROKER_DATABASE_PASSWORD=postgres
PACT_BROKER_DATABASE_HOST=postgres
PACT_BROKER_DATABASE_NAME=postgres

and comment out, or remove the following lines from your docker-compose.yml

# postgres:
#   image: postgres
#   healthcheck:
#     test: psql postgres --command "select 1" -U postgres
#   ports:
#     - "5432:5432"
#   environment:
#     POSTGRES_USER: postgres
#     POSTGRES_PASSWORD: password
#     POSTGRES_DB: postgres

broker_app:
  image: dius/pact-broker
  # links:
  #   - postgres
General Pact Broker configuration and usage

Documentation for the Pact Broker application itself can be found in the Pact Broker Wiki

Troubleshooting

See the Troubleshooting page on the wiki.

“Just because you’re paranoid doesn’t mean they aren’t after you.”

Some simple command-line tricks borrowed from the land of DevOps to help you analyse logs and gain useful insight.

I'm under attack! Or rather, my EC2 instance is (a virtual machine running Ubuntu, hosted in AWS).

It's not a mega worry for me, as it is just a sandbox for testing/development of home projects. However, it is the perfect opportunity to demonstrate some techniques you can use to extract useful information from logs.

My virtual machine has been running for around three months with port 22, which is used for ssh, left publicly accessible on the internet. The logs live in /var/log

We are going to use ls to list the contents of the directory & cat to output our file contents directly to the terminal.

$ ls /var/log/secure*
secure
secure-20181230
secure-20190106
secure-20190307
secure-20190310

$ cat /var/log/secure*

Mar 10 03:40:27 ip-***-**-**-** sshd[29874]: Invalid user test from 189.63.115.134 port 56702
*** MY SCREEN BUFFER INSTANTLY FILLED UP WITH STUFF LIKE THAT ***

Oh crap, I thought. Naughty, naughty h4x0rs.

I knew there were a lot, but not how many, so let's count how many failed login attempts we've had.

cat /var/log/secure* outputs the content of each of the 5 files directly to the terminal (the * is a wildcard and will pattern-match any files called secure with any suffix)

| is called a pipe; it allows you to pass the output of the command on the left of the pipe as input to the command on the right.

grep -e 'Invalid user' is a pattern matcher; it will look for every occurrence of the words Invalid user and output the entire log line, so we can trim our search to only failed login attempts

wc -l counts how many words are in a document, but we are passing the -l flag, which counts lines instead (a single line for each failed login attempt)

$ cat /var/log/secure* 
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: reverse mapping checking getaddrinfo for bd3f7386.virtua.com.br [189.63.115.134] failed - POSSIBLE BREAK-IN ATTEMPT!
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Invalid user test from 189.63.115.134 port 56702
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: input_userauth_request: invalid user test [preauth]
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Received disconnect from 189.63.115.134 port 56702:11: Bye Bye [preauth]
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Disconnected from 189.63.115.134 port 56702 [preauth]

$ cat /var/log/secure* | grep -e 'Invalid user'
Mar 10 03:39:35 ip-***-**--**-** sshd[29842]: Invalid user test from 185.20.197.116 port 59039
Mar 10 03:39:36 ip-***-**--**-** sshd[29844]: Invalid user admin from 14.139.127.91 port 41233
Mar 10 03:39:42 ip-***-**--**-** sshd[29850]: Invalid user ism from 94.132.46.32 port 37902
Mar 10 03:39:43 ip-***-**--**-** sshd[29854]: Invalid user admin from 219.142.28.206 port 39222
Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Invalid user test from 189.63.115.134 port 56702

$ cat /var/log/secure* | grep -e 'Invalid user' | wc -l
16288

Wow, 16,288 attempts.

We can use a combination of 3 commands to extract a piece of information from each log line and de-duplicate it, so we can find out how many different usernames, IP addresses & ports they tried.

awk '{print $8}' will print only the 8th word in each line, which is the username in our case. The below example shows how the string is split.

Mar 10 03:40:27 ip-***-**--**-** sshd[29874]: Invalid user test from 189.63.115.134 port 56702

$1 = Mar
$2 = 10
$3 = 03:40:27
$4 = ip-***-**--**-**
$5 = sshd[29874]:
$6 = Invalid
$7 = user
$8 = test
$9 = from
$10 = 189.63.115.134
$11 = port
$12 = 56702

sort will arrange our list in alphabetical order

uniq will remove duplicates, but requires a pre-sorted list, which is why we run the output through sort first

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' 
daniel
uftp
deploy
sammy
transfer
uftp
uftp
travelblog

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' | sort
zo
zookeeper
zqsun
zule
zv
zwji
zxin10
zxin10
zxin10
zxin10

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' | sort | uniq

zk
zly
zn
znc
zo
zookeeper
zqsun
zule
zv
zwji
zxin10

After we have our sorted list, we can use wc -l again to count the lines. There were 2,816 distinct usernames, from 5,252 different IPs, trying 11,300 different ports.

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' | sort | uniq | wc -l

2816
$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $10}' | sort | uniq | wc -l

5252
$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $12}' | sort | uniq | wc -l

11300

Let's get some insight into the data: we can find out which were the most common usernames, how many times a particular IP address hit us, and which ports were most commonly hit.

uniq -c – the -c flag counts the number of occurrences of each line

sort -nr – the -n flag sorts numerically and -r reverses the order, giving us a descending count

less allows you to read the large output in your own time, rather than watching the matrix flash in front of your eyes.

$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $8}' | sort | uniq -c | sort -nr

894 admin
681 test
261 postgres
244 oracle
238 user
215 ubuntu
196 nagios
186 guest
182 ftpuser
131 deploy
116 pi
115 git
109 ubnt
104 teamspeak
103 support
100 mysql
95 minecraft
92 tomcat
$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $10}' | sort | uniq -c | sort -nr

432 111.230.251.46
328 1.237.178.27
272 142.93.191.55
243 115.231.8.189
219 217.170.205.77
180 198.98.53.194
174 106.12.85.241
152 18.207.226.35
120 178.128.96.131
100 220.191.194.22
$ cat /var/log/secure* | grep -e 'Invalid user' | awk '{print $12}' | sort | uniq -c | sort -nr

9 1920
7 54170
6 58932
6 56054
6 55524
6 53154
6 49414
6 48000
6 47278
6 44800
6 39266
6 37284

Tinkering with the touchbar.

When I started a new job a few months ago, I was given a MacBook Pro 15″ with the infamous touchbar.

I've eventually stopped lamenting the loss of a physical escape key, and have employed some tips & tricks to make it a bit more useful.

No touchbar? touché my friend

If you don't have a touchbar, but do have a Mac, you can use Touché to emulate one on your screen and still use these tricks (bar TouchID)

sudo at your fingertips

$ sudo sublime /etc/pam.d/sudo

Add the following line to the top of the file; it tells PAM that TouchID is sufficient to authenticate sudo

auth sufficient pam_tid.so

And you should be left with something like this

 # sudo: auth account password session
auth sufficient pam_tid.so
auth sufficient pam_smartcard.so
auth required pam_opendirectory.so
account required pam_permit.so
password required pam_deny.so
session required pam_permit.so

Exit and then open your terminal again, attempt to sudo, and voila: sudo at your fingertip.

$ sudo touch

iStats

I've always liked knowing the temps of my CPU/GPU/RAM and fan speeds, stemming from overclocking/water-cooling my PCs, but mainly I wanted to quieten my fans without melting the work laptop.

It is ridiculously loud when it spins up a set of Docker containers or a VM and it just doesn’t need to be, so I use MacsFanControl to control the fan speeds, and iStats to keep an eye on some stats.

$ sudo gem install iStats
$ istats all
Total fans in system:   2
CPU temp: 43.13°C ▁▂▃▅▆▇
Battery health: Good
Fan 0 speed: 3461 RPM ▁▂▃▅▆▇
Fan 1 speed: 3502 RPM ▁▂▃▅▆▇
Cycle count: 66 ▁▂▃▅▆▇ 6.6%
Max cycles: 1000
Current charge: 1927 mAh ▁▂▃▅▆▇ 28%
Maximum charge: 7025 mAh ▁▂▃▅▆▇ 95.8%
Design capacity: 7336 mAh
Battery temp: 30.8°C

Sweet, now let’s see if we can get them on the touchbar.

Apple lets us modify the touchbar to a degree, but not enough to be able to add custom icons and scripts.

We could use BetterTouchTool, but it's not free, and I am loving open-source software, so I managed to find My TouchBar My Rules (MTMR). You can download it with Homebrew.

$ brew cask install mtmr

Once installed you can find it in your Applications folder, run it and your touchbar will run the default config.

You can also do a 3 finger swipe to adjust brightness or a 2 finger swipe to adjust volume.

Let’s have a look at the config

$ sublime ~/Library/Application\ Support/MTMR/items.json

It is a JSON config file, defining each button. You can customise it with a list of predefined button types listed on their homepage, but you can also write AppleScript or your own scripts and associate them with buttons.
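To give a flavour, a trimmed-down items.json might look something like this (the button types shown are illustrative; check the MTMR homepage for the full list):

[
  { "type": "escape", "width": 110 },
  { "type": "brightnessDown" },
  { "type": "brightnessUp" },
  { "type": "volumeDown" },
  { "type": "volumeUp" },
  {
    "type": "appleScriptTitledButton",
    "source": { "inline": "tell application \"Finder\" to return name of startup disk" },
    "refreshInterval": 60
  }
]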

My full config is available here as a GitHub Gist.

It was inspired by the following plugin for BetterTouchTool

iTerm2 Touchbar integration

If you don't have iTerm2, download it with Homebrew.

$ brew cask install iterm2

You can view the iTerm2 docs for the touchbar here.

zsh-iterm-touchbar

With ZSH and a nifty plugin called zsh-iterm-touchbar, we can get our git info and run our npm run scripts in project folders.

If you aren't already using zsh, then install it with Homebrew

brew install zsh

Install OhMyZSH

sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

Clone the repo in your ZSH directory

$ cd ${ZSH_CUSTOM:-$ZSH/custom}/plugins
$ git clone https://github.com/iam4x/zsh-iterm-touchbar.git

Then add the plugin into your .zshrc

$ sublime ~/.zshrc
TOUCHBAR_GIT_ENABLED=true 
plugins=(... zsh-iterm-touchbar)

Restart your terminal and your touchbar should come to life: in a git-enabled folder it shows your branch and repo status, and in a node project it shows the run scripts from your package.json.

List of options

  • F1 – Current directory 👉
  • F2 – Current git branch, press to display all branches and switch between them 🎋
  • F3 – Current git repo status 🔥 / 🙌
    • + — uncommitted changes in the index;
    • ! — unstaged changes;
    • ? — untracked changes;
    • $ — stashed changes;
    • ⇣ — unpulled commits;
    • ⇡ — unpushed commits.
  • F4 – Push to origin branch (git push origin [branch]) ✉️
  • F5 – Display npm-run or yarn-run scripts from package.json

More touch-bar resources

An open-source list tracking touchbar projects

https://github.com/zakrid/awesome-touchbar

Whilst we’re at it

I am using a few zsh plugins which make my life so much easier.

plugins=(
git
dotenv
osx
yarn
npm
node
nvm
docker
iterm2
brew
battery
alias-tips
zsh-iterm-touchbar
zsh-autosuggestions
zsh-syntax-highlighting
)

A full list of the out-of-the-box supported plugins

https://github.com/robbyrussell/oh-my-zsh/tree/master/plugins

You can find more to install

https://github.com/unixorn/awesome-zsh-plugins#plugins

Cross-browser testing, without the browser.

Reading time:- 10 mins

Follow along @ https://github.com/YOU54F/react-ts-testing-example

Consider the following statements that you might come across as a tester on a web app project.

  • Must be compatible with latest browser versions
  • Must be compatible with mobile

Pretty vague, right? There are lots of browsers, and lots of versions. My definition of latest might be different to yours. Reminds me of this gem.

A small list of some of the top browsers, but there are plenty more.

  • Safari
  • Safari iOS
  • Opera
  • Internet Explorer
  • Internet Explorer Mobile
  • Edge
  • Firefox
  • Chrome

When they say mobile, that's a wide range to cover; they didn't even mention latest there. There are quite a few operating systems, and a multitude of versions, by dozens of manufacturers. The stakeholder could expect their dusty old iPad 1 & Windows 8 touchscreen tablets to work.

  • Apple iOS
  • Google Android.
  • Windows Phone
  • Palm OS
  • Symbian OS

Let’s see if we can help scope that requirement down a bit, to something a bit more manageable.

Know Your Audience

Some of the most popular sites for checking the world's browser usage stats are

https://www.w3counter.com/globalstats.php

http://gs.statcounter.com/

Both are excellent for noticing trends in usage, but if you base your cross-browser testing strategy on this alone, you may find that you are targeting the wrong market.

Chrome’s market-share dominance won’t ring true in an office full of locked-down corporate boxes with Internet Explorer.

If your company doesn't already have analytics recording site visitors, insist that they do. It will be pivotal in ensuring your product works for the people who want to use it, by providing tailored facts about which devices are in use for your particular product.

The challenges for web developers and testers alike

With the advent of HTML5, SCSS, and JavaScript ES5 & ES6 with their snazzy new tool-belts, many old browsers (and not so old) are left in the dust. A Promise here, an Object.assign there, and before you know it, you've just stopped anyone accessing your website in IE11 and earlier versions.

Oops.

With so many browsers out there, and more importantly so many versions each supporting different feature-sets, it can be hard to keep track, and even harder to test.

We could buy a load of real devices (logistical nightmare) or test in the cloud with a device provider such as BrowserStack / SauceLabs / TestingBot (pricey).

Both of those approaches, even if we use automation, give us slow feedback. What if we could test in 800+ browsers while our developers are writing their code? It is no substitute for UI testing, but it augments our dev/testing process nicely.

We can give our developers a helping hand, and help mitigate risk as testers, so we don't have to rely so heavily on cross-browser testing via traditional methods.

Can I use it? Yes, you can!

Some great people have compiled maintained matrices of supported HTML/CSS & JavaScript features against browsers, split by version. These sites should be bread-and-butter to your front-end developers, but if not, please point them in their direction.

ECMAScript 5/6/7 compatibility tables

http://kangax.github.io/compat-table

Browser support tables for modern web technologies – HTML5/CSS3

https://caniuse.com

Gotta catch ’em all, gotta catch them early.

Software defects cost money, and how much usually depends on where they are found in the software life-cycle. An ambiguous requirement that could otherwise be firmed up into a set of agreed, testable criteria before any code is written may become something entirely different from what was envisioned, when unveiled.

I said before that we aren't going to use any browsers for our testing, and although we want to test early, we can't test before any code is written, as the technique we are going to apply is static analysis.

Static analysis, also called static code analysis, is a method of computer program debugging that is done by examining the code without executing the program. The process provides an understanding of the code structure, and can help to ensure that the code adheres to industry standards.

Wikipedia

We can utilise some tools, to cross-reference our code against the compatibility databases, targeting only the browsers and versions we are interested in.

Our developer can run these locally to keep them on track, and we can add these checks to our CI pipeline, to ensure that any new code is validated against these databases.

So hopefully by now, you have compiled a list of browsers you want to support, either via analytics, or by discussion and agreement with your stakeholders.

We are going to use a tool called browserslist to define our list of browsers to our tools.

https://github.com/browserslist/browserslist

Add it into your project as a dev dependency

$ yarn add browserslist --dev 
or
$ npm install browserslist --save-dev

Add browserslist config to your package.json

"browserslist": [
">0.2%",
"not dead",
"not ie <= 11",
"not op_mini all"
]

This command will list the currently targeted browsers in your project, based on your defined list above

$ npx browserslist

CSS Checking with StyleLint

Stylelint-no-unsupported-browser-features – This plugin checks if the CSS you’re using is supported by the browsers you’re targeting. It uses doiuse to detect browser support.

https://github.com/ismay/stylelint-no-unsupported-browser-features

Add it to your project (along with stylelint and the standard-config)

$ yarn add stylelint stylelint-config-standard stylelint-no-unsupported-browser-features --dev

or

$ npm install stylelint stylelint-config-standard stylelint-no-unsupported-browser-features --save-dev

Set up your stylelint config

$ touch .stylelintrc
{
  "extends": "stylelint-config-standard",
  "plugins": [
    "stylelint-no-unsupported-browser-features"
  ],
  "rules": {
    "plugin/no-unsupported-browser-features": [true, {
      "severity": "warning"
    }]
  }
}

Run it with

$ npx stylelint "**/*.css"

JavaScript Checking with esLint against caniuse db

eslint-plugin-compat – Lint the browser compatibility of your compiled JS code

https://github.com/amilajack/eslint-plugin-compat

Add it to your project

$ yarn add eslint-plugin-compat --dev
or
$ npm install eslint-plugin-compat --save-dev

Add the eslint config to your package.json

"eslintConfig": {
"parser": "babel-eslint",
"plugins": [
"compat"
],
"rules": {
"compat/compat": "warn"
},
"settings": {
"polyfills": [
""
]
}
}

Run it as follows

$ npx eslint .

Javascript Checking against ECMAScript 5/6/7 compatibility tables – 2 options

Compat.js – Static analysis tool for detecting browser compatibility issues in JavaScript and HTML.

https://github.com/jgardella/compat

Add it to your project

$ yarn add compat-cli --dev
or
$ npm install compat-cli --save-dev

Set up your config file. Note that this tool does not use the browserslist configuration, hence the explicitly listed versions. Replace <path/to/js> with the location of your bundled JavaScript.

$ touch .compatrc.json
{
  "target": "<path/to/js>",
  "ignoreFeatures": ["Object static methods"],
  "jsEnvs": ["ie11", "chrome74", "edge16", "firefox67", "safari12_1", "ios12", "samsung8_2"]
}

Run compat test with options defined in above config

$ npx compat

Passing the supportedEnvs flag will show available browsers

$ npx compat --supportedEnvs

Compat-CLI – ECMAScript 5/6/7 compatibility tables CLI

https://github.com/kamilogorek/compat-cli

Another CLI client that does much the same as the above; I haven't tried this one yet.

So now we’ve found some issues? What can we do?

Polyfill.io – A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape, if you will. The following website will list which polyfills are required for a particular feature.

https://polyfill.io/v3/
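The service can also be consumed directly via a script tag, serving each visiting browser only the polyfills it actually needs; an illustrative example (the feature list is your own choice):

<script crossorigin="anonymous" src="https://polyfill.io/v3/polyfill.min.js?features=Promise%2CObject.assign"></script>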

Autoprefixer – a PostCSS plugin to parse CSS and add vendor prefixes to CSS rules using values from Can I Use. It is recommended by Google and used by Twitter and Alibaba.

https://github.com/postcss/autoprefixer

babel-preset-env – a smart preset that allows you to use the latest JavaScript without needing to micromanage which syntax transforms (and optionally, browser polyfills) are needed by your target environment(s). This both makes your life easier and JavaScript bundles smaller!

https://babeljs.io/docs/en/next/babel-preset-env.html

Modernizr – A JavaScript library that detects HTML5 and CSS3 features in the user’s browser

https://modernizr.com/

Ok, so I don’t need to test on an actual browser, just mash polyfill into everything?

Not quite. Trying to support every browser with lots of polyfills will leave you with a bloated web-app, and no-one likes a slow site; you will have non-functional requirements to meet as well.

We haven't performed any functional testing here on our UI, and there is no substitute for performing automated cross-browser testing of key journeys for confidence.

However, you won't need to rely on them so heavily to keep you informed of browser compatibility and version inconsistency, so they remain lean, quick and useful, rather than the painful entanglement they can become.

Cool story bro, but where's the code?

I have created a simple React website in TypeScript, with some features not supported in older browsers. It incorporates the tools discussed in this article, with a working example you can experiment with and build upon.

It also showcases some other UI testing tools which will be discussed in further articles.

  • Unit-testing React Components with Jest & Enzyme
  • Code-coverage with Istanbul
  • Unit test reporting with Jest-Junit & Jest-Stare
  • UI browser testing with Cypress.io
  • UI test reporting with MochaAwesome
  • Alerting via Slack
  • CI integration with CircleCI

It is available on Github – https://github.com/YOU54F/react-ts-testing-example

Protecting your API development workflows with Swagger/OpenAPI & Pact.io

Reading Time:- Get a cup of tea and pull up a beanbag, I’m going to say 15 minutes

Background

APIs (Application Programming Interfaces) have been around since 2000. I remember utilising my first one back at uni, based in Java, to modify a Lego Mindstorms NXT robotics kit to run an artificial neural network.

I dropped into the world of micro-services 5 years ago, along with a team of great engineers helping to define a service blueprint that could be used as a framework for new service providers.

I took this new-found knowledge to my next employer, who was setting up a new micro-service architecture to serve a web front-end whilst communicating with a legacy backend through an abstraction layer, which was just another API.

We employed consumer-driven contract testing, but with little collaboration from dependencies we ended up with CDC tests solely maintained by consumers, which only served to protect us from changes we had little visibility of. It works far better with collaboration, and offers more benefits than hindrances.

The aim of this article is to provide you with some tools and techniques to aid and promote collaboration with your API development workflows in order to increase the quality of your product. (After all, I am a Software Tester!)

Using Swagger to document your API

I am a big advocate of hand-crafting Swagger definitions of each service as one of the first outputs of our development.

Why?

  • You are possibly in an Agile world and you want to start rapid prototyping.
  • Documentation is lean – everyone has probably laughed at you when you suggest writing the specification for the API right off the bat, rather than having 100 discussions that always have little output (see next point). Let's make it cheap and easy to change, so it's not as onerous.
  • Getting multiple teams together is difficult and often non-productive – your team and the teams you integrate with don’t want to sit in long meetings about boring things like what data-type a field should be.
  • Contracts are key – If we can agree what the request/response will look like, we don’t care about the underlying implementation, as long as it honours them. It also doesn’t mean they are set in stone, as a fixed artefact. They can change as the project develops. It also makes testers happy, as we can plan for integration testing early.
  • Swagger is cheap to write, anyone can do it, for free. – https://editor.swagger.io/

So your team wants to start writing code and the provider hasn't come up with an API contract yet? (They are probably in long, boring data-field style meetings.)

Fine. Let's document the API we want to see with https://editor.swagger.io/, save the JSON or YAML version to your machine, stick it in version control, and give your provider access. Swagger will provide validation and auto-completion as you type, ensuring that the swagger you have written can be developed. (A big bonus, I can assure you!)
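To give a flavour, a minimal hand-crafted definition might look like this (the service and field names are purely illustrative):

swagger: "2.0"
info:
  title: User Service
  version: "1.0.0"
paths:
  /users/{id}:
    get:
      summary: Fetch a single user
      produces:
        - application/json
      parameters:
        - name: id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The requested user
          schema:
            type: object
            properties:
              id:
                type: string
              name:
                type: string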

They are chuffed now, as they have a view of what one of their consumers wants, documented in a validated specification. They can make changes as necessary and, via pull requests, collaboratively develop the specification, ensuring that everyone is in the loop. This collaborative relationship is great to build up, as it will help us immensely in testing that our services can integrate together.

So now we have our API, defined in swagger, written either by us, our provider, or collaboratively. Whichever way you've got it, don't trust it. Even though the Swagger editor provides validation whilst writing, you can still save the file with errors. Now I know you won't do that, but someone else might. Don't worry, we've got it covered.

npm install --global swagger-cli

swagger-cli validate /path/to/swagger.json

Hopefully this will report no errors, but if it does, it will tell you what needs resolving. If you add this into your CI process, then when someone checks in a change to the specification you can catch it on a pull request, ensuring that anyone working from the specification in master has a valid copy.

Consumer Driven Contract testing with Pact

Now we have an API specification, we can start developing our application. However, there are a couple of things we want to consider.

  • What happens if our provider isn’t ready for several months – how can we integration test it?

We can use the notion of CDC testing, also known as consumer-driven contract testing, although I prefer the term collaborative-driven contract testing. The former almost implies the consumer has free rein to drive their own requirements; however, it requires agreement from both parties, and participation, hence collaboration.

To perform our contract testing, we will use Pact.io (https://pact.io)

Pact (noun):

A formal agreement between individuals or parties.

Synonyms: agreement, protocol, deal, contract

Pacts are synonymous with API design, but how often do these get broken?

We might find that after developing our consumer or provider in isolation, when we come to integrate the systems in a test environment, that expectations are not being met, resulting in systems failing to communicate.

We can use Pact and its toolset to generate contracts (pairs of requests/responses, saved as a JSON file) to use for component integration testing, in complete isolation from a service dependency, whilst publishing these contracts to a central broker which can be queried at a later date when the provider service is available.
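A generated contract is just JSON; a trimmed-down sketch of one (consumer, provider and interaction names are illustrative) looks like this:

{
  "consumer": { "name": "my-consumer" },
  "provider": { "name": "my-provider" },
  "interactions": [
    {
      "description": "a request for user 42",
      "providerState": "user 42 exists",
      "request": {
        "method": "GET",
        "path": "/users/42"
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": "42", "name": "Sally" }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}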

Utilising Pact in a consumer development flow
  1. Using the Pact DSL, the expected request and response are registered with the mock service.
  2. The consumer test code fires a real request to a mock provider (created by the Pact framework).
  3. The mock provider compares the actual request with the expected request, and emits the expected response if the comparison is correct.
  4. The consumer test code confirms that the response was correctly understood
  5. Tests pass!
  6. Results published to a broker as a JSON file
  7. Results are tagged with a branch tag for later querying (a test sketch of this flow follows below)
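As a concrete illustration of steps 1–5, here is a minimal consumer test sketch using pact-js with Jest; the consumer/provider names, port, endpoint and HTTP client are all illustrative, and the exact API can vary between pact-js versions:

// Minimal pact-js consumer test sketch (names and endpoint are illustrative)
import { Pact, Matchers } from '@pact-foundation/pact';
import axios from 'axios';
import * as path from 'path';

const provider = new Pact({
  consumer: 'my-consumer',
  provider: 'my-provider',
  port: 8991,
  dir: path.resolve(process.cwd(), 'pacts'), // where the JSON contract is written
});

describe('GET /users/:id', () => {
  beforeAll(() => provider.setup());
  afterAll(() => provider.finalize()); // writes the pact file

  it('returns the requested user', async () => {
    // 1. Register the expected request/response with the mock service
    await provider.addInteraction({
      state: 'user 42 exists',
      uponReceiving: 'a request for user 42',
      withRequest: { method: 'GET', path: '/users/42' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: { id: '42', name: Matchers.like('Sally') }, // flexible matching
      },
    });

    // 2-4. Fire a real request at the mock provider and assert on the response
    const response = await axios.get('http://localhost:8991/users/42');
    expect(response.status).toBe(200);

    // Confirm the mock received exactly the interactions we registered
    await provider.verify();
  });
});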

Always read the manual

In the above example, after the tests pass (step 5), we publish the results to the broker. This presents a problem, thankfully an avoidable one: we might find that the Pact contract tests have drifted from the Swagger specification. It may be that a required field is missing from the request, or the tests expected an array but the swagger specification defined it as a string.

npm install -g swagger-mock-validator

swagger-mock-validator path/to/swagger.yml path/to/pact.json

This tool will tell us where that drift occurs; you can fail your CI step, correct any errors and then re-run your CI build, ensuring that you only publish after successful specification validation.

This gives us a high level of confidence that, as long as our provider sticks to the swagger specification, we should be in a good position come Pact verification when our provider has their first build. We will talk about their part of the Pact testing pipeline shortly.

Fake it till you make it with Pact & Swagger Tools

So I know trying to get your developers to write more unit tests might seem like a hard sell, but what if we could re-use the interactions we generated in a previous test to drive our higher-level component integration & UI tests?

To help reduce the number of interactions that need verifying, you will want to use flexible matching on both requests and responses. (https://github.com/pact-foundation/pact-js#matching)

The mock service is part of Pact's ruby standalone package – https://github.com/pact-foundation/pact-mock_service

Usage:
  pact-stub-service PACT_URI ...

We create a docker image containing the standalone executable and copy in the consumer-provider JSON contract for each service we need to mock. We can then run these locally, and CI will publish them to AWS in order to perform e2e tests whilst we await our provider's first build.
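A minimal sketch of such an image, assuming the pact-mock_service gem (which provides the pact-stub-service executable) and an illustrative contract file name:

# Illustrative Dockerfile: stub a provider from a generated pact contract
FROM ruby:2.6-alpine
RUN apk add --no-cache build-base && gem install pact-mock_service
COPY pacts/my-consumer-my-provider.json /pacts/
EXPOSE 4000
CMD ["pact-stub-service", "/pacts/my-consumer-my-provider.json", "--host", "0.0.0.0", "--port", "4000"]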

So we are now using the Pact contracts that we generated to perform our component integration and e2e testing, with mocks. Everything is green on your dashboard. It looks great, right? Time to relax.

Provider validation process with Pact

Utilising Pact in a provider development flow
  1. The provider retrieves its clients' pacts from the broker
  2. Each request is sent to the provider, and the actual response it generates is compared with the minimal expected response described in the consumer test
  3. Provider verification passes if each request generates a response, that contains at least, the data described in the minimal expected result
  4. Tests pass and the results are published to the broker (a verification sketch follows below)
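A minimal provider verification sketch with pact-js; the provider name, base URL and broker URL are illustrative, and the exact API shape varies between pact-js versions:

// Illustrative provider verification against pacts held in the broker
import { Verifier } from '@pact-foundation/pact';

new Verifier({
  provider: 'my-provider',
  providerBaseUrl: 'http://localhost:8080', // the running provider under test
  pactBrokerUrl: 'https://your-pact-broker.example.com',
  publishVerificationResult: true, // step 4: push results back to the broker
  providerVersion: process.env.GIT_SHA || 'dev',
})
  .verifyProvider()
  .then(() => console.log('Pact verification complete'));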

Using the PACT broker to protect deployments

Each application version should be tagged in the broker with the name of the stage (eg. test, staging, production) as it is deployed.

This enables you to use the following very simple command to check whether the application version you are about to deploy is compatible with every other application version already deployed in that environment

$ pact-broker can-i-deploy --pacticipant PACTICIPANT --version VERSION --to TAG --broker-base-url BROKER_BASE_URL
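For example (the participant name, version and broker URL are illustrative):

$ pact-broker can-i-deploy --pacticipant my-consumer --version 1.0.42 --to production --broker-base-url https://your-pact-broker.example.com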

Take home points

  • Write your API specifications in Swagger
  • Store them in version control and give access to any providers/consumers for collaboration
  • Validate the swagger specifications are correct with swagger-cli
  • Write pact tests in a unit-testing framework of your choice, using one of the many different language implementations of Pact. (We use pact-js & Jest, written in TypeScript)
  • Run the tests during CI to generate the contract
  • Validate the generated pact contract against the swagger specification during CI
  • If it passes, publish the pact contract to the pact broker, tag it with the branch name.
  • If it is part of a development/staging/production deployment, additionally tag it with an environment identifier
  • Consumers can generate mock providers from the pact contract to use in integration / UI / e2e testing
  • Providers can read from the pact broker and test that they meet consumer expectations, as Pact will replay the client requests specified in the contracts against them.
  • All participants can use the can-i-deploy tool at CI time, to check if they are compatible with other consumers/providers in a specific environment.

For a later blog post

  • A follow-up from this blog post, with real code-based examples in a GitHub repository you can clone, fork and play with for real.
  • How Pact can help you avoid supporting multiple versions of APIs and deprecate features/endpoints gracefully.
  • Validating your developed service against the hand-crafted swagger specification that your tester won't stop banging on about.