
Auto-Testing Your EV Certificate To Ensure A Superior User Experience

Why EV Certificates Matter

As opposed to the more common Domain Validation (DV) certificate, Extended Validation (EV) certs offer visitors a much higher level of assurance.

By putting the website’s legal entity name in the URL bar, an EV cert helps you clearly identify who you are dealing with.

Don’t be fooled by the green padlock of the “regular” DV certificate: all it tells you is that your communication with the website is private. Not trustworthy, just private. And it’s been famously said you could be having a private conversation with Satan.

In fact, this is exactly what’s happening. A recent phishing trends and intelligence report reveals a spike in the number of malicious sites that use SSL/TLS certificates. According to PhishLabs, fraudulent sites utilizing DV certificates made up over 10% of all phishing attacks in the first quarter of 2017.

EV verification, by contrast, requires the Certificate Authority (CA) to confirm a business’s legal identity as well as its physical and operational existence before a certificate can be issued, mitigating such threats.

The HTTPS Risks

While good for security, user experience, and SEO, using HTTPS (no matter the certificate type) across your entire site certainly has its risks.

A full-scale HTTP-to-HTTPS migration could saddle you with a host of problems from day one. Mixed content errors, incorrectly installed certificates, faulty redirects, having to retool parts of your backend: the list can go on.

Then there’s the question of ensuring your HTTPS setup is functional 24/7. After all, HTTPS or certificate errors could result in a big fat browser warning that, far from providing extra assurance, would deter visitors and could actually stop your traffic dead.

That’s why HTTPS migration and maintenance should be a well-planned, robust process. A process that relies on automated tools to provide continuous testing, monitoring, and notifications — so you can take action quickly if issues occur.
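
For instance, a minimal sketch of the monitoring piece, assuming the main script we build later in this post is saved as /opt/ev-check/main.sh and a local mailer is available (paths and the address are placeholders), could be a cron entry like this:

# Hedged sketch: run the check every hour and send a notification if any
# comparison failed; adjust the path and address to your setup.
0 * * * *  cd /opt/ev-check && ./main.sh > /tmp/ev-check.log 2>&1; grep -q "Image comparison failed" /tmp/ev-check.log && mail -s "EV check failed" ops@example.com < /tmp/ev-check.log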

Why Black-Box Testing Is Sometimes Best

As discussed above, one of the good reasons to use EV certs is that they instill a greater sense of trust in users, since the browser will highlight part of the URL bar in green. The question is, how do you make sure all your critical landing pages have your company name next to the padlock, without doing a lot of manual checking?

Of course, you could automate this by using some type of HTTP client to load the page, grab all the requests sent, and examine their protocol. In theory, if all requests are served over HTTPS, you should be good. But this type of lower-level white-box test tells you nothing about what the user actually sees in the URL bar, which matters for EV certificates.
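
For illustration, a very rough sketch of that white-box idea (the URL is a placeholder) could be as simple as fetching the page and flagging hard-coded http:// references; note that it misses requests triggered by scripts and, as mentioned, says nothing about the URL bar:

# Rough white-box sketch (placeholder URL): flag http:// resource references
# hard-coded in the HTML. Requests triggered by scripts are not covered.
curl -s https://www.example.com/ \
    | grep -Eo '(src|href)="http://[^"]*"' \
    && echo "insecure references found" \
    || echo "no insecure references found"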

In terms of user experience, a better way to address this problem is to mimic real-world user actions. Load the page in a real browser, “look” at the URL bar, identify the green area, and compare it to the expected result (a baseline image).

All of this could be easily done with a bit of custom scripting.

Requirements and Assumptions

For this custom automation solution we’ll use a Debian box with bash, Selenium, and ImageMagick. We’ll also need the latest version of Chrome or Chromium installed. (We could, of course, use any major browser supported by Selenium, but for demo purposes let’s stick to Chrome.)

To get started, simply install the required software packages by running the following commands as root:

apt-get update
apt-get install chromium chromium-driver python-selenium imagemagick xvfb

Then, prepare the baseline image we will compare screenshots against: a 142×20 image of the browser’s padlock area. The baseline image should be placed in the same folder as the scripts below.
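
One way to produce it (a sketch, assuming you already have a full-desktop screenshot of a page you know is served with a valid EV certificate, for example one taken with the helper script below) is to crop the padlock area out with ImageMagick, using the same geometry the main script uses:

# Sketch: crop the 142x20 padlock area out of a known-good desktop screenshot.
# The crop geometry matches the one used in the main script below.
convert known-good-desktop.png -crop "142x20+106+47" baseline.png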

Organizing the Code

Let’s split our code into two scripts.

The helper Python script will:

  1. launch Chrome via the Selenium automation library (inside a virtual X display rather than Chrome’s headless mode, so the real URL bar is rendered);
  2. navigate to the desired URL;
  3. use ImageMagick to grab a full-screen screenshot of the virtual desktop, including the browser and its URL bar (we have to use something like ImageMagick here, as Selenium can only capture the rendered page, not the browser chrome with the URL bar).

Our main shell script will:

  1. grab all the critical landing page URLs (by using some kind of spider, such as wget, or by reading from a text file or a database, depending on your setup and needs; see the wget sketch after this list);
  2. iterate over all the URLs and call the helper script to produce a browser screenshot;
  3. use ImageMagick to compare the browser screenshot with a baseline image;
  4. report any image comparison mismatches and produce diff images highlighting the problem.
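
For the first step, a rough wget-based sketch (assuming https://www.example.com is your site’s entry point and that a shallow crawl covers the critical pages) might look like this:

# Sketch: spider the site with wget (placeholder URL) and collect the unique
# HTTPS URLs it reports into urls.txt, which the main script reads.
wget --spider --recursive --level=2 --no-verbose https://www.example.com 2>&1 \
    | grep -Eo 'https://[^" ]+' | sort -u > urls.txt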

The Helper Script

Let’s make a Python script and call it walker.py. Here’s what a simplified yet complete version might look like:

#!/usr/bin/python2.7

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import random
import getopt
import sys
import time
import subprocess


def print_usage():
    print "Usage examples: \n\t" + \
        sys.argv[0] + ' --url=http://example.com --out=/tmp/file.png' + "\n"


# See: https://docs.python.org/2/library/getopt.html
try:
    opts, args = getopt.getopt(
        sys.argv[1:], 'u:o:h', ['url=', 'out=', 'help'])
except getopt.GetoptError:
    print_usage()
    sys.exit(1)

url = None
out = None

for opt, arg in opts:
    if opt in ('-u', '--url'):
        url = arg
    elif opt in ('-o', '--out'):
        out = arg
    elif opt in ('-h', '--help'):
        print_usage()
        sys.exit(0)
    else:
        print_usage()
        sys.exit(1)

# check arguments:
if not url:
    print 'URL not provided'
    print_usage()
    sys.exit(1)
elif not out:
    print 'Output file not provided'
    print_usage()
    sys.exit(1)

print 'URL: ' + url
print 'Output file: ' + out

# main flow:
chrome_options = Options()
chrome_options.add_argument('disable-infobars')

# NB: We do not want Chrome to ignore SSL errors.
chrome_options.add_experimental_option(
    "excludeSwitches", ["ignore-certificate-errors"])

browser = webdriver.Chrome(
    chrome_options=chrome_options,
    service_args=["--verbose", "--log-path=/tmp/chromedriver.log"])

# maximize the window so the URL bar is in a predictable position at all times:
browser.maximize_window()

# load the URL:
browser.get(url)

# wait a little just in case:
time.sleep(random.randint(1, 3))

# make desktop screenshot:
subprocess.check_call(["import", '-window', 'root', out])

# exit browser:
browser.quit()

As you can see, the script takes two parameters: page URL and output image file name.

Note that we maximize the browser window to make sure the URL bar with the area that should be highlighted in green is in a predictable position at all times. This is required for accurate cropping with ImageMagick later on.
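
For example, a single check against a placeholder URL could be run like this, inside a virtual X display (which is exactly what the main script below automates):

# Example invocation (placeholder URL); the virtual X display lets the desktop
# screenshot work on a server with no physical display attached:
xvfb-run \
    --auto-servernum \
    --server-args="-ac -screen 0 800x600x24 -nolisten tcp" \
        python2.7 ./walker.py --url="https://www.example.com" --out=/tmp/example.png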

The Main Script

For the sake of simplicity, let’s assume our main script will read URLs from a plain text file. Let’s loop over the URLs, grab screenshots using walker.py, and compare them to our baseline image:

#!/bin/bash

set -u

# variables:
in_file="urls.txt"
out_dir="./images"
baseline_image="baseline.png"

# clean up:
test -d "$out_dir" && \
    find "$out_dir" -type f -name '*.png' -delete && \
        find "$out_dir" -type d -empty -delete

# main flow:
mapfile -t urls < "$in_file"
for u in "${urls[@]}"; do

    # prepare destination:
    out_file=$(tr '/:' '_' <<< "$u")
    out_page="$out_file"
    mkdir -p "$out_dir/$out_page"

    # grab screenshot of desktop:
    out_file="$out_dir/$out_page/$out_file".png
    xvfb-run \
        --auto-servernum \
        --server-args="-ac -screen 0 800x600x24 -nolisten tcp" \
            python2.7 ./walker.py --url="$u" --out="$out_file"
    xvfb_rv=$?
    [ "$xvfb_rv" -gt 0 ] && { echo "There was an error running walker"; exit $xvfb_rv; }

    # crop screenshot:
    test_image="${out_file%.png}"+cropped.png
    convert "$out_file" -crop "142x20+106+47" "$test_image"

    # compare screenshot:
    compare -dissimilarity-threshold 1 -fuzz '1%' -metric AE \
        -highlight-color red "$baseline_image" \
            "$test_image" "${out_file%.png}"+diff.png
    compare_rv=$?

    echo ""
    if [ "$compare_rv" -gt 0 ]
        then echo "Image comparison failed"
        else echo "Image comparison completed successfully"
    fi

    # copy baseline image:
    cp "$baseline_image" "$out_dir/$out_page/"

    # generate a side-by-side diff with labels:
    if [ "$compare_rv" -gt 0 ]; then
        # annotate images:
        label="baseline"
        for i in "$baseline_image" "$test_image"; do
            convert "$i" -background white -font 'Open-Sans' -fill black label:"$label" +swap -gravity center -append "$out_dir/$out_page/$(basename "${i%.png}"+label.png)"
            [ "$label" = "baseline" ] && label="current"
        done

        # make diff image:
        convert \
            "$out_dir/$out_page/${baseline_image%.png}"+label.png \
            "$out_dir/$out_page/$(basename "${test_image%.png}"+label.png)" \
                +append -gravity center -background gray -splice 1x0 "${out_file%.png}"+diff+side-by-side.png

        # remove annotated images:
        rm "$out_dir/$out_page/"*label.png
    fi
done

exit

Again, the script should be straightforward. Image comparison and diff generation are done with the tried-and-true ImageMagick, a very powerful toolkit with a ton of options you can use to fine-tune the comparison algorithm if needed.
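
For instance, if the default settings turn out to be too strict or too lenient for your pages, a couple of possible variations (not used in the script above; file names are hypothetical) would be to switch to a different metric or loosen the fuzz factor:

# Possible variations: use a root-mean-squared-error metric instead of the
# absolute error count, or tolerate larger per-pixel color differences.
compare -metric RMSE baseline.png current+cropped.png diff.png
compare -fuzz '5%' -metric AE baseline.png current+cropped.png diff.png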

Understanding the Results

Running the main script above will populate the output folder with image artifacts for every URL, including:

  • the baseline image
  • the screenshot of the browser window
  • the overlay diff image, and
  • the side-by-side diff image (made only if image comparison fails)

The resulting side-by-side comparison images, for example, might look something like this:

This type of comparison makes it easy to spot any and all types of SSL/TLS or HTTPS errors visible to the end-user, including any EV-related problems. If all tests pass with no errors, you know your visitors are getting the best HTTPS user experience possible, on all business-critical pages.

The script can be integrated into your test pipeline using any automation solution such as Jenkins. It’s also easy to use some kind of templating library like mustache to generate a JUnit-compatible output from the shell script.
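
For instance, a minimal sketch, without any templating library and reusing the $u and $compare_rv variables from the main script, could append one JUnit-style test case per URL inside the loop:

# Sketch: append one <testcase> element per URL (wrap the file in a <testsuite>
# element after the loop); $u and $compare_rv come from the main script above.
if [ "$compare_rv" -gt 0 ]; then
    echo "  <testcase name=\"$u\"><failure message=\"URL bar does not match baseline\"/></testcase>" >> testcases.xml
else
    echo "  <testcase name=\"$u\"/>" >> testcases.xml
fi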

The generated images can also be archived as build artifacts, making it easy to review failed builds at any time in the future.

Conclusion

If you don’t use an EV certificate, consider getting one. This will provide your visitors or clients with an extra layer of assurance that is hard for fraudsters to duplicate.

And if you have HTTPS deployed across your site, make sure you have automatic tools in place to monitor all the important pages for any problems. With major browsers — especially Chrome — pushing for a more secure web, you want your users to have the best possible experience navigating your site.

Do you use any type of HTTPS-related testing or monitoring? Tell us in the comments.

We are a custom product development firm with a lot of experience in web, mobile, and API testing automation and an in-house framework to handle any automation challenge you can throw at it. If you have a QA project in mind, drop us a line, and let’s see where we could help.
