After many years of experience, we have realized that for some of our clients one of the most important parts of an application is the UI – how the application looks visually. We came up with a solution: our Codete tool tracks all visual changes and compares them to the previous state of the application. This helps to prevent unexpected styling changes.

Problem – Full regression testing requires tremendous effort

How can we run a full regression of our platform before each release, to make sure our changes have not unexpectedly broken anything?

One of the biggest challenges was performing full regression tests for a very data-sensitive application. Fintech products are aimed at professionals, so the calculations they present influence the decisions managers make. Ensuring that both the data and the UI layer are correct therefore became our ultimate goal, in order to prevent unexpected bugs and regressions.

A full regression of any web platform is a process that requires tremendous effort and is error-prone. It can be automated in the classical way, covered with test cases and scenarios, but it is extremely hard to cover literally 100% of the application. Doing the full regression manually is usually not a solution either: spending a few days of work on it before each release would be an extreme waste of workforce.

The idea behind our QA platform: QAcheck

First of all, we need to store the valid state of the application so we can check whether anything has changed. Since covering the whole application with test cases is time-consuming, we thought about a different document that already covers the whole application – the sitemap. To make sure we use the proper version as a model, we test the application manually; this takes some time, but we plan to do it only once. Afterwards, we can store the state of the application by taking screenshots of every page: with the sitemap at hand, we can visit every subpage and capture how it is displayed in different browsers.
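For illustration, here is a minimal sketch of that step in Python. It is not the actual QAcheck code – the sitemap URL, output folder and viewport size are placeholders – but it shows the idea: parse a standard sitemap.xml, visit each URL with Selenium, and store one screenshot per subpage.

```python
# A minimal sketch (not the actual QAcheck implementation) of building the
# screenshot base from a sitemap: parse sitemap.xml, visit each URL with
# Selenium, and save one screenshot per subpage.
import xml.etree.ElementTree as ET
from pathlib import Path
from urllib.parse import urlparse

import requests
from selenium import webdriver

SITEMAP_URL = "https://example.com/sitemap.xml"   # placeholder sitemap location
OUTPUT_DIR = Path("screenshots/base")             # placeholder output folder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def page_urls(sitemap_url):
    """Yield every <loc> entry from a standard sitemap.xml."""
    tree = ET.fromstring(requests.get(sitemap_url, timeout=30).content)
    for loc in tree.findall(".//sm:loc", NS):
        yield loc.text.strip()

def capture_base(sitemap_url, output_dir):
    output_dir.mkdir(parents=True, exist_ok=True)
    driver = webdriver.Chrome()          # or any Selenium-supported browser
    driver.set_window_size(1920, 1080)   # a fixed viewport keeps diffs stable
    try:
        for url in page_urls(sitemap_url):
            name = urlparse(url).path.strip("/").replace("/", "_") or "home"
            driver.get(url)
            driver.save_screenshot(str(output_dir / f"{name}.png"))
    finally:
        driver.quit()

if __name__ == "__main__":
    capture_base(SITEMAP_URL, OUTPUT_DIR)
```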

Once we have a base to compare against, every time we want to run a regression we take new screenshots of the application and compare them with the base ones. If we spot a difference, we know that something has changed.
This non-standard approach should be the most efficient one: applications tend to change often, and managing and replacing screenshots is the easiest way to keep the base up to date.

Solution: Screenshot-comparison-based testing with a few twists

First, we were looking for a way to take the screenshots, both to build the base and later during the regression. It is also important to keep in mind that a web application looks different on different browsers and devices, and we want to see the website exactly as the end user does. To achieve that, we use PhantomJS and the Selenium framework with Facebook's WebDriver and the same ChromeDriver as the end user. To mimic a bigger variety of browsers, we used the BrowserStack platform and a few scripts which allowed us to take a full set of screenshots of the different application pages. Once we had those base screenshots, we were ready to perform the regression before the next release.
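As a rough sketch of the multi-browser part, the snippet below drives remote browsers through Selenium's Remote WebDriver the way a BrowserStack-style grid is typically used. The hub URL, credentials and capability names are placeholders that should be checked against BrowserStack's current documentation, and the snippet assumes the Selenium 3-style desired_capabilities API.

```python
# A simplified sketch of capturing the same page on several remote browsers.
# Placeholder credentials and capability names; verify against the provider's
# current documentation. Assumes Selenium 3-style desired_capabilities.
from selenium import webdriver

HUB_URL = "https://USERNAME:ACCESS_KEY@hub-cloud.browserstack.com/wd/hub"

BROWSERS = [  # hypothetical browser matrix
    {"browserName": "Chrome", "os": "Windows", "os_version": "10"},
    {"browserName": "Firefox", "os": "Windows", "os_version": "10"},
]

def capture_on_all_browsers(url, out_prefix):
    for caps in BROWSERS:
        driver = webdriver.Remote(command_executor=HUB_URL,
                                  desired_capabilities=caps)
        try:
            driver.get(url)
            driver.save_screenshot(f"{out_prefix}_{caps['browserName']}.png")
        finally:
            driver.quit()

capture_on_all_browsers("https://example.com/login", "screenshots/base/login")
```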

After making a set of changes, we run the same procedure to create new screenshots of the application and then compare them with the base ones. Any image diff tool that allows pixel-to-pixel comparison can be used for this.
As a result, we produce a list of screens on which any difference was found, down to a single pixel. This indicates that something has changed on those screens and has to be checked.
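As an illustration, the pixel-to-pixel step can be as simple as the following sketch using Pillow (any image diff tool would do; the folder names are assumptions): a screen is flagged whenever the image size or even a single pixel differs from the base.

```python
# A minimal sketch of a pixel-to-pixel comparison step using Pillow.
# A screen is flagged as changed if its size differs or any pixel differs.
from pathlib import Path
from PIL import Image, ImageChops

def changed_screens(base_dir, new_dir):
    """Return the names of screenshots that differ from their base version."""
    changed = []
    for base_path in Path(base_dir).glob("*.png"):
        new_path = Path(new_dir) / base_path.name
        if not new_path.exists():
            changed.append(base_path.name)  # page missing in the new run
            continue
        base_img = Image.open(base_path).convert("RGB")
        new_img = Image.open(new_path).convert("RGB")
        if base_img.size != new_img.size:
            changed.append(base_path.name)
            continue
        diff = ImageChops.difference(base_img, new_img)
        if diff.getbbox() is not None:  # None means the images are identical
            changed.append(base_path.name)
    return changed

print(changed_screens("screenshots/base", "screenshots/current"))
```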

If the difference was caused by new functionality or by a change made to the application intentionally, it should be tested like any other task done during development, and the new screenshot should be used as the base going forward.
If the change was not intentional, it means we have found a bug which might otherwise have reached the production environment.
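In practice the "accept the change" step boils down to replacing the base screenshot with the reviewed one, as in the small sketch below (folder and file names are assumptions, consistent with the earlier snippets):

```python
# A tiny sketch of accepting an intentional change: the reviewed screenshot
# replaces the stored base one, so future regressions compare against it.
import shutil
from pathlib import Path

def promote_to_base(screen_name, new_dir="screenshots/current",
                    base_dir="screenshots/base"):
    """Overwrite the base screenshot with the reviewed, accepted one."""
    shutil.copy2(Path(new_dir) / screen_name, Path(base_dir) / screen_name)

promote_to_base("login.png")  # hypothetical screen accepted after review
```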

Data Science enhancements

Reinforcement learning for page crawler

Our MVP solution was based on the sitemap and Selenium-based crawlers that reach different parts of the application. However, more complex parts, such as multi-step forms, are hard to reach with classical crawling; they require recording or writing automated scripts with tools like Selenium or Cucumber. We would like to explore automating that process: using reinforcement learning to find the different states of the application and the ways to reach them. We need to define good exploration mechanisms, possibly based on Markov decision processes. We will use test cases written for different projects as a base and teach our crawler which typical cases should be covered and how to fill in typical form fields.

As a result, we want to automate the process of creating Selenium scripts covering test cases and scenarios not accessible from the sitemap itself.
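As a toy illustration of the Markov-decision-process idea (not the production crawler), the sketch below runs tabular Q-learning over a hand-written model of a small application: states are pages, actions are clickable elements, and the reward favours reaching pages that have not been visited yet.

```python
# A toy Q-learning exploration sketch. The APP dictionary is a hypothetical,
# hand-written transition model: page -> {clickable element: next page}.
import random
from collections import defaultdict

APP = {
    "/home":       {"nav_login": "/login", "nav_products": "/products"},
    "/login":      {"submit_valid": "/dashboard", "back": "/home"},
    "/products":   {"open_item": "/products/1", "back": "/home"},
    "/products/1": {"back": "/products"},
    "/dashboard":  {"logout": "/home"},
}

ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.2, 200
q = defaultdict(float)  # Q[(state, action)]

def run_episode():
    state, visited = "/home", {"/home"}
    for _ in range(10):  # cap episode length
        actions = list(APP[state])
        if random.random() < EPSILON:
            action = random.choice(actions)                      # explore
        else:
            action = max(actions, key=lambda a: q[(state, a)])   # exploit
        nxt = APP[state][action]
        reward = 1.0 if nxt not in visited else -0.1             # reward new pages
        best_next = max(q[(nxt, a)] for a in APP[nxt])
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        visited.add(nxt)
        state = nxt

for _ in range(EPISODES):
    run_episode()

# Greedy policy learned so far: which click to try first on each page
print({s: max(APP[s], key=lambda a: q[(s, a)]) for s in APP})
```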

Machine Learning application for change classification

Extracting more data from the picture comparison might give us a much better description of the actual bug. The idea is to explore automated recognition of what has changed on a given screen, so that a proper bug ticket can be generated.

  • Difference map: every pixel that changed between the two versions is shown as a map or a short summary. This mode is already implemented and ready to use, built with a conventional approach.
  • Natural language description: instead of a pixel-to-pixel change, the system can show its users which abstract part of the page has changed, e.g. "the login button in the top left corner moved to the left and shrank by 15%". For this, deep learning image captioning seems the most promising approach: Recurrent Neural Networks (LSTMs) combined with Convolutional Networks, supplemented by more conventional programming and/or image processing techniques, should yield good results (a minimal model sketch follows below).

This project can be provided as a SaaS, in which case the customer is provided with a highly available system. Codete's bet here would be Kubernetes with GPU support in the cloud, using Google's Kubernetes Engine; thanks to the multi-instance application approach, continuous updates and fixes during system usage are easily possible (continuous delivery of software). Another option is a customer-ready service run locally on the customer's own hardware. Here Docker with NVIDIA's GPU support plugin should be used; the hardware setup will either be automated or done individually for each project's user by Codete's staff, as CUDA-specific speed-ups are necessary for neural network inference.
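For the captioning part mentioned above, a minimal encoder-decoder sketch might look like the following. This is an assumption on our side, using Keras/TensorFlow with hypothetical vocabulary and caption-length parameters, and it is not a trained or production model: a CNN encodes the screenshot difference and an LSTM decodes a textual description token by token.

```python
# A minimal, untrained encoder-decoder sketch (assumption: Keras/TensorFlow):
# a CNN encodes the "before/after" difference image, an LSTM predicts the next
# caption token given the image features and the tokens decoded so far.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 5000      # hypothetical caption vocabulary size
MAX_CAPTION_LEN = 20   # hypothetical maximum description length

# CNN encoder: pretrained feature extractor applied to the (resized) diff image
cnn = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                        input_shape=(224, 224, 3))
image_input = layers.Input(shape=(224, 224, 3))
image_features = layers.Dense(256, activation="relu")(cnn(image_input))

# LSTM decoder: previous caption tokens -> next-token distribution
caption_input = layers.Input(shape=(MAX_CAPTION_LEN,))
embedded = layers.Embedding(VOCAB_SIZE, 256, mask_zero=True)(caption_input)
decoder_out = layers.LSTM(256)(embedded,
                               initial_state=[image_features, image_features])
next_token = layers.Dense(VOCAB_SIZE, activation="softmax")(
    layers.concatenate([decoder_out, image_features]))

model = Model(inputs=[image_input, caption_input], outputs=next_token)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```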

 

Get started with our open source QAcheck platform. It’s free.

Project Manager

I am a Project Manager with multiple international projects completed over the last years. I am an experienced PHP Symfony developer and a big enthusiast of Agile methodologies.