Most Recommended Software Testing Books to Read in 2020 and Beyond

As we kick off a new decade of software development and testing, and as digital continues to challenge test automation engineers who are trying to fit testing within shorter-than-ever cycles, here are the top recommended software testing books to consider. The order doesn't indicate priority – they are all awesome books and equally recommended:

#1 Agile Testing – Lisa Crispin and Janet Gregory

Readers will come away from this book understanding

  • How to get testers engaged in agile development
  • Where testers and QA managers fit on an agile team
  • What to look for when hiring an agile tester
  • How to transition from a traditional cycle to agile development
  • How to complete testing activities in short iterations
  • How to use tests to successfully guide development
  • How to overcome barriers to test automation

#2 More Agile Testing – Lisa Crispin and Janet Gregory

Main learning objectives from the book:

  • How to clarify testing activities within the team
  • Ways to collaborate with business experts to identify valuable features and deliver the right capabilities
  • How to design automated tests for superior reliability and easier maintenance
  • How agile team members can improve and expand their testing skills
  • How to plan “just enough,” balancing small increments with larger feature sets and the entire system
  • How to use testing to identify and mitigate risks associated with your current agile processes and to prevent defects
  • How to address challenges within your product or organizational context
  • How to perform exploratory testing using “personas” and “tours”
  • Exploratory testing approaches that engage the whole team, using test charters with session- and thread-based techniques
  • How to bring new agile testers up to speed quickly–without overwhelming them

#3 Hands on Mobile App Testing – Daniel Knott

Readers will learn how to

  • Establish your optimal mobile test and launch strategy
  • Create tests that reflect your customers, data networks, devices, and business models
  • Choose and implement the best Android and iOS testing tools
  • Automate testing while ensuring comprehensive coverage
  • Master both functional and nonfunctional approaches to testing
  • Address mobile’s rapid release cycles
  • Test on emulators, simulators, and actual devices
  • Test native, hybrid, and Web mobile apps
  • Gain value from crowd and cloud testing (and understand their limitations)
  • Test database access and local storage
  • Drive value from testing throughout your app life-cycle
  • Start testing wearables, connected homes/cars, and Internet of Things devices

#4 Enterprise Continuous Testing – Wolfgang Platz

Enterprise Continuous Testing: Transforming Testing for Agile and DevOps introduces a Continuous Testing strategy that helps enterprises accelerate and prioritize testing to meet the needs of fast-paced Agile and DevOps initiatives. Software testing has traditionally been the enemy of speed and innovation—a slow, costly process that delays releases while delivering questionable business value. This new strategy helps you test smarter, so testing provides rapid insight into what matters most to the business.

#5 Continuous Testing for DevOps Professionals – Eran Kinsbruner and Leading Market Experts

Continuous Testing for DevOps Professionals is the definitive guide for DevOps teams and covers the best practices required to excel at Continuous Testing (CT) at each step of the DevOps pipeline. It was developed in collaboration with top industry experts from across the DevOps domain from leading companies such as CloudBees, Tricentis, Testim.io, Test.ai, Perfecto, and many more. The book is aimed at all DevOps practitioners, including software developers, testers, operations managers, and IT/business executives.

#6 Accelerate: The Science of Lean Software and DevOps, Nicole Forsgren

How can we apply technology to drive business value? For years, we've been told that the performance of software delivery teams doesn't matter―that it can't provide a competitive advantage to our companies. Through four years of groundbreaking research, including data collected from the State of DevOps reports conducted with Puppet, Dr. Nicole Forsgren, Jez Humble, and Gene Kim set out to find a way to measure software delivery performance―and what drives it―using rigorous statistical methods. This book presents both the findings and the science behind that research, making the information accessible for readers to apply in their own organizations.

#7 Complete Guide to Test Automation, Arnon Axelrod

  • Know the real value to be expected from test automation
  • Discover the key traits that will make your test automation project succeed
  • Be aware of the different considerations to take into account when planning automated tests vs. manual tests
  • Determine who should implement the tests and the implications of this decision
  • Architect the test project and fit it to the architecture of the tested application
  • Design and implement highly reliable automated tests
  • Begin gaining value from test automation earlier
  • Integrate test automation into the business processes of the development team
  • Leverage test automation to improve your organization’s performance and quality, even without formal authority
  • Understand how different types of automated tests will fit into your testing strategy, including unit testing, load and performance testing, visual testing, and more

#8 The DevOps Handbook, Gene Kim, Jez Humble, Patrick Debois, John Willis

More than ever, the effective management of technology is critical for business competitiveness. For decades, technology leaders have struggled to balance agility, reliability, and security. The consequences of failure have never been greater―whether it's the healthcare.gov debacle, cardholder data breaches, or missing the boat with Big Data in the cloud. And yet, high performers using DevOps principles, such as Google, Amazon, Facebook, Etsy, and Netflix, are routinely and reliably deploying code into production hundreds, or even thousands, of times per day. Following in the footsteps of The Phoenix Project, The DevOps Handbook shows leaders how to replicate these incredible outcomes, by showing how to integrate Product Management, Development, QA, IT Operations, and Information Security to elevate your company and win in the marketplace.

#9 The Digital Quality Handbook – Eran Kinsbruner and Industry Experts

As mobile and web technologies continue to expand and drive large organizational business in virtually every vertical and industry, it is critical to understand how to take existing release practices for mobile and web apps to the next level, including the software development life cycle (SDLC), tools, quality, etc. Organizations that are already enjoying the power of digital are still struggling with various challenges that can be related to many factors, such as:

  • SDLC and processes maturity
  • Expanding test coverage to include more non-functional testing, user condition testing, etc.
  • Coping with existing limitations of open source tools and frameworks
  • Sustaining correctly sized and up-to-date mobile test labs
  • Getting proper quality insights from each test cycle, both pre- and post-production
  • Branching wisely cross-platform and cross-feature test suites

#10 Specification by Example: How Successful Teams Deliver the Right Software, Gojko Adzic

Gojko has written a list of great books; this is one of them, but check out his other books as well.

Specification by Example is an emerging practice for creating software based on realistic examples, bridging the communication gap between business stakeholders and the dev teams building the software. In this book, author Gojko Adzic distills interviews with successful teams worldwide, sharing how they specify, develop, and deliver software, without defects, in short iterative delivery cycles.

#11 Clean Code: A Handbook of Agile Software Craftsmanship, Robert C. Martin

Readers will come away from this book understanding

  • How to tell the difference between good and bad code
  • How to write good code and how to transform bad code into good code
  • How to create good names, good functions, good objects, and good classes
  • How to format code for maximum readability
  • How to implement complete error handling without obscuring code logic
  • How to unit test and practice test-driven development

#12 Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Jez Humble and David Farley

The authors introduce state-of-the-art techniques, including automated infrastructure management and data migration, and the use of virtualization. For each, they review key issues, identify best practices, and demonstrate how to mitigate risks. Coverage includes

  • Automating all facets of building, integrating, testing, and deploying software
  • Implementing deployment pipelines at team and organizational levels
  • Improving collaboration between developers, testers, and operations
  • Developing features incrementally on large and distributed teams
  • Implementing an effective configuration management strategy
  • Automating acceptance testing, from analysis to implementation
  • Testing capacity and other non-functional requirements
  • Implementing continuous deployment and zero-downtime releases
  • Managing infrastructure, data, components and dependencies
  • Navigating risk management, compliance, and auditing

#13 The Guide to Software Testability, Ash Coleman and Rob Meaney

Learn practical insights on how testability can help bring teams together to observe, control, and understand the systems they build, enabling them to better meet customer needs and achieve a transparent level of quality and predictability of delivery.

#14 A Practical Guide to Testing in DevOps, Katrina Clokie

A Practical Guide to Testing in DevOps offers direction and advice to anyone involved in testing in a DevOps environment.

#15 Test Automation University, Angie Jones and Applitools

While not a book, a great shout-out goes to my colleague, friend, and co-author in one of the above-mentioned books, Angie Jones, for launching, leading, and building the Test Automation University. This is an additional online resource that complements books and other written material.

Summary

I am sure that there are plenty of other great and practical books, but I went with this list as a start. If you feel that an additional up-to-date book belongs here, reach out to me and I will be more than happy to add it to this list.

Happy Reading!

 

 

Core Pillars of Stable Test Automation Suites

For those who follow me, you already know that to build continuous testing suites that scale, and that last for more than a few software iterations, teams must follow rigorous processes.

When I say teams, I mean developers, test engineers, and business testers.

 

 

This post isn't about balancing the workload between personas, but rather about continuously sharpening the test suites' value to the entire team.

As we close a decade and enter a new one, and with some cool advancements in technologies such as AI/ML, as well as challenging platforms like PWAs, foldables, 5G, and IoT, it's the perfect time to re-evaluate our test management and maintenance processes.

What Causes Test Instabilities?

Especially across the digital landscape, there are quite a few recurring patterns that cause test instabilities or flakiness.

  • There are the scripts and frameworks that we use – in this category we can list object locator strategy, popups (security, system), coding skills, timing and step synchronization, and more.
  • There are backend issues around services, networking, and up-to-date test data and environments.
  • There are lab-related issues that include platforms being unavailable or poorly set up for testing (screen is locked, network is disconnected, app isn't installed, device isn't in a ready state).
  • Lastly, orchestration – this is a tricky root cause, since there are not many early warnings for these kinds of issues until you execute and scale up. Not having a sufficient number of platforms to test against, or not knowing that a platform is already in use, is often hard to detect (but not impossible :))

In addition to the above four major categories, there are of course the software development dynamics, which include advancements in the product, market changes that impact your test scripts and infrastructure, and external dependencies that you cannot always control (network carriers, third-party services that you depend on, etc.).
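
To make the script-and-framework category more concrete, a common culprit is relying on fixed sleeps and brittle locators instead of explicit synchronization. Below is a minimal sketch, using the selenium-webdriver Node.js package, of an explicit wait combined with a stable ID-based locator; the URL and the '#login-button' / '#welcome-banner' locators are hypothetical placeholders.

// Minimal sketch: explicit synchronization with selenium-webdriver (Node.js).
// The URL and the element IDs used below are hypothetical placeholders.
const { Builder, By, until } = require('selenium-webdriver');

(async function stableLoginCheck() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/login');

    // Prefer a stable, unique ID over a brittle XPath built from page layout.
    const loginButton = await driver.wait(
      until.elementLocated(By.id('login-button')), 10000);

    // Wait until the element is actually visible instead of sleeping
    // for a fixed amount of time.
    await driver.wait(until.elementIsVisible(loginButton), 10000);
    await loginButton.click();

    // Synchronize on the expected outcome rather than assuming it is instant.
    await driver.wait(until.elementLocated(By.id('welcome-banner')), 10000);
  } finally {
    await driver.quit();
  }
})();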

Test Automation Certification for Value vs. ROI

While there is no 100% safety net for the above-mentioned root causes, there are a few steps that teams can take and enforce to minimize the risks.

A few recommended tips, which should be revisited every 2-3 software releases/iterations:

  1. Think with the end in mind when you write a test automation script. Will the test provide value more than just once? Which test suites/types will such a test fit (regression, functional, unit, etc.)?
  2. Consider the maintainability of such a test script over time: how difficult will that be, and will it be worth the hassle?
  3. Certify the test script prior to integrating it into any pipeline. Certification should address the stability of the test across multiple platforms ("only works on my machine" is not an option), its execution duration (less than 5 minutes, please), and its value: a test that has detected defects should be scored as a higher-value test than others.
  4. Monitor and measure the impact of your test suites, and course-correct no less than once a quarter!

One example of an external dependency that can impact the performance of your app is the cellular network itself. Assuming you test your mobile app against real cellular networks, you should acknowledge that not every carrier behaves and performs the same, and this may impact your overall results. Knowing this, measuring it, and taking it into account can prevent false negatives and other wrong conclusions.

Lastly, and as this section implies, it is time we stop measuring test automation in ROI and dollars and start measuring clear value and risk mitigation to the business. Test automation's value is far wider than the legacy ROI metrics, which simply covered the cost of test creation and execution regardless of what the executions actually resulted in.

I recently delivered a session about test automation stability at the OnlineTestConf. You can view the recording and slides here.

Power Of Analytics

While a lot of the above might be clear to many practitioners, what I often see in the market is that the time element prevents teams from actually seeing what's wrong with their testing infrastructure. Teams invest a lot of money, resources, and of course time into building a cool set of testing suites; however, once they're done, autopilot steps in, and until something breaks, these suites remain blind spots. This is, again, a legacy practice, and in the modern DevOps/Agile/Continuous Testing era, teams ought to monitor and gain maximum visibility into their test code continuously!

Only when you look into the reports in a smart way can you improve quality, deliver fast and up-to-date feedback, and yes, get some ROI (in the form of value) from your investment.

Imagine you could, on demand, answer questions like:

  • Which are my most problematic test scenarios?
  • Which are my most flaky or buggy platforms (mobile/web)?
  • Which job within my CI is the longest?
  • Which test cases must be retired or placed in a "blocked" bucket for maintenance (due to popups, objects, or other issues)?

Knowing the above answers when, or perhaps even before, you click the RUN button would be a major productivity boost to your entire cycle, wouldn't it?

As a real-life example, Perfecto's reporting uses AI and smart algorithms to scan test reports for pre-defined failure patterns such as popups, elements not found, platforms in use, and others, and it also allows teams to customize these patterns and add their own unique and relevant root causes to better slice and dice the test data. You can think about similar implementations in other tools as well.

Bottom Line

As we wrap up a decade of software development and testing, it's time to take our test automation up a level and inject smart algorithms, processes, and more, so we are always on top of what we build and can improve whenever there is a need.

Happy Testing!

Eliminating Mobile Test Automation Flakiness and More

Mobile testing by definition is an unstable, flaky and unpredictable activity.

Even when you think you have covered all corners and created a "stable" environment, your test cycle often gets stuck due to one or a few items.

In this post, I’ll try to identify some of the key root causes for test automation flakiness, and suggest some preventive actions to eliminate them.

What Can Block Your Test Automation Flow?

From ongoing experience, the key items that often block test automation of mobile apps are the following:

  • Popups – security, available OS upgrades, login issues, etc.
  • Ready state of DUTs – the test meets the device in a wrong state
  • Environment – device battery level, network connectivity, etc.
  • Tools and test framework fit – are you using the right tool for the job?
  • Use of the "right" object identifiers, POM, and automation best practices
  • Automation at Scale – what to automate, on what platforms?

All of the above contribute in one way or the other to the end-to-end test automation execution.

We can divide the above 6 bullets into 2 sections:

  1. Environment
  2. Best Practices

Solving The Environment Factor in Mobile Test Automation

In order to address the test environment contribution to test flakiness, engineers need to have full control over the environment they operate in.

If the test environment and the devices under test (DUTs) are not fully managed, controlled, and secured, the entire operation is at risk. In addition, when we use the term "test environment readiness", it should reflect the following:

  1. Devices are always cleaned up prior to the test execution or are in a known “state”/baseline to the developers and testers
  2. If there are repetitive known popups such as security permissions, install/uninstall popups, OS upgrades, or other app-specific popups, they should be accounted for either in the pre-requisites of the test or should be prevented proactively prior to the execution.
  3. Network instability is often a key cause of unstable testing – engineers need to make sure that the devices are connected to a WiFi or cellular network before test execution starts. This can be done either as a pre-requisite validation of the network or through more generic environment monitoring.
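
As a rough illustration of the first two points, here is a minimal sketch of starting an Appium session with the device reset to a known baseline and common permission popups handled up front. It assumes an Appium server and the WebdriverIO client; the device name, app path, and exact capability mix are placeholders to adapt to your own setup.

// Minimal sketch: starting an Appium session in a known, "ready" device state
// using WebdriverIO. Device name and app path are placeholders.
const { remote } = require('webdriverio');

async function createReadyAndroidSession() {
  return remote({
    hostname: 'localhost',
    port: 4723,
    capabilities: {
      platformName: 'Android',
      'appium:automationName': 'UiAutomator2',
      'appium:deviceName': 'Pixel_Under_Test',
      'appium:app': '/path/to/app-under-test.apk',
      // Reinstall the app so each run starts from a clean, known baseline.
      'appium:fullReset': true,
      // Grant runtime permissions up front so permission popups (Android)
      // do not interrupt the test flow; on iOS, autoAcceptAlerts plays a similar role.
      'appium:autoGrantPermissions': true,
      'appium:newCommandTimeout': 120
    }
  });
}

module.exports = { createReadyAndroidSession };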

 

Following Best Practices

In previous blogs, I addressed the importance of selecting the right testing frameworks and IDEs, as well as leveraging the cloud as part of test automation at scale. After eliminating the risks of the test environment in the above section, it is important to make sure that both developers and test automation engineers follow proper guidelines and practices for their test automation workflow.

Since testing has shifted left towards the development team, it is important that both dev and test align on a few things:

  1. What to automate?
  2. On what platforms to test?
  3. How to automate (best practices)?
  4. Which tools should be used to automate?
  5. What goes into CI, and what is left outside?
  6. What is the role of Manual and Exploratory testing in the overall cycle?
  7. What is the role of Non-Functional testing in the cycle?

The above points (a partial list) cover some fundamental questions that each individual should be asking continuously to assure that their team is heading in the right direction.

Each of the above bullets can be attributed to at least one, if not many, best practices.

  1. To address the key question of what to automate, there is a great scoring approach provided by Angie Jones (a small illustrative sketch follows this list). In her suggested approach, each test scenario is evaluated against a set of metrics that add up to a score. The highest-scoring test cases are great candidates for automation, while the lowest-scoring ones can obviously be skipped.
  2. To address the second question on platform selection, teams should monitor their web and mobile traffic on an ongoing basis, perform market research, and learn from existing market reports/guides that address test coverage.
  3. Answering the question "how to automate" is a big one :). There is more than one thing to keep in mind, but in general, automation should be repetitive, stable, and time- and cost-efficient – if something prevents one or more of these objectives, it's a sign that you're not following best practices. Some best practices revolve around using proper object identifiers; others involve building the automation framework with proper tags and error handling so that when something breaks it is easy to address, and more.
  4. The question around tools and test frameworks is, again, a big one. Answering it right depends on the project requirements and complexity, the application type (native, web, responsive, PWA), and the test types (functional, non-functional, unit). In the end, it is important to have a mix of tools that can "play" nicely together and provide unique value without stepping on each other.
  5. Tests that enter CI should be picked very carefully. Here, it is not about the quantity of the tests but about the quality, stability, and value these tests can bring, with a focus on fast feedback. If a test requires heavy environment setup, is flaky by nature, or takes too much time to run, it might not be wise to include it in the CI.
  6. Addressing the various testing types in questions 6 and 7 depends on the project objectives; however, there is clear value for each of the following tests in mobile app quality assurance:
    1. Accessibility and performance provide key UX indicators about your app and should be automated as much as possible
    2. Security testing is a common oversight by many teams and should be covered through various code analysis, OWASP validations and more.
    3. Exploratory, manual and crowd testing provide another layer of test coverage and insights into your overall app quality, hence, should be in your test plan and divided throughout the DevOps cycle.
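
To illustrate the scoring idea from point 1 above, here is a small sketch of what such a rubric could look like in code. The factors, weights, and ratings below are purely illustrative placeholders, not Angie Jones' actual model.

// Illustrative sketch of a "should we automate this scenario?" score.
// Factors, weights, and ratings are hypothetical; tune them to your own context.
const factors = [
  { name: 'executionFrequency', weight: 3 }, // how often the scenario is exercised
  { name: 'businessImpact',     weight: 3 }, // cost of this scenario failing in production
  { name: 'historyOfDefects',   weight: 2 }, // has this area broken before?
  { name: 'manualEffort',       weight: 2 }  // how painful is it to test by hand?
];

// Each candidate scenario is rated 0-5 per factor.
function automationScore(ratings) {
  return factors.reduce(
    (total, factor) => total + factor.weight * (ratings[factor.name] || 0), 0);
}

const loginScenario = {
  executionFrequency: 5, businessImpact: 5, historyOfDefects: 3, manualEffort: 4
};
console.log('Login scenario score:', automationScore(loginScenario));
// The highest-scoring scenarios are the strongest automation candidates;
// the lowest-scoring ones can likely be skipped.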

Happy Testing

Getting Started With Headless Browser Testing

[Guest Blog by Uzi Eilon, CTO, Perfecto]

The "shift left" trend is actually happening: developers, as part of the DevOps pipeline, need to test more and add more test automation in order to release faster.

In addition, those tests are almost the last barrier before production, because traditional testing is going away.

In such a case, standard unit tests are not good enough, and E2E tests are complicated and require longer setup and preparation time.

This is the reason both Google and Mozilla released new JS headless browsers to help their developers execute automated tests.

The same happened in the mobile area, where Apple and Google released XCUITest and Espresso.

Headless browsers provide the following capabilities for developers:

  • Same language, same IDE, same working environment:
    Most web developers work with JS, so these browsers are JS platforms; to add a new test you open a new class and write standard JS code.
  • Fast feedback & execution:
    These tests need to be executed fast (sometimes on every commit); these browsers reduce the UI and rendering "noise", connect to the elements directly, and run very fast.
  • Easy to set up:
    Developers' time is expensive, and developers will not adopt complicated processes for testing; the setup of these tools is a simple npm installation.
  • Access to all the DevTools capabilities:
    Developers need more details; these tools give access to all the DevTools data, including accessibility, network, logs, security, and more.
    Smart tests can be very powerful and cover not only the functionality but also the efficiency.

In order to understand more, I played with Puppeteer, and I'm happy to share my thoughts with you.

Installation

Very simple

npm i --save puppeteer

Documentation

There are not a lot of examples or discussions about specific issues, but I did find the API documentation, which contains everything I was looking for.

Objects identification

Intuitive – the same way I connect to my objects in any JS code.

Example:

  • by id: page.type('#firstName', 'Uzi');
  • by class: page.type('.some-class', 'Uzi');

Sync and waiting for elements

In this case, I have to admit I struggled with the standard wait-for-navigation command; it was not stable:

await page.waitForNavigation({waitUntil: 'load'})

In the end I used the following:

await page.waitForSelector('#firstName', {visible: true}).then(() => {
  // do the actions per page
  page.screenshot({path: 'then.png', fullPage: false});
});
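
For context, here is how these fragments could fit together in one complete, runnable script. This is a minimal sketch: the URL and the '#firstName' selector are placeholders for your own page.

// Minimal end-to-end sketch with Puppeteer; the URL and the '#firstName'
// selector are placeholders.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto('https://example.com/form', { waitUntil: 'load' });

  // Synchronize on the element itself rather than on navigation events.
  await page.waitForSelector('#firstName', { visible: true });
  await page.type('#firstName', 'Uzi');

  // Capture the state of the page for later verification.
  await page.screenshot({ path: 'then.png', fullPage: false });

  await browser.close();
})();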

 

UI

As part of my test I tried to verify the screen by taking a screenshot; I liked the way I could change the browser UI capabilities and configure my page:

const fullScreen = {
  width: 1800,
  height: 2400,
  deviceScaleFactor: 1,
  hasTouch: false,
  isLandscape: false,
  isMobile: false
};
await page.setViewport(fullScreen);
// Note: fullPage is a screenshot option rather than a viewport one.
await page.screenshot({path: 'fullscreen.png', fullPage: true});

 

Other DevTools options:

It is very easy to use; for example, if I would like to walk through all the child frames of a page:

// dumpFrameTree is a recursive helper (defined elsewhere) that prints each frame and its children.
for (const child of frame.childFrames()) {
  dumpFrameTree(child, indent + '  ');
}

 

Summary:

Using a headless browser like Puppeteer was very easy and intuitive; it felt natural to add it as part of my testing code.

In addition, setting up the headless browser environment and executing was very simple and fast.

On the less convenient side, what I found was that to get the results directly into the CI, one needs to add more scripting code or use other execution methods.

Lastly, this method is still ramping up, hence it has some small bugs in a few features and also lacks documentation and samples; however, as an early testing tool for white-box/unit testing, it is very promising and well positioned to complement tools such as Selenium. As a matter of fact, other browser vendors such as Mozilla and Microsoft are taking the same approach and investing in headless browsers.

 

P.S.: If you want to learn more about the growing technologies and trends in the market, I encourage you to follow my podcast with Uzi Eilon called Testium (episode 6 is fully dedicated to this subject).

The Rise of Progressive Web Apps and The Impact on Cross-Browser Testing

Just when we all thought we had figured out the digital market from an application-type perspective, having seen the rise of mobile and the transformation of the web to responsive web, we should now start getting used to a whole new type of application that will change the entire user experience and offer new web functionality – meet PWAs.

Google.com is a very clear example of such an app, and Apple is about to introduce PWA capabilities in its upcoming WebKit engine.

What are Progressive Web Apps?

Referring to Google's official website dedicated to PWAs, Google defines a PWA as "a new way to deliver amazing user experiences on the web".

David Rubinstein from SD Times actually adds even more insight into these new app types:

PWAs can use device features like cameras, data storage, GPS and motion sensors, face detection, Push notifications, and more. This will pave the way for AR and VR experiences, right on the web. Imagine being able to redecorate your home virtually using nothing but your phone and a PWA. Pan your camera around a room, then use tools on a website to change wall colors, try out furniture, hang new artwork, and more. It may feel like a futuristic fantasy, but it’s close to reality.

The key idea behind PWAs is to provide a rich end-user alternative to native apps. These apps can be launched from the device home screen, adding layers of performance, reliability, and functionality to a web application without the need to install anything from the app store. In addition, these apps, which are still JavaScript based but use additional specific APIs, can work even when there's no internet connection – and that's a huge advantage.

PWA apps leverage 2 main architectural features:

  • Service Workers – give developers the ability to manually manage the caching of assets and control the experience when there is no network connectivity.
  • Web App Manifest – that's the file within the PWA that describes the app and provides app-specific metadata like icons, splash screens, and more. Below is an example Google offers for such a descriptor file (JSON):
{
  "short_name": "AirHorner",
  "name": "Kinlan's AirHorner of Infamy",
  "icons": [
    {
      "src": "launcher-icon-1x.png",
      "type": "image/png",
      "sizes": "48x48"
    },
    {
      "src": "launcher-icon-2x.png",
      "type": "image/png",
      "sizes": "96x96"
    },
    {
      "src": "launcher-icon-4x.png",
      "type": "image/png",
      "sizes": "192x192"
    }
  ],
  "start_url": "index.html?launcher=true"
}
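
For the first feature, registering a service worker is done from the page's own JavaScript. Here is a minimal sketch of a registration plus a basic cache-first fetch handler that keeps the app shell usable offline; the 'sw.js' file name and the cached asset list are placeholders.

// In the page: register the service worker ('/sw.js' is a placeholder path).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(reg => console.log('Service worker registered, scope:', reg.scope))
    .catch(err => console.log('Service worker registration failed:', err));
}

// In sw.js: a basic cache-first strategy so the app shell keeps working offline.
const CACHE_NAME = 'pwa-demo-v1';
const ASSETS = ['/', '/index.html', '/styles.css', '/app.js'];

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(ASSETS))
  );
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});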


In order to check the correctness of your PWA against the checklist and to validate the entire app, Google offers some tools as part of its documentation, such as the Progressive Web Apps checklist and the Chrome built-in DevTools.

As deeply covered in this great Dzone article, good PWAs also implement the PRPL pattern recommended by Google to enhance performance.

What Are the Implications of PWAs for Cross-Browser sites and Mobile Apps?

To understand the implications, I recommend dividing the question into the impacted Personas.

  1. Developers
  2. Testers
  3. Business
  4. End-Users

Each of the above personas will see different benefits and implications when adopting this kind of app.

Developers

For existing web developers, this new app type should present a whole new world of innovative opportunities. Since PWAs are still JavaScript based apps, developers do not need to gain new skills, but rather learn the new APIs offered through the Service Workers and see how they can be leveraged by their websites.  Since the PWA app runs on a mobile device and can be launched without a network connection and without any installation, obviously it needs to be validated by developers through unit and integration tests.

Going forward, the market envisions these apps impacting the native apps architecture in a way that there will only be 1 type of app that can seamlessly run on both browsers and mobile devices with one single implementation – that will require a heavier lift and re-work.

 

Testers

For testers, as with every new implementation, new tests (manual and automated) need to be developed, executed, and fit into the overall pipeline.

PWAs, in particular, introduce some unique use cases, such as:

  • No network operation (a short offline-testing sketch follows this list)
  • High performance 
  • Sensors based functionality (Location, Camera for AR/VR and more)
  • Cross-device functionality (like in Responsive, the experience should be the same regardless of the screen size/HW etc.)
  • Adhering to the design and checklist required by Google and soon Apple
  • Accessibility is always a need
  • Security of these apps (with and without being connected to the network)
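
For the no-network case, headless browser tooling already makes this scenario easy to automate. Here is a minimal sketch using Puppeteer; the URL and the '#offline-banner' selector are placeholders, and it assumes the app registers a service worker that caches its shell on the first online visit.

// Minimal sketch: verifying a PWA still responds with no network, using Puppeteer.
// The URL and the '#offline-banner' selector are placeholders.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // First visit online, so the service worker can install and cache the shell.
  await page.goto('https://example.com/pwa', { waitUntil: 'networkidle0' });

  // Now simulate losing connectivity and reload the app.
  await page.setOfflineMode(true);
  await page.reload({ waitUntil: 'load' });

  // The cached shell should still render; check for the offline indicator.
  await page.waitForSelector('#offline-banner', { visible: true, timeout: 5000 });
  console.log('PWA shell rendered while offline');

  await browser.close();
})();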

Business

For the business, the new app type should help increase end-user engagement. A web application that is richer in functionality, performs fast, and can be "always on" through an easy launch from the customer's device home screen should, by definition, increase usage and move the needle for the business. My assumption is that large enterprises are already looking into these types of apps as the next generation of RWD apps.

End Users

At the end of the day, all products are aiming for greater engagement with customers and to beat the competition. Obviously, if end users understand the value of these apps and can "feel" it in their day-to-day activities, this will be a clear win-win for both the organization and the customer.

To assure the end-user experience Google envisioned when it first launched this technology three years ago (2015), dev and test teams should continue their continuous testing activities and make sure they are covering sufficient platforms, features, and use cases between each release and each new release of a platform or device.

To conclude this blog, I highly recommend watching the short video and reading the blog from Mozilla on how PWAs live within Firefox and how different an experience users get from such apps (the Firefox Wego app runs within the Firefox browser in the background, with a PWA Wego app in the foreground).

Happy PWA Testing!

Mobile, Cross Browser Testing, DevOps and Continuous Testing Trends and Projections for 2018

As we are about to wrap up 2017, it's the right time to get ready for what's expected next year in the mobile, cross-browser testing, and DevOps landscape.

To categorize this post, I will divide the trends into the following buckets (there may be a few more points, but I believe the below are the most significant ones):

  • DevOps and Test Automation on Steroids Will Become Key for Digital Winners
  • Artificial Intelligence (AI) and Machine Learning (ML)/ Tools alignment as part of Smarter Testing throughout the pipeline
  • IOT and Digital Transformation Moving to Prime Time

 

DevOps and Automation on Steroids

In 2017, we saw tremendous adoption of more agile methods, ATDD, and BDD, with organizations leaving legacy tools behind in favor of faster, more reliable, agile-ready testing tools – tools that can fit the entire continuous testing effort, whether performed by Dev, BA, Test, or Ops.

In 2018, we will see the above grow to a higher scale, with more manual and legacy tool skills transforming into more modern ones. The growth in Continuous Testing (CT), Continuous Integration (CI), and DevOps will also translate into a much shorter release cadence as a bridge towards real Continuous Delivery (CD).

 

Related to the above, to be ready for the DevOps and CT trend, engineers need to become more deeply familiar with tools like Espresso, XCUITest, Earl Grey, and Appium on the mobile front, and with open-source web-based frameworks like the headless Google project called Puppeteer, Protractor, and other WebDriver-based frameworks.

In addition, teams should optimize their test automation suites to include more API and non-functional testing, as the UX aspect becomes more and more important.

Shifting as many tests as possible left and right is not a new trend, requirement, or buzzword – nothing has changed in my mind about the importance of this practice – the more you can automate and cover earlier, the easier it will be for the entire team to overcome issues, regressions, and unexpected events that occur in the project life cycle.

AI, ML, and Smarter Test Automation

While many vendors are seeking tools that can optimize their test automation suites and shorten overall execution time on the "right" platforms, the two terms AI and ML (or deep learning) are still unclear to many tool vendors and are being used from varying perspectives that do not always mean actual AI or ML 🙂

The end goal of such solutions is very clear, and the problem they aim to solve is real: long testing cycles across plenty of mobile devices, desktop browsers, IoT devices, and more generate a lot of data to analyze and, as a result, slow down the DevOps engine. Efficient mechanisms and tools that can crawl through the entire test code, understand which tests are the most valuable, and determine which platforms are the most critical to test on – due to either customer usage or history of issues – can clearly address such pain.

Another angle or goal of such tools is to continuously provide more reliable and faster test code generation. Coding takes time, requires skills, and varies across platforms. Having a "working" ML/AI tool that can scan through the app under test and generate a robust page object model and functional test code that runs on all platforms, as well as "responds" to changes in the UI, can really speed up time to market for many organizations and focus the teams on the important SDLC activities, as opposed to forcing Dev and Test to spend precious time on test code maintenance.

IOT and The Digital Transformation

In 2017, Google, Apple, Amazon, and other technology giants announced a few innovations around digital engagement. To name a few: better digital payments, better digital TV, AR and VR development APIs, and new secure authentication through Face ID. IoT hasn't shown a huge leap forward this year; however, what I did notice was that for specific verticals like healthcare and retail, IoT started serving a key role in their digital user engagement and digital strategy.

In 2018, I believe that the market will see an even more advanced wave in the overall digital landscape, with Android and Apple TV, IoT devices, smart watches, and other digital interfaces becoming more standard in the industry, requiring enterprises to re-think and re-build their entire test lab to fit these new devices.

Such a trend will also force test engineers to adapt to the new platforms and re-architect their test frameworks to support more of these screens, either in one script or several.

Some insights on testing IoT, specifically in the healthcare vertical, were recently presented by my colleague Amir Rozenberg – I recommend reviewing the slides below:

https://www.slideshare.net/AmirRozenberg/starwest-2017-iot-testing/ 

 

Bottom Line

Do not immediately change whatever you do today, but validate whether what you have right now is future ready and can sustain what’s coming in the near future as mentioned above.

If DevOps is already in practice in your organization, fine – make sure you can scale DevOps, shorten release time, increase test and platform automation coverage, and optimize your overall pipeline through smarter techniques.

The AI and ML buzz is real; however, the market needs to properly define what it means to introduce these into the SDLC and what success would look like for teams that do consider leveraging them. From a landscape perspective, these tools are not yet mature and ready for prime time, which leaves more time to properly get ready for them.

Happy New 2018 to My Followers.

Enabling Mobile Testing In a Fast Growing DevOps Reality

Six months ago I launched my first book, "The Digital Quality Handbook".

The book aims to address the key challenges in assuring high mobile (as well as web) quality, by avoiding pitfalls that are commonly practiced in the industry.

I have also recently joined the working group of ISTQB to influence the material in the mobile certification course, where I plan to include insights from the book as well.

In this book, I host top leaders from the industry, touching on the most important aspects of assuring quality in a DevOps reality.

Amazon recommends my book alongside the leading DevOps practitioner books – another strong validation of the book's relevancy and value.

A few highlights from the book are below:

  1. Shifting quality left and right, to cover as many tests as possible automatically throughout the release pipeline, is key to moving faster and identifying issues earlier in the process (Angie Jones from Twitter, Manish Maturia from InfoStretch, and others provide practitioner-level insights and tips)
  2. Testing on the right platforms and OSs is key to assuring high quality across different devices (new, legacy, popular) in various locations and environments
    1. In the book I refer to this magazine, which I author on a quarterly basis, and I highly recommend subscribing to receive this free asset upon each release: http://info.perfectomobile.com/factors-magazine.html
  3. Robust automation is achieved through best practices such as building a page object model (POM) and using unique object locators rather than flaky XPaths (a short POM sketch follows this list). I also refer to a free online tool that can help score your objects as part of your test automation development: http://xpathvalidator.projectquantum.io/
  4. Testing not only via the UI is another key to success, so complementing UI testing with API-level testing can reduce testing time, provide faster feedback, and deliver other value. This chapter was actually developed by my twin brother Lior Kinbruner 🙂 – worth checking it out!
  5. Performance testing and UX are another challenge and key to success. A full section of the book is dedicated to wind tunnel testing and user experience testing (JeanAnn Harrison contributes a lot here, together with Amir Rozenberg).
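
On point 3, a page object with stable locators might look roughly like the sketch below. The login page, its element IDs, and the driver interface (with simple type/click helpers) are all hypothetical; the point is that tests talk to the page object rather than to raw locators.

// Rough page object model (POM) sketch; the page, selectors, and driver
// interface are hypothetical placeholders.
class LoginPage {
  constructor(driver) {
    this.driver = driver;
    // Unique, stable identifiers instead of brittle layout-based XPaths.
    this.selectors = {
      username: '#username',
      password: '#password',
      submit: '#login-button'
    };
  }

  async login(user, password) {
    await this.driver.type(this.selectors.username, user);
    await this.driver.type(this.selectors.password, password);
    await this.driver.click(this.selectors.submit);
  }
}

module.exports = LoginPage;
// A locator change now touches one file instead of every test that logs in.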

The book was #1 among new best-selling books on Amazon and is still rocking today, more than six months later. As of today it is #43 in the overall Software Testing books category, which is a great validation and honor for me and the contributors.

 

If you still haven't got a copy of the book, I really encourage you to get one – I am already planning my next journey, so stay tuned 🙂

Complementing Cross-Browser Testing with Headless Unit Testing Solutions

Nothing new in the land of cross-browser testing. Selenium as the underlying API layer serves leading frameworks including WebDriverIO, Protractor (Angular based testing), NightWatchJS, RobotJS and many others.

For web application developers who require fast feedback after a code commit or bug resolution, there are various testing options. Some will quickly test manually on a set of local or cloud-based VMs, some will develop unit tests (QUnit, etc.), but there are also very mature cross-browser testing solutions that add more layers of coverage and insight in an automated and easy way.

In a recent eBook that I developed, I’m covering the 10 emerging cross-browser testing tools with a set of considerations around how to choose the right one or the right mix of them.

Among the 10 tools covered, there is a mix of unit as well as E2E functional testing tools, mostly JavaScript based.

Developers who would like to include, as part of their quick post-commit sanity, a validation of the time it takes the site to load can easily add a PhantomJS-based test into their CI post-build acceptance testing and gain that visibility after each successful build – then match the result against a benchmark and take decisions.
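
Such a check can be as small as the sketch below, modeled on PhantomJS's load-speed sample; the target URL and the 3-second budget are placeholders to adjust to your own benchmark.

// Minimal sketch: measure page load time and surface page errors with PhantomJS.
// The URL and the time budget are placeholders.
var page = require('webpage').create();
var url = 'https://www.nfl.com';
var budgetMs = 3000;
var start = Date.now();

page.onError = function (msg) {
  console.log('Page error: ' + msg);
};

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('FAIL: unable to load ' + url);
    phantom.exit(1);
  } else {
    var elapsed = Date.now() - start;
    console.log('Loaded ' + url + ' in ' + elapsed + ' ms');
    // Fail the build step if the load time exceeds the agreed benchmark.
    phantom.exit(elapsed > budgetMs ? 1 : 0);
  }
});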

In a quick test that I ran on the NFL.com website, I was able not only to detect a slow load of 10 seconds, but I also identified a long list of errors while the page loaded.

Another powerful capability that tools like PhantomJS offer is the ability both to capture a rendering of a web page at a pre-defined viewport and to generate a page HAR file for network traffic analysis (I am aware that it is not the newest tool, and that Google already provides a newer alternative, but it is still a valuable, free, open-source tool that can add coverage capabilities to any web development team).

So if, as an example, the load time and errors above turn on a red light regarding that site, the sample scripts that PhantomJS provides in its starter kit on GitHub let the developer address both of the above use cases: HAR file generation and a page rendering screenshot.

The HAR file created from that sample can then be opened in any HAR viewer (I am using the HTTP Archive Viewer add-on for Chrome, but it can be done just as simply with other HAR viewers).

Bottom line

You can download my latest eBook to learn more, but in general – leverage both powerful unit testing tools and traditional E2E tests, since they complement each other and each adds unique value – and it's free!

Happy Testing!

Optimizing Mobile Test Automation Across The Pipeline

With the massive innovation that drives the digital market these days, organizations are continuing to develop features, as well as new test code to cover these features.

What I've learned is that test code developers do not always stop and look back into their existing test suites to validate whether the new tests being developed are somehow a superset of existing ones. In addition, legacy tests become a continuous load and overhead on your SDLC cycle length if they are not maintained over time.


Many Owners To The Same Problem

Since we live in an agile/DevQAOps world, test code development is not a QA-only problem, but rather everyone's. Tests are executed throughout the pipeline, from dev to integration and pre/post-production testing.

Using a smart tagging mechanism for your test scenarios (login), suites (App A), and types (unit, regression) can be a good step towards gaining control over your tests.

Without some context, discipline, and continuous structured validation of the tests, it will become harder, as your SDLC progresses, to debug, analyze, and solve defects – it would be like finding a key in a visual mess.


Recommended Practices

  • Develop the tests with context, tags, and proper annotations that will make sense to you and your team even 12 months from the development day (a small tagging sketch follows this list). Make sure that in your execution reports you then have a way to filter using these annotations, so you can get a view of only a given functional area, platform, etc.
  • Match your devices under test to the capabilities of the test code and the application under test. Make sure, for example, that you run your fingerprint-based tests only on the devices that support the feature (API XX and above).
  • Perform a test code review at every agreed-upon interval – in such a review, group your feature-specific test suites and try to optimize, merge, eliminate flakiness, identify missing coverage areas, etc. It gets harder to do as time progresses, so depending on your release cadence and test development maturity, set the right goals – more reviews are better than fewer, and each will also be shorter and more efficient that way, since the delta between reviews will be smaller.
  • Drive joint Dev, Test, Product, and Marketing decisions based on data – when you have the ability to get quality analysis from your entire test suites, it is recommended to gather all counterparts and brainstorm on the findings: which tests are most effective, can we shrink the release cycles based on the data, are we missing tests for specific areas, are there platforms that are buggier than others, which tests take longer than others to finish, etc.
  • Optimize your CI and build-acceptance testing – based on the above intelligence, teams can reach data-driven decisions about what to include in their CI as well. Testing in the build cycle via CI should be fast and reliable, with zero false positives. With quality insights on your tests, you can certify the most valuable and fastest tests to get into this CI testing, and thereby shrink the overall process without risking coverage.
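
As a small illustration of the tagging idea from the first bullet, here is a sketch that embeds tags in Mocha-style test names and filters on them at run time; the scenario names and tags are placeholders, and most test frameworks offer an equivalent filtering mechanism.

// Sketch: embedding tags in test names and filtering on them at run time.
// Scenario names and tags are placeholders; Mocha's --grep flag does the filtering.
const assert = require('assert');

describe('Login flow @regression @android @appA', function () {
  it('logs in with valid credentials @smoke', async function () {
    // ... drive the app and assert the expected landing screen ...
    assert.ok(true);
  });

  it('rejects an invalid password @negative', async function () {
    assert.ok(true);
  });
});

// Run only the fast smoke subset in CI:
//   npx mocha --grep "@smoke"
// Run the full Android regression suite nightly:
//   npx mocha --grep "@android"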


Bottom Line

A test is code, and just as you refactor, maintain, retire, and improve your code, you should do the same for your tests. Make sure you are always in control of your tests, and by that, gain continuous control over the quality of your app.

Happy Testing!
