Most Recommended Software Testing Books to Read in 2020 and Beyond

As we kick off a new decade of software development and testing, and as digital continues to challenge test automation engineers trying to fit testing into shorter-than-ever cycles, here are the top recommended software testing books to consider. The order isn’t a ranking – they are all awesome books and equally recommended:

#1 Agile Testing – Lisa Crispin and Janet Gregory

Readers will come away from this book understanding

  • How to get testers engaged in agile development
  • Where testers and QA managers fit on an agile team
  • What to look for when hiring an agile tester
  • How to transition from a traditional cycle to agile development
  • How to complete testing activities in short iterations
  • How to use tests to successfully guide development
  • How to overcome barriers to test automation

#2 More Agile Testing – Lisa Crispin and Janet Gregory

Main learning objectives from the book:

  • How to clarify testing activities within the team
  • Ways to collaborate with business experts to identify valuable features and deliver the right capabilities
  • How to design automated tests for superior reliability and easier maintenance
  • How agile team members can improve and expand their testing skills
  • How to plan “just enough,” balancing small increments with larger feature sets and the entire system
  • How to use testing to identify and mitigate risks associated with your current agile processes and to prevent defects
  • How to address challenges within your product or organizational context
  • How to perform exploratory testing using “personas” and “tours”
  • Exploratory testing approaches that engage the whole team, using test charters with session- and thread-based techniques
  • How to bring new agile testers up to speed quickly–without overwhelming them

#3 Hands-On Mobile App Testing – Daniel Knott

Readers will learn how to

  • Establish your optimal mobile test and launch strategy
  • Create tests that reflect your customers, data networks, devices, and business models
  • Choose and implement the best Android and iOS testing tools
  • Automate testing while ensuring comprehensive coverage
  • Master both functional and nonfunctional approaches to testing
  • Address mobile’s rapid release cycles
  • Test on emulators, simulators, and actual devices
  • Test native, hybrid, and Web mobile apps
  • Gain value from crowd and cloud testing (and understand their limitations)
  • Test database access and local storage
  • Drive value from testing throughout your app life-cycle
  • Start testing wearables, connected homes/cars, and Internet of Things devices

#4 Enterprise Continuous Testing – Wolfgang Platz

Enterprise Continuous Testing: Transforming Testing for Agile and DevOps introduces a Continuous Testing strategy that helps enterprises accelerate and prioritize testing to meet the needs of fast-paced Agile and DevOps initiatives. Software testing has traditionally been the enemy of speed and innovation—a slow, costly process that delays releases while delivering questionable business value. This new strategy helps you test smarter, so testing provides rapid insight into what matters most to the business.

#5 Continuous Testing for DevOps Professionals – Eran Kinsbruner and Leading Market Experts

Continuous Testing for DevOps Professionals is the definitive guide for DevOps teams and covers the best practices required to excel at Continuous Testing (CT) at each step of the DevOps pipeline. It was developed in collaboration with top industry experts from across the DevOps domain, from leading companies such as CloudBees, Tricentis, Testim.io, Test.ai, Perfecto, and many more. The book is aimed at all DevOps practitioners, including software developers, testers, operations managers, and IT/business executives.

#6 Accelerate: The Science of Lean Software and DevOps, Nicole Forsgren

How can we apply technology to drive business value? For years, we’ve been told that the performance of software delivery teams doesn’t matter―that it can’t provide a competitive advantage to our companies. Through four years of groundbreaking research, including data collected from the State of DevOps reports conducted with Puppet, Dr. Nicole Forsgren, Jez Humble, and Gene Kim set out to find a way to measure software delivery performance―and what drives it―using rigorous statistical methods. This book presents both the findings and the science behind that research, making the information accessible for readers to apply in their own organizations.

#7 Complete Guide to Test Automation, Arnon Axelrod

  • Know the real value to be expected from test automation
  • Discover the key traits that will make your test automation project succeed
  • Be aware of the different considerations to take into account when planning automated tests vs. manual tests
  • Determine who should implement the tests and the implications of this decision
  • Architect the test project and fit it to the architecture of the tested application
  • Design and implement highly reliable automated tests
  • Begin gaining value from test automation earlier
  • Integrate test automation into the business processes of the development team
  • Leverage test automation to improve your organization’s performance and quality, even without formal authority
  • Understand how different types of automated tests will fit into your testing strategy, including unit testing, load and performance testing, visual testing, and more

#8 The DevOps Handbook, Gene Kim, Jez Humble, Patrick Debois, John Willis

More than ever, the effective management of technology is critical for business competitiveness. For decades, technology leaders have struggled to balance agility, reliability, and security. The consequences of failure have never been greater―whether it’s the healthcare.gov debacle, cardholder data breaches, or missing the boat with Big Data in the cloud. And yet, high performers using DevOps principles, such as Google, Amazon, Facebook, Etsy, and Netflix, are routinely and reliably deploying code into production hundreds, or even thousands, of times per day. Following in the footsteps of The Phoenix Project, The DevOps Handbook shows leaders how to replicate these incredible outcomes by showing how to integrate Product Management, Development, QA, IT Operations, and Information Security to elevate your company and win in the marketplace.

#9 The Digital Quality Handbook – Eran Kinsbruner and Industry Experts

As mobile and web technologies continue to expand and drive business in virtually every vertical and industry, it is critical to understand how to take existing release practices for mobile and web apps to the next level, including the software development life cycle (SDLC), tools, quality, and more. Organizations that are already enjoying the power of digital are still struggling with various challenges that can be traced back to many factors, such as:

  • SDLC and processes maturity
  • Expanding test coverage to include more non-functional testing, user condition testing, etc.
  • Coping with existing limitations of open source tools and frameworks
  • Sustaining correctly sized and up-to-date mobile test labs
  • Getting proper quality insights from each test cycle, both pre- and post-production
  • Wisely branching cross-platform and cross-feature test suites

#10 Specification by Example: How Successful Teams Deliver the Right Software, Gojko Adzic

Gojko has written a number of great books; this is one of them, but check out his other books as well.

Specification by Example is an emerging practice for creating software based on realistic examples, bridging the communication gap between business stakeholders and the dev teams building the software. In this book, author Gojko Adzic distills interviews with successful teams worldwide, sharing how they specify, develop, and deliver software, without defects, in short iterative delivery cycles.

#11 Clean Code: A Handbook of Agile Software Craftsmanship, Robert C. Martin

Readers will come away from this book understanding

  • How to tell the difference between good and bad code
  • How to write good code and how to transform bad code into good code
  • How to create good names, good functions, good objects, and good classes
  • How to format code for maximum readability
  • How to implement complete error handling without obscuring code logic
  • How to unit test and practice test-driven development

#12 Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Jez Humble and David Farley

The authors introduce state-of-the-art techniques, including automated infrastructure management and data migration, and the use of virtualization. For each, they review key issues, identify best practices, and demonstrate how to mitigate risks. Coverage includes

  • Automating all facets of building, integrating, testing, and deploying software
  • Implementing deployment pipelines at team and organizational levels
  • Improving collaboration between developers, testers, and operations
  • Developing features incrementally on large and distributed teams
  • Implementing an effective configuration management strategy
  • Automating acceptance testing, from analysis to implementation
  • Testing capacity and other non-functional requirements
  • Implementing continuous deployment and zero-downtime releases
  • Managing infrastructure, data, components and dependencies
  • Navigating risk management, compliance, and auditing

#13 The Guide to Software Testability, Ash Coleman and Rob Meaney

Learn practical insights on how testability can help bring teams together to observe, control, and understand the systems they build, enabling them to better meet customer needs and achieve a transparent level of quality and predictability of delivery.

#14 A Practical Guide to Testing in DevOps, Katrina Clokie

A Practical Guide to Testing in DevOps offers direction and advice to anyone involved in testing in a DevOps environment.

#15 Test Automation University, Angie Jones and Applitools

While not a book, a great shout-out to my colleague, friend, and co-author of one of the books mentioned above, Angie Jones, for launching, leading, and building Test Automation University. This is an additional online resource that complements books and other written material.

Summary

I am sure there are plenty of other great and practical books, but I went with this list as a start. If you feel an additional up-to-date book belongs here, reach out to me, and I will be more than happy to add it to this list.

Happy Reading!

 

 

Eliminating Mobile Test Automation Flakiness and More

Mobile testing by definition is an unstable, flaky and unpredictable activity.

Even when you think you have covered all corners and created a “stable” environment, your test cycle often gets stuck due to one or a few items.

In this post, I’ll try to identify some of the key root causes for test automation flakiness, and suggest some preventive actions to eliminate them.

What Can Block Your Test Automation Flow?

From ongoing experience, the key items that often block test automation of mobile apps are the following:

  • Popups – security, available OS upgrades, login issues, etc.
  • Ready state of the devices under test (DUTs) – the test meets the device in the wrong state
  • Environment – device battery level, network connectivity, etc.
  • Tools and test framework fit – are you using the right tool for the job?
  • Use of the “right” object identifiers, a page object model (POM), and other automation best practices
  • Automation at Scale – what to automate, on what platforms?

All of the above contribute in one way or another to the end-to-end test automation execution.

We can divide the above 6 bullets into 2 sections:

  1. Environment
  2. Best Practices

Solving The Environment Factor in Mobile Test Automation

In order to address the test environment contribution to test flakiness, engineers need to have full control over the environment they operate in.

If the test environment and the devices under test (DUT) are not fully managed, controlled, and secured, the entire operation is at risk. In addition, when we use the term “test environment readiness,” it should reflect the following:

  1. Devices are always cleaned up prior to test execution, or are in a “state”/baseline known to the developers and testers
  2. If there are repetitive known popups such as security permissions, install/uninstall popups, OS upgrades, or other app-specific popups, they should be accounted for either in the pre-requisites of the test or should be prevented proactively prior to the execution.
  3. Network instability is often a key cause of flaky testing – engineers need to make sure the devices are connected to a WiFi or cellular network before test execution starts. This can be done either as a pre-requisite validation of the network or through more generic environment monitoring (see the sketch below).
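
To make points 2 and 3 concrete, below is a minimal sketch of a pre-test readiness step, assuming an Appium 2.x server with the UiAutomator2 driver, a recent Appium Java client, and adb available on the PATH; the server URL, device serial, and app package/activity are placeholders. It suppresses runtime-permission popups up front, resets app data to a known baseline, and refuses to start the session if Wi-Fi is disabled on the device.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    import org.openqa.selenium.remote.DesiredCapabilities;

    import io.appium.java_client.android.AndroidDriver;

    public class EnvironmentReadyCheck {

        // Pre-requisite: is Wi-Fi enabled on the device? (adb must be on the PATH)
        static boolean wifiEnabled(String serial) throws Exception {
            Process p = new ProcessBuilder(
                    "adb", "-s", serial, "shell", "settings", "get", "global", "wifi_on").start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String value = r.readLine();
                return value != null && "1".equals(value.trim());
            }
        }

        public static AndroidDriver startSession(String serial) throws Exception {
            if (!wifiEnabled(serial)) {
                throw new IllegalStateException("Device " + serial + " has Wi-Fi off - failing fast, not mid-test");
            }
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("appium:automationName", "UiAutomator2");
            caps.setCapability("appium:udid", serial);
            caps.setCapability("appium:appPackage", "com.example.app");   // hypothetical app under test
            caps.setCapability("appium:appActivity", ".MainActivity");    // hypothetical launch activity
            caps.setCapability("appium:autoGrantPermissions", true);      // suppress runtime-permission popups
            caps.setCapability("appium:noReset", false);                  // clear app data for a known baseline
            // On iOS, "appium:autoAcceptAlerts" plays a similar role for system alert popups.
            return new AndroidDriver(new URL("http://127.0.0.1:4723"), caps);  // placeholder Appium server URL
        }
    }

A fuller readiness check could also validate battery level (for example via adb shell dumpsys battery) before handing the device over to the test.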

 

Following Best Practices

In previous blogs, I addressed the importance of selecting the right testing frameworks and IDEs, as well as leveraging the cloud as part of test automation at scale. After eliminating the test environment risks in the section above, it is important to make sure that both developers and test automation engineers follow proper guidelines and practices for their test automation workflow.

Since testing has shifted left toward the development team, it is important that both dev and test align on a few things:

  1. What to automate?
  2. On what platforms to test?
  3. How to automate (best practices)?
  4. Which tools should be used to automate?
  5. What goes into CI, and what is left outside?
  6. What is the role of Manual and Exploratory testing in the overall cycle?
  7. What is the role of Non-Functional testing in the cycle?

The above points (a partial list) cover some fundamental questions that each individual should keep asking continuously to ensure their team is heading in the right direction.

Each of the above bullets can be tied to at least one, if not many, best practices.

  1. To address the key question of what to automate, there is a great tool provided by Angie Jones. In her suggested approach, each test scenario is validated against a set of metrics that add up to a score. The highest-scoring test cases are great candidates for automation, while the lowest-scoring ones can obviously be skipped.
  2. To address the second question on platform selection, teams should monitor their ongoing web and mobile traffic, perform market research, and learn from existing market reports/guides that address test coverage.
  3. Answering the question of “how to automate” is a big one :). There is more than one thing to keep in mind, but in general, automation should be repetitive, stable, and time- and cost-efficient – if something prevents one or more of these objectives, it’s a sign that you’re not following best practices. Some best practices revolve around using proper object identifiers; others involve building the automation framework with proper tags and error handling, so that when something breaks it is easy to address, and more.
  4. The question around tools and test frameworks is again a big one. Answering it right depends on the project requirements and complexity, the application type (native, web, responsive, PWA), and the test types (functional, non-functional, unit). In the end, it is important to have a mix of tools that can “play” nicely together and provide unique value without stepping on each other.
  5. Tests that enter CI should be picked very carefully. Here, it is not about the quantity of tests but about the quality, stability, and value they bring, with a focus on fast feedback. If a test requires heavy environment setup, is flaky by nature, or takes too much time to run, it might not be wise to include it in CI (see the tagging sketch after this list).
  6. Addressing the various testing types in questions 6 and 7 depends on the project objectives; however, there is clear value in each of the following tests for mobile app quality assurance:
    1. Accessibility and performance provide key UX indicators about your app and should be automated as much as possible
    2. Security testing is commonly overlooked by many teams and should be covered through code analysis, OWASP validations, and more.
    3. Exploratory, manual and crowd testing provide another layer of test coverage and insights into your overall app quality, hence, should be in your test plan and divided throughout the DevOps cycle.
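
As an illustration of point 5, tagging is one lightweight way to make the CI bucket explicit. The sketch below assumes JUnit 5; the test names and tags are hypothetical.

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class LoginTests {

        @Test
        @Tag("ci")        // fast, stable, high-value: qualifies for the commit-triggered CI bucket
        @Tag("login")
        void validUserCanLogIn() {
            // ... drive the login flow against the app under test ...
            assertTrue(true); // placeholder assertion for the sketch
        }

        @Test
        @Tag("nightly")   // heavy environment setup / longer runtime: kept out of the CI bucket
        void loginSurvivesNetworkSwitch() {
            assertTrue(true); // placeholder assertion for the sketch
        }
    }

A build tool can then run only the tagged bucket in the commit stage (with Maven Surefire, for example, something like mvn test -Dgroups=ci), while the heavier suites run in nightly jobs.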

Happy Testing!

Could The Galaxy Note Be The Gadget of the Year?

A Guest Blog Post by Alli Davis.

The Samsung Galaxy Note 8 is one of the best phones in the smartphone manufacturer’s lineup. The Note 8 is revered for its large and gorgeous-looking display, fast performance, and improved S Pen, among other features. Despite its release in the latter half of 2017, the Samsung Galaxy Note has already received gadget of the year awards and other accolades. Here’s what the awarding bodies have to say about the Samsung Galaxy Note 8.

Exhibit Tech Awards

The Note 8 received the Flagship Smartphone of the Year award at the Exhibit Tech Awards in India. It received the award for its screen size and battery life, as well as a great camera. The award is the most recent one for the Note 8, having come in late December of 2017.

Considering where Samsung was with the Note 7, the Note 8 is nothing short of an amazing comeback. After the previous iteration’s battery issues, some reports suggested that the company might be considering ending the Galaxy Note line with the 7. But coming out with the next-generation 8 proved to be the right move.

India Mobile Congress

Just over one month after the Samsung Galaxy Note 8 came out in August, the India Mobile Congress awarded the device its Gadget of the Year honor. The inaugural event’s awards honor innovations and initiatives designed to better the ICT ecosystem. The ICT ecosystem refers to everything that goes into making a technological environment for a business or government. This includes things like policies, processes, applications, and of course technologies.

Samsung introduced the Galaxy Note 8 in the Indian marketplace as an effort to give consumers the opportunity to do more with their smartphones. Upon receiving the award, Asim Warsi, the Senior VP of Mobile Business at Samsung India, confirmed the effort to improve lives in India. Introducing the phone into the premium sector of the market has only strengthened the company as a leader in that category. It leads the way not only with a great screen and camera system but also with Samsung Pay and Samsung Knox, the security feature of the phone.

MySmartPrice Mobile Of The Year Awards

The MySmartPrice awards also honored Samsung’s Galaxy Note 8 in December 2017. The device received MySmartPrice’s Flagship Smartphone of the Year 2017 award as the most feature-packed phone in this category. Its 6.3-inch AMOLED screen and camera features captured attention for the award. It was slightly out-edged by the iPhone X and Google Pixel 2 in areas such as the screen and camera, but the phone manages to hold its own among the stiff competition in the marketplace.

The Galaxy Note’s Snapdragon processor makes it a very good option for day-to-day use, according to reviews, and the phone performs relatively well in low-light conditions. The reduction in price also makes it stand out amongst the competition. In the higher-end phone market, Samsung prices itself quite well for the features it offers.

At a time when Samsung could have ended the Galaxy Note line forever after the Note 7’s battery issue, the company persevered with its Note 8 and the risk paid off. In a highly competitive marketplace, Samsung still produces a Note that is productive, powerful, and innovative, with features and a price that a wide variety of customers can enjoy. With all of the accolades that the Samsung Galaxy Note 8 has received, the future looks bright for future iterations of the phone.

Alissa B. Davis is a freelance writer who has a passion for writing about technology and stories about her community. Read her latest stories at http://alissabdavis.com/

Continuous Testing Principles for Cross Browser Testing and Mobile Apps

The majority of organizations are already deep into Agile practices, with the goal of becoming DevOps and continuous delivery (CD) compliant.

While some may say that maximizing the percentage of test automation would bring these organizations toward DevOps, it takes more than just test automation.

To mature DevOps practices, a continuous testing approach needs to be in place, and it involves more than automating functional and non-functional testing. Test automation is obviously a key enabler for being agile, releasing software faster, and addressing market events; however, continuous testing (CT) requires some additional considerations.

Tricentis defines CT as follows:

CT is the process of executing automated tests as part of the software delivery pipeline in order to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. It evolves and extends test automation to address the increased complexity and pace of modern application development and delivery

The above suggests that a CT process would include a high degree of test automation, with a risk-based approach and a fast feedback loop back to developers upon each product iteration.

How to Implement CT?

  • A risk-based approach means sufficient coverage of the right platforms (browsers and mobile devices) – such platform coverage reduces business risk and assures a high-quality user experience. This coverage requires continuous maintenance as the market changes.
  • Continuous testing needs automated end-to-end testing that integrates with existing development processes while excluding errors and enabling continuity throughout the SDLC. That principle can be broken down as follows:
    • Implement the “right” tests and shift them into the build process, to be executed upon each code commit. Only reliable, stable, and high-value tests would qualify to enter this CT test bucket.
    • Assure the CT test bucket runs within a single CI – in CT, there is no room for multiple CI channels.
    • Leverage reporting and analytics dashboards to reach “smart” testing decisions and actionable feedback that support a continuous testing workflow. As the product matures, tests need maintenance, and some may be retired and replaced with newer ones.
  • A stable lab and test environment is key to ongoing CT processes. The lab should be at the heart of your CT and should support the above platform coverage requirements, as well as the CT test suite and the test frameworks that were used to develop those tests.
  • Where possible, utilize artificial intelligence (AI) and machine learning (ML)/deep learning (DL) solutions to better optimize your CT test suite and shorten the overall release activities.

  • Continuous testing is seamlessly integrated into the software delivery pipeline and DevOps toolchain – as mentioned above, regardless of the test frameworks, IDEs, and environments (front-end, back-end, etc.) used within the DevOps pipeline, CT should pick up all relevant testing (unit, functional, regression, performance, monitoring, accessibility, and more) and execute it in parallel and in an unattended fashion, to provide a “single voice” for a Go/No-Go release that happens every 2-3 weeks (a minimal sketch of such a tag-driven, unattended gate follows below).
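
As a minimal sketch of such a gate, assuming the CT bucket is built on JUnit 5 and every qualifying test carries a "ct" tag (the package name is hypothetical), a small runner can discover the tagged tests, execute them in parallel, and turn the summary into a Go/No-Go exit code for the pipeline:

    import static org.junit.platform.engine.discovery.DiscoverySelectors.selectPackage;

    import java.io.PrintWriter;

    import org.junit.platform.launcher.Launcher;
    import org.junit.platform.launcher.LauncherDiscoveryRequest;
    import org.junit.platform.launcher.TagFilter;
    import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
    import org.junit.platform.launcher.core.LauncherFactory;
    import org.junit.platform.launcher.listeners.SummaryGeneratingListener;

    public class CtSuiteRunner {
        public static void main(String[] args) {
            LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                    .selectors(selectPackage("com.example.tests"))   // hypothetical test package
                    .filters(TagFilter.includeTags("ct"))            // only the curated CT bucket
                    .configurationParameter("junit.jupiter.execution.parallel.enabled", "true")
                    .configurationParameter("junit.jupiter.execution.parallel.mode.default", "concurrent")
                    .build();

            SummaryGeneratingListener listener = new SummaryGeneratingListener();
            Launcher launcher = LauncherFactory.create();
            launcher.registerTestExecutionListeners(listener);
            launcher.execute(request);

            listener.getSummary().printTo(new PrintWriter(System.out));
            // Fail the pipeline step when anything in the CT bucket fails (the No-Go signal).
            if (listener.getSummary().getTotalFailureCount() > 0) {
                System.exit(1);
            }
        }
    }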

Lastly, for a CT practice to work time after time, the above principles need to be continuously optimized, maintained, and adjusted as things change, whether within the product roadmap or in the market.

Happy CT!

The Essentials of iOS App Testing For iPhone X

48 hours ago, Apple revealed its new and futuristic iPhone X. Beyond its design and debatable price tag, this device also introduces a whole new set of functionality, display characteristics, and ways of engaging with the end user.

iOS 11 is turning out to be quite different from previous releases, both in user adoption, which is still low (~30%), and from a quality perspective – four patch releases in 1.5 months is a lot.

Many of the changes are already proving to cause issues for existing apps that work fine on iOS 11.x and on former iPhones like the iPhone 8, 7, and others.

In this post, I’ll highlight some steps that testers as well as developers ought to take immediately, if they haven’t done so already, to make sure their apps are compliant with the latest Apple mobile portfolio.

The post will be divided into 2 areas: Mobile testing recommendations and App Development recommendations.

Mobile Testing for iPhone X/iOS 11

  • As a general statement, test across all supported platforms. iOS 11 isn’t available for every device, and some devices are stuck on iOS 10, which has different functionality than iOS 11. Test your apps across iOS 9.3.5, iOS 10.3.3, and the latest iOS 11.x.
  • The iPhone X comes from the factory with iOS 11.0.1 and prompts for an update to iOS 11.1 – that means this device will never get the intermediate iOS 11.0.2/iOS 11.0.3 releases. If your customers haven’t yet updated to iOS 11.1, you may want to keep one device like an iPhone 8/7 still on iOS 11.0.3 so you have coverage for iOS 11 latest-1.
  • The display and screen size changed specifically for the iPhone X: this device has a 5.8” screen that is different from all other iPhones. Testing UI elements, responsive app layouts, and other graphics on this new device is obviously a must (for example, I already found UI issues in the CVS native app while playing with the device). This device also has a full-screen design, similar to the Samsung S8/Note 8 devices. A lot of tables, text fields, and other UI elements need to be made iOS 11/iPhone X ready by the developers.


  • New gestures and engagement flows impact usability as well as test automation scripts. On the iPhone X, unlike previous iPhones, the user has no Home button to work with. That means that in order to launch the task manager and switch to or kill a background running app, the user needs to follow a different flow. App testing teams first need to make sure this new flow is covered in testing, and more importantly, if these flows are part of a test automation scenario, the code needs to be adapted to match the new flow.


In addition to the removal of the Home button, which changes the way users engage with background apps, the way the user returns to the Home screen has also changed. Getting back to the Home screen is a common step in almost every test automation flow, so these steps need to account for the change and replace the button press with a swipe-up gesture (a hedged sketch follows below).
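
A minimal sketch of that replacement, assuming an Appium XCUITest session through a recent Appium Java client (the exact start and end offsets are illustrative), uses W3C pointer actions to swipe up from the bottom edge instead of pressing Home:

    import java.time.Duration;
    import java.util.Collections;

    import org.openqa.selenium.Dimension;
    import org.openqa.selenium.interactions.PointerInput;
    import org.openqa.selenium.interactions.Sequence;

    import io.appium.java_client.ios.IOSDriver;

    public class HomeGesture {

        // Swipe up from the bottom edge, replacing the old "press Home" step on the iPhone X.
        public static void goHome(IOSDriver driver) {
            Dimension size = driver.manage().window().getSize();
            int x = size.width / 2;
            int startY = size.height - 5;                 // just above the bottom edge
            int endY = (int) (size.height * 0.4);         // illustrative end point

            PointerInput finger = new PointerInput(PointerInput.Kind.TOUCH, "finger");
            Sequence swipeUp = new Sequence(finger, 1)
                    .addAction(finger.createPointerMove(Duration.ZERO, PointerInput.Origin.viewport(), x, startY))
                    .addAction(finger.createPointerDown(PointerInput.MouseButton.LEFT.asArg()))
                    .addAction(finger.createPointerMove(Duration.ofMillis(300), PointerInput.Origin.viewport(), x, endY))
                    .addAction(finger.createPointerUp(PointerInput.MouseButton.LEFT.asArg()));

            driver.perform(Collections.singletonList(swipeUp));
        }
    }

Wrapping this in a shared helper keeps the change in one place if the gesture needs tuning again later.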

  • Authentication and payment scenarios have also changed with the elimination of the Touch ID option, which was replaced by Face ID. While the iPhone X introduces an innovative digital engagement with face recognition technology, the de facto way today to log into apps, make payments, and more is still fingerprint authentication. Testing both methods is now a quality and dev requirement (a hedged, simulator-only automation sketch follows below). From a scan I ran across the leading apps in the market, there is clear unpreparedness for the iPhone X: most apps will either show the option to log in via Touch ID on their UI, or, if they support Face ID, they will allow users to use it while still showing the unsupported option on the UI and in the app settings.

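For automating these authentication flows, Appium's XCUITest driver exposes biometric helpers that, to my knowledge, work on iOS simulators only; real devices still need manual or lab-level handling. A hedged sketch, assuming a recent Appium Java client:

    import java.util.Map;

    import io.appium.java_client.ios.IOSDriver;

    public class BiometricTestHelpers {

        // Simulator-only: enroll biometrics so the app exposes its Face ID/Touch ID login path.
        public static void enrollBiometrics(IOSDriver driver) {
            driver.executeScript("mobile: enrollBiometric", Map.of("isEnabled", true));
        }

        // Simulate a matching (or failing) Face ID scan while the app is prompting for it.
        public static void sendFaceIdResult(IOSDriver driver, boolean match) {
            driver.executeScript("mobile: sendBiometricMatch", Map.of("type", "faceId", "match", match));
        }
    }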

  • Testing mobile web and responsive web apps in both landscape and portrait mode with the unique iPhone X display is also a clear and immediate requirement. I also found issues, mostly around text truncation and incorrect use of the full screen when displaying web content.


 

In addition, trying to work with the Hulu.com website proved to be a challenge. Most menu content is thrown to the bottom of the screen, below the user controls, making it simply inaccessible. Obviously, the site is not ready for the iPhone X/Safari browser.

Mobile Apps Development

  • Optimize existing iOS apps from both a UI and an authentication perspective. As spotted above, there are clear compatibility issues around the removal of the Touch ID option, which needs to be handled on the UI side of apps when launched on the iPhone X. In addition, scaling UI elements to the new screen, whether for RWD apps or native mobile apps, needs refactoring as well. Apple is offering app developers UI guidelines to help make the changes fast.
  • Leverage advanced capabilities in iOS 11 that best suit the new chipset (A11 Bionic) and the camera sensors to introduce digital engagement capabilities around augmented reality (ARKit APIs) and more. Retail apps and games are surely the most suitable segments to jump on these innovative capabilities and enrich their end users’ experiences.


Bottom Line

The new iPhone X, together with the Android-based Note 8, might be paving the way for a new era of innovation that offers app developers new opportunities to better engage users and increase business value. If quality is not aligned with these innovative opportunities, as shown above, that transformation will be quite challenging, slow, and frustrating for end users.

It is highly recommended for iOS app vendors across verticals to get hands-on experience with the new platform, assess the gaps in quality and functionality, and make the required changes so they are not “left behind” when the innovation train moves on.

Enabling Mobile Testing In a Fast Growing DevOps Reality

Six months ago I launched my first book, “The Digital Quality Handbook”.

The book aims to address the key challenges in assuring high mobile (as well as web) quality by avoiding pitfalls that are common practice in the industry.

I have also recently joined the working group of ISTQB to influence the material in the mobile certification course, where I plan to include insights from the book as well.

In this book, I host top leaders from the industry, touching on the most important aspects of assuring quality in a DevOps reality.

On Amazon, my book is recommended alongside the leading DevOps practitioner books; this is another strong validation of the book’s relevancy and value.

A few highlights from the book are below:

  1. Shifting quality left and right to automatically cover as many tests as possible throughout the release pipeline is key to moving faster and identifying issues earlier in the process (Angie Jones from Twitter, Manish Maturia from InfoStretch, and others provide practitioner-level insights and tips)
  2. Testing on the right platforms and OS versions is key to assuring high quality across different devices (new, legacy, popular) in various locations and environments
    1. In the book I refer to this magazine, which I author on a quarterly basis, and I highly recommend subscribing to receive this free asset upon each release: http://info.perfectomobile.com/factors-magazine.html
  3. Robust automation is achieved through best practices such as building a page object model (POM) and using unique object locators rather than flaky XPaths (see the sketch after this list). I also refer to a free online tool that can help score your object locators as part of your test automation development: http://xpathvalidator.projectquantum.io/
  4. Testing not only via the UI is another key to success: complementing UI testing with API-level testing can reduce testing time, provide faster feedback, and deliver other value. This chapter was actually developed by my twin brother Lior Kinsbruner 🙂 – worth checking it out!
  5. Performance testing and UX are another challenge and key to success. A full section of the book is dedicated to wind tunnel testing and user experience testing (JeanAnn Harrison contributes a lot here, together with Amir Rozenberg).
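
A minimal page object sketch in the spirit of point 3, assuming Appium's page factory support; the resource-ids and accessibility ids are hypothetical:

    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.PageFactory;

    import io.appium.java_client.AppiumDriver;
    import io.appium.java_client.pagefactory.AndroidFindBy;
    import io.appium.java_client.pagefactory.AppiumFieldDecorator;
    import io.appium.java_client.pagefactory.iOSXCUITFindBy;

    // Tests call intent-revealing methods instead of scattering locators around,
    // and the locators prefer stable resource-ids/accessibility ids over brittle XPath.
    public class LoginPage {

        @AndroidFindBy(id = "com.example.app:id/username")   // hypothetical resource-id
        @iOSXCUITFindBy(accessibility = "username")          // hypothetical accessibility id
        private WebElement usernameField;

        @AndroidFindBy(id = "com.example.app:id/password")
        @iOSXCUITFindBy(accessibility = "password")
        private WebElement passwordField;

        @AndroidFindBy(id = "com.example.app:id/login")
        @iOSXCUITFindBy(accessibility = "login")
        private WebElement loginButton;

        public LoginPage(AppiumDriver driver) {
            PageFactory.initElements(new AppiumFieldDecorator(driver), this);
        }

        public void logInAs(String user, String password) {
            usernameField.sendKeys(user);
            passwordField.sendKeys(password);
            loginButton.click();
        }
    }

Tests then call logInAs(...) and never touch locators directly, so a changed identifier is fixed in exactly one place.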

The book was the #1 new best seller on Amazon and is still rocking today, more than six months later. As of today it is #43 in the overall Software Testing books category, which is a great validation and honor for me and the contributors.

 

If you still haven’t gotten a copy of the book, I really encourage you to do so – I am already planning my next journey, so stay tuned 🙂

Mobile Testing Coverage Strategy: OS Families vs. Minor OS Versions?

As the author of the Factors coverage magazine, I am often asked about the operating system (OS) testing strategy.

Some teams state that testing on the latest, latest-1, and latest-2 versions for Android is sufficient, and that for iOS, testing on the latest and latest-1 would satisfy their coverage risks. I actually say that it is not about covering specific minor OS versions, but about sufficiently covering the representative OS families. In this post, I will explain my recommended strategy.

Mobile OS Versions Market Share

As of late September 2017, the Android platform is led by the top five OS families (4.x, 5.x, 6.x, 7.x, and the latest 8.x).

In fact, Android 4.x includes two families – Jelly Bean and KitKat – with the latter being more widely used and popular.

OEMs are very late in updating their smartphones to the latest OS version, and when they do, they only update the most recent smartphone models and not the legacy ones that, in many cases, cover a large user base.

For Android, as we can see above, any app developer that supports the global market cannot disregard a market share of 15.1% like the one KitKat holds – and KitKat is far from being a latest-2 or even latest-4 OS platform. Testing an Android app therefore needs to go quite far back into the history of Android OS versions to provide sufficient, low-risk coverage.

iOS isn’t that simple either. The platform, especially after the recent iOS 11 release, is now split into three major OS families: iOS 9.3.5, iOS 10.3.3, and iOS 11.0. Each iOS family supports quite important devices that were left behind and did not receive the most up-to-date iOS update. As an example, the iPhone 4S, iPad 2, and iPad Mini 2 are stuck on iOS 9.3.5, while the iPhone 5C, iPhone 5, and iPad 4th generation are stuck on iOS 10.3.3.

 

While the above market-share data still doesn’t reflect the new iOS 11 numbers, there are still 9% of iOS 9.x users out there, and there will be at least a similar share on iOS 10.x once most users shift to iOS 11. As with Android, here we also see three major OS families that need to be included in testing strategies.

Beta Testing

Many organizations are naive about market innovation and are not taking advantage of the public beta releases and developer previews coming from both Apple and Google. Android Oreo (8.0) and iOS 11 were available for testing prior to their general availability releases; however, many teams didn’t leverage this, are finding defects late in the game, and in many cases are hearing about them from their customers.

One example is the iOS 11 GA bug that was reported, and another is an Android Oreo OS version-specific defect that impacted many end users.

 

Recommendations

Each customer has its own unique user base and target geographies, and in many cases also access to its users’ analytics.

Customers should follow these guidelines:

  • Monitor your user base and build a test lab that addresses your top users’ device/OS permutations (use your own analytics)
  • Learn and monitor market trends like the above so that, in addition to your own analytics, you cover the major OS families (a market share of 5-10% and above should be strongly considered)
  • Testing on public OS betas is a must. Google Pixel devices, iOS simulators, as well as desktop web beta versions are always available for testing from day one
  • Mix device manufacturers with the selected OS families to maximize coverage (e.g., Samsung XX/Android 6.x, LG XX/Android 5.1.x, etc.)
  • Cover real-environment testing to identify real-life glitches like those mentioned above, which we often see in the market.

 

Happy Testing!

 

Optimizing Mobile Test Automation Across The Pipeline

With the massive innovation that drives the digital market these days, organizations are continuing to develop features, as well as new test code to cover these features.

What I’ve learned is that test code developers often do not stop and look back at their existing test suites to validate whether the new tests being developed are a superset of existing ones. In addition, legacy tests add a continuous load and overhead to your SDLC cycle length if they are not maintained over time.


Many Owners of the Same Problem

Since we live in an agile/DevQAOps world, test code development is not a QA-only problem, but rather everyone’s. Tests are executed throughout the pipeline, from dev to integration and pre/post-production testing.

Using a smart tagging mechanism for your test scenarios (login), suites (App A), and types (unit, regression) can be a good step toward gaining control over your tests.

Without some context, discipline, and continuous, structured validation of the tests, it will become harder as you progress through your SDLC to debug, analyze, and solve defects (it would be like finding the key in the visual mess below).

[Image: Find the Key in the Picture]

Recommended Practices

  • Develop tests with context, tags, and proper annotations that will make sense to you and your team even 12 months after the development day. Make sure that in your execution reports you have a way to filter by these annotations, to view only a given functional area, platform, etc.
  • Match your device-under-test capabilities to the test code and application under test. Make sure, for example, that you run your fingerprint-based tests only on the devices that support the feature (API XX and above) – see the sketch after this list.
  • Perform a test code review at an agreed-upon cadence – in such a review, group your feature-specific test suites and try to optimize, merge, eliminate flakiness, identify missing coverage areas, etc. It gets harder to do as time progresses, so depending on your release cadence and test development maturity, set the right goals – more reviews are better than fewer, and each review will be shorter and more efficient that way, since the delta between reviews will be smaller.
  • Drive joint dev, test, product, and marketing decisions based on data – when you have the ability to get quality analysis across your entire test suites, it is recommended to gather all counterparts and brainstorm on the findings: which tests are most effective, can we shrink the release cycles based on the data, are we missing tests for specific areas, are there platforms that are buggier than others, which tests take longer than others to finish, etc.
  • Optimize your CI and build-acceptance testing – based on the above intelligence, teams can reach data-driven decisions about what to include in their CI as well. Testing in the build cycle via CI should be fast and reliable, with zero false positives. With quality insights into your tests, you can select and certify the most valuable and fastest tests for this CI stage, and thereby shrink the overall process without risking coverage.
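
A minimal sketch of the capability-matching idea from the second bullet, assuming JUnit 5, adb on the PATH, and a hypothetical device.serial system property: the test is skipped, not failed, on devices that do not declare the fingerprint feature.

    import static org.junit.jupiter.api.Assumptions.assumeTrue;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class FingerprintLoginTest {

        // Ask the connected device which features it declares (adb must be on the PATH).
        private static boolean deviceHasFeature(String serial, String feature) throws Exception {
            Process p = new ProcessBuilder("adb", "-s", serial, "shell", "pm", "list", "features").start();
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                return reader.lines().anyMatch(line -> line.trim().equals("feature:" + feature));
            }
        }

        @Test
        @Tag("biometrics")
        void loginWithFingerprint() throws Exception {
            String serial = System.getProperty("device.serial", "emulator-5554"); // hypothetical property
            // Skip (rather than fail) on devices that simply cannot run this scenario.
            assumeTrue(deviceHasFeature(serial, "android.hardware.fingerprint"),
                    "Device does not declare android.hardware.fingerprint");
            // ... drive the fingerprint-based login flow here ...
        }
    }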


Bottom Line

A test is code, and just as you refactor, maintain, retire, and improve your code, you should do the same for your tests. Make sure you are always in control of your tests, and thereby gain control over the quality of your app in a continuous manner.

Happy Testing!

Optimizing Android Test Automation Development

Now that we are a few weeks away from Google I/O, and we understand that the complex Android landscape is becoming even more complex, let’s explore a way Android teams can optimize and plan their test automation across the different platforms and devices.

In the past, I’ve written about the need to connect the 3 layers:

  • Application under test
  • Test code itself
  • Device/OS under test

I related this back to an old patent that I jointly submitted years ago, in the days of J2ME, and I also wrote a chapter about it in my newly published book (The Digital Quality Handbook).

Problem Definition

Android OS families support different capabilities, and the gap grows from one Android SDK to the next. As an example, Android devices older than 6.0 cannot support Android Doze for battery usage optimization, nor can older devices support App Shortcuts (for example, the shortcuts offered by the Google Photos app). These differences introduce a challenge for dev and test teams that innovate and take advantage of these features, since the test code that runs against these features needs to be targeted only at devices that can actually support them.

How can teams sustain a test automation suite that runs specifically on the right devices per supported features?

Proposed Approach

While I don’t have a bulletproof, magic pill to address all challenges that may occur as a result of the above problem, I can surely recommend an approach as described below.

It is important to note that being aware of the problem is a step toward resolving it 🙂

Assess Your App and DUT:

  • Map the different features that your app supports or requires the users to grant permissions for
  • Examine your device test lab and identify the devices that do and do not support these specific features

To manage the above, teams can leverage the following:

  • Use an existing ADB command that extracts the supported features from a connected device:
    • adb shell
      • pm list features

After running the above command, you will get an output that looks like the below …
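
The original screenshot of the output is not reproduced here; as an illustration only, pm list features prints one feature: line per declared capability, roughly along these lines (abridged):

    feature:android.hardware.bluetooth
    feature:android.hardware.camera
    feature:android.hardware.fingerprint
    feature:android.hardware.nfc
    feature:android.software.app_widgets
    ...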

Compare The Outputs

Once you know your DUTs’ capabilities, as well as your app’s features to be tested, you can run a simple output comparison and see what can and can’t be tested. From that point, the optimization is mostly manual – you set up your test execution and CI in the lab accordingly. While this isn’t as simple as it could be, it still offers a sustainable approach and gives both dev and test teams awareness that is useful throughout development, debugging, and testing activities. Comparing the capabilities of a Samsung Note 5/Android 7.0 against an older Samsung device running Android 5.x, an immediate diff out of a larger list that I have shows fingerprint functionality that is supported on the Note 5 but not on the other Samsung device. Such insight should be used when planning feature testing across these two devices (this is just one example). A minimal comparison sketch follows below.
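
A minimal sketch of that comparison, assuming adb on the PATH and placeholder device serials, collects the feature lists of two devices and prints what only the newer one supports:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.Set;
    import java.util.TreeSet;
    import java.util.stream.Collectors;

    public class FeatureDiff {

        // Collect the feature names a given device declares via "adb shell pm list features".
        static Set<String> featuresOf(String serial) throws Exception {
            Process p = new ProcessBuilder("adb", "-s", serial, "shell", "pm", "list", "features").start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                return r.lines()
                        .filter(line -> line.startsWith("feature:"))
                        .map(line -> line.substring("feature:".length()).trim())
                        .collect(Collectors.toCollection(TreeSet::new));
            }
        }

        public static void main(String[] args) throws Exception {
            Set<String> newer = featuresOf("NOTE5_SERIAL");      // placeholder serial for the Note 5
            Set<String> older = featuresOf("OLD_DEVICE_SERIAL"); // placeholder serial for the Android 5.x device

            Set<String> onlyOnNewer = new TreeSet<>(newer);
            onlyOnNewer.removeAll(older);

            // Features present on one device but not the other (e.g. android.hardware.fingerprint)
            // tell you which tests should be routed only to the devices that can run them.
            onlyOnNewer.forEach(f -> System.out.println("Only on the newer device: " + f));
        }
    }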

Bottom Line

As Google continues to innovate and add more features, existing devices and test frameworks will find it hard to close the gaps – a challenge that teams need to be aware of, plan for, and optimize around so that their release vehicles and velocity remain solid.

Happy Optimization!
