The Rise of Progressive Web Apps and The Impact on Cross-Browser Testing

If we all thought we had figured out the digital market from an application-type perspective, having seen the rise of mobile and the transformation of the web into responsive web, we should now start getting used to a whole new type of application that will change the entire user experience and offer new web functionality – meet PWAs.

Google.com is a very clear example of such an app, and Apple is about to introduce PWA capabilities in upcoming releases of its WebKit engine.

What are Progressive Web Apps?

Referring to Google's official website dedicated to PWAs, Google defines a PWA as “a new way to deliver amazing user experiences on the web.”

David Rubinstein from SD Times actually adds even more insight into these new app types:

PWAs can use device features like cameras, data storage, GPS and motion sensors, face detection, Push notifications, and more. This will pave the way for AR and VR experiences, right on the web. Imagine being able to redecorate your home virtually using nothing but your phone and a PWA. Pan your camera around a room, then use tools on a website to change wall colors, try out furniture, hang new artwork, and more. It may feel like a futuristic fantasy, but it’s close to reality.

The key idea behind PWAs is to provide a rich end-user alternative to native apps. These apps can be launched from the device home screen, adding layers of performance, reliability, and functionality to a web application without the need to install anything from the app store. In addition, these apps are still JavaScript based, but with additional specific APIs they can work even when there is no internet connection, and that is a huge advantage.

PWAs leverage two main architectural features:

  • Service Workers – give developers the ability to manually manage the caching of assets and control the experience when there is no network connectivity.
  • Web App Manifest – the file within the PWA that describes the app and provides app-specific metadata like icons, splash screens, and more. Below is an example Google offers for such a descriptor file (JSON):
{
  "short_name": "AirHorner",
  "name": "Kinlan's AirHorner of Infamy",
  "icons": [
    {
      "src": "launcher-icon-1x.png",
      "type": "image/png",
      "sizes": "48x48"
    },
    {
      "src": "launcher-icon-2x.png",
      "type": "image/png",
      "sizes": "96x96"
    },
    {
      "src": "launcher-icon-4x.png",
      "type": "image/png",
      "sizes": "192x192"
    }
  ],
  "start_url": "index.html?launcher=true"
}


To check the correctness of your PWA and the entire app, Google offers some tools as part of its documentation, such as the Progressive Web Apps checklist and the Chrome built-in DevTools.

As deeply covered in this great Dzone article, good PWAs also implement the PRPL pattern recommended by Google to enhance performance.

What Are the Implications of PWAs for Cross-Browser Sites and Mobile Apps?

To understand the implications, I recommend dividing the question into the impacted Personas.

  1. Developers
  2. Testers
  3. Business
  4. End-Users

Each of the above personas will have different benefits and implications when adopting this kind of app.

Developers

For existing web developers, this new app type should present a whole new world of innovative opportunities. Since PWAs are still JavaScript-based apps, developers do not need to gain new skills, but rather learn the new APIs offered through service workers and see how their websites can leverage them. Since a PWA runs on a mobile device and can be launched without a network connection and without any installation, it obviously needs to be validated by developers through unit and integration tests.

Going forward, the market envisions these apps impacting native app architecture so that there will be only one type of app that can seamlessly run on both browsers and mobile devices with a single implementation – that will require a heavier lift and rework.

 

Testers

For testers, as in every new implementation, new tests (manual, automated) need to be developed, executed, and fitted into the overall pipeline.

PWAs, in particular, introduce some unique use cases such as:

  • No network operation
  • High performance 
  • Sensors based functionality (Location, Camera for AR/VR and more)
  • Cross-device functionality (like in Responsive, the experience should be the same regardless of the screen size/HW etc.)
  • Adhering to the design and checklist required by Google and soon Apple
  • Accessibility is always a need
  • Security of these apps (with and without being connected to the network)

Business

For the business, the new app type should help increase end-user engagement. A web application that is richer in functionality, performs fast, and can be “always on” through an easy launch from the customer's device home screen should, by definition, increase usage and move the needle for the business. My assumption is that large enterprises are already looking into these types of apps as the next-gen RWD apps.

End Users

At the end of the day, all products aim to gain greater engagement with customers and beat the competition. Obviously, if end users understand the value of these apps and can “feel” it in their day-to-day activities, it will be a clear win-win for both the organization and the customer.

To assure the end-user experience Google envisioned when it first launched this technology three years ago (2015), dev and test teams should continue their continuous testing activities and make sure they cover sufficient platforms, features, and use cases between each app release and each new release of a platform or device.

To conclude this blog, I highly recommend watching the short video and reading the blog post from Mozilla on how PWAs live within Firefox and the different experience users get from such apps (the Wego site within the Firefox browser in the background versus the Wego PWA in the foreground).

Happy PWA Testing!


Mobile, Cross Browser Testing, DevOps and Continuous Testing Trends and Projections for 2018

As we are about to wrap up 2017, it's the right time to get ready for what's expected next year in the mobile, cross-browser testing, and DevOps landscape.

To categorize this post, I will divide the trends into the following buckets (there may be a few more points, but I believe the below are the most significant ones):

  • DevOps and Test Automation on Steroids Will Become Key for Digital Winners
  • Artificial Intelligence (AI) and Machine Learning (ML) tool alignment as part of smarter testing throughout the pipeline
  • IOT and Digital Transformation Moving to Prime Time

 

DevOps and Automation on Steroids

In 2017, we saw tremendous adoption of more agile methods, ATDD, and BDD, with organizations leaving legacy tools behind in favor of faster, more reliable, agile-ready testing tools – ones that can fit the entire continuous testing effort, whether carried out by Dev, BA, Test, or Ops.

In 2018, we will see the above grow to a higher scale, with more manual and legacy-tool skills transforming into modern ones. The growth in continuous testing (CT), continuous integration (CI), and DevOps will also translate into a much shorter release cadence as a bridge towards real continuous delivery (CD).

 

Related to the above, to be ready for the DevOps and CT trend, engineers need to become more deeply familiar with tools like Espresso, XCUITest, EarlGrey, and Appium on the mobile front, and with open-source web frameworks like Google's headless-Chrome project Puppeteer, Protractor, and other WebDriver-based frameworks.

In addition, teams should optimize their test automation suites to include more API and non-functional testing, as the UX aspect becomes more and more important.

Shifting as many tests as possible left and right is not a new trend, requirement, or buzzword – nothing has changed in my mind about the importance of this practice – the more you can automate and cover earlier, the easier it will be for the entire team to overcome issues, regressions, and unexpected events that occur in the project life cycle.

AI, ML, and Smarter Test Automation

While many organizations are seeking tools that can optimize their test automation suite and shorten overall execution time on the “right” platforms, the two terms AI and ML (or deep learning) are still unclear to many tool vendors and are used from varying perspectives that do not always mean AI or ML 🙂

The end goal of such solutions is very clear, and the problem they aim to solve is real: long testing cycles on plenty of mobile devices, desktop browsers, IoT devices, and more generate a lot of data to analyze and, as a result, slow down the DevOps engine. Efficient mechanisms and tools that can crawl through the entire test code and understand which tests are the most valuable and which platforms are the most critical to test on – due to either customer usage or history of issues – can clearly address such pain.

Another angle, or goal, of such tools is to continuously provide more reliable and faster test code generation. Coding takes time, requires skills, and varies across platforms. Having a “working” ML/AI tool that can scan the app under test and generate a robust page object model and functional test code that runs on all platforms, as well as “respond” to changes in the UI, can really speed up TTM for many organizations and focus teams on the important SDLC activities, as opposed to forcing dev and test to spend precious time on test code maintenance.

IOT and The Digital Transformation

In 2017, Google, Apple, Amazon, and other technology giants announced a few innovations around digital engagement – to name a few: better digital payments, better digital TV, AR and VR development APIs, and new secure authentication through Face ID. IoT this year hasn't shown a huge leap forward; however, what I did notice was that for specific verticals like healthcare and retail, IoT started serving a key role in their digital user engagement and digital strategy.

In 2018, I believe the market will see an even more advanced wave in the overall digital landscape, with Android and Apple TV, IoT devices, smartwatches, and other digital interfaces becoming more standard in the industry, requiring enterprises to rethink and rebuild their entire test lab to fit these new devices.

Such a trend will also force test engineers to adapt to the new platforms and re-architect their test frameworks to support more of these screens, either in one script or several.

Some insights on testing IoT, specifically in the healthcare vertical, were recently presented by my colleague Amir Rozenberg – I recommend reviewing the slides below:

https://www.slideshare.net/AmirRozenberg/starwest-2017-iot-testing/ 

 

Bottom Line

Do not immediately change whatever you do today, but validate whether what you have right now is future-ready and can sustain what's coming in the near future, as mentioned above.

If DevOps is already in practice in your organization, fine – make sure you can scale DevOps, shorten release time, increase test and platform automation coverage, and optimize your overall pipeline through smarter techniques.

The AI and ML buzz is really happening; however, the market needs to properly define what it means to introduce these into the SDLC and what success would look like for those who do leverage them. From a landscape perspective, these tools are not yet mature and ready for prime time, which leaves more time to properly get ready for them.

Happy New 2018 to My Followers.

Enabling Mobile Testing In a Fast Growing DevOps Reality

Six months ago I launched my first book, “The Digital Quality Handbook”.

The book aims to address the key challenges in assuring high mobile (as well as web) quality, by avoiding pitfalls that are commonly practiced in the industry.

I have also recently joined the working group of ISTQB to influence the material in the mobile certification course, where I plan to include insights from the book as well.

In this book, I host top leaders from the industry, touching on the most important aspects of assuring quality within DevOps.

Amazon recommends my book alongside the leading DevOps practitioner books, which is another strong validation of the book's relevancy and value.

A few highlights from the book are below:

  1. Shifting quality left and right to automatically cover as many tests as possible throughout the release pipeline is key to moving faster and identifying issues earlier in the process (Angie Jones from Twitter, Manish Maturia from InfoStretch, and others provide practitioner-level insights and tips)
  2. Testing on the right platforms and OSs is key to assuring high quality across different devices (new, legacy, popular) in various locations and environments
    1. In the book I refer to this magazine, which I author on a quarterly basis, and I highly recommend subscribing to receive this free asset upon each release: http://info.perfectomobile.com/factors-magazine.html 
  3. Robust automation is achieved through best practices such as building a page object model (POM) and using unique object locators rather than flaky XPaths (a short page object sketch follows this list). I also refer to a free online tool that can help score your object locators as part of your test automation development: http://xpathvalidator.projectquantum.io/
  4. Testing not only via the UI is another key to success: complementing UI testing with API-level testing can reduce testing time, provide faster feedback, and add other value. This chapter was actually developed by my twin brother Lior Kinbruner 🙂 – worth checking it out!
  5. Performance testing and UX are another challenge and key to success. A full section of the book is dedicated to wind-tunnel testing and user experience testing (JeanAnn Harrison contributes a lot here, together with Amir Rozenberg).
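As a small illustration of the page object practice mentioned in point 3 above, here is a minimal Selenium page object sketch (the page, element IDs, and method names are invented for the example); locators live in one place and a unique ID is preferred over a brittle, position-based XPath:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Minimal page object: locators live in one place, tests only call the methods
public class LoginPage {

    // Stable, unique locators preferred over flaky XPaths
    private static final By USERNAME = By.id("username");
    private static final By PASSWORD = By.id("password");
    private static final By SIGN_IN = By.id("signInButton");

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String user, String password) {
        driver.findElement(USERNAME).sendKeys(user);
        driver.findElement(PASSWORD).sendKeys(password);
        driver.findElement(SIGN_IN).click();
    }
}

If the page layout changes, only this class needs to be updated, not every test that logs in.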

The book was the #1 new best-selling book on Amazon, and it is still rocking today after more than six months. It is #43 as of today in the overall Software Testing books category, which is a great validation and an honor for me and the contributors.

 

If you still haven't got a copy of the book, I really encourage you to get one – I am already planning my next journey, so stay tuned 🙂

Complementing Cross-Browser Testing with Headless Unit Testing Solutions

There is nothing new in the land of cross-browser testing: Selenium as the underlying API layer serves leading frameworks including WebdriverIO, Protractor (Angular-based testing), Nightwatch.js, RobotJS, and many others.

For web application developers who require fast feedback after a code commit or bug resolution, there are various testing options. Some quickly test manually on a set of local or cloud-based VMs, some develop unit tests (QUnit etc.), but there are also very mature cross-browser testing solutions that add more layers of coverage and insight in an automated and easy way.

In a recent eBook that I developed, I cover 10 emerging cross-browser testing tools, with a set of considerations around how to choose the right one or the right mix of them.

Among those 10 tools there is a mix of unit as well as E2E functional testing tools, mostly JavaScript based.

Developers who would like to include, as part of their quick post-commit sanity, a validation of how long the site takes to load can easily add a PhantomJS-based load-time test to their CI post-build acceptance testing, get that visibility after each successful build, match the result against a benchmark, and make decisions.

In a quick test that I ran on the NFL.com website, I was able not only to detect a slow load of 10 seconds but also to identify a long list of errors while the page loaded.

Another powerful capability tools like PhantomJS can offer is the ability both to capture a rendering of a web page at a pre-defined viewport and to generate a page HAR file for network traffic analysis (I am aware that it is not the newest tool, and that Google already provides a newer alternative in headless Chrome, but it is still a valuable, free open-source tool that can add coverage capabilities to any web development team).

So if, as an example, the load time and errors above turn on a red light regarding that site, then with two simple scripts that PhantomJS provides as a starting kit in its Git repository, the developer can address the two use cases above: HAR file generation and a page-rendering screenshot.

Running the page-rendering sample produces a screenshot of the page at the chosen viewport, and the HAR-generation sample produces a HAR file that can then be inspected (I am using the HTTP Archive Viewer add-on for Chrome; other HAR viewers work just as well).

Bottom line

You can download my latest eBook to learn more, but in general – leverage both powerful unit testing tools and traditional E2E tests, since they complement each other and each adds unique value – and it's free!

Happy Testing!

Optimizing Mobile Test Automation Across The Pipeline

With the massive innovation that drives the digital market these days, organizations are continuing to develop features, as well as new test code to cover these features.

What I've learned is that test code developers do not always stop and look back at their existing test suites to validate whether the new tests being developed are a superset of existing ones. In addition, legacy tests are a continuous load and overhead on the length of your SDLC cycles if they are not maintained over time.


Many Owners To The Same Problem

Since we live in an agile/DevQAOps world, test code development is not a QA-only problem, but rather everyone's. Tests are executed throughout the pipeline, from dev to integration and pre/post-production testing.

Using a smart tagging mechanism for your test scenarios (e.g. login), suites (e.g. App A), and types (unit, regression) can be a good step towards gaining control over your tests.

Without some context, discipline, and continuous structured validation of the tests, it will become harder, as your SDLC progresses, to debug, analyze, and solve defects – like trying to find a key in a pile of visual mess.


Recommended Practices

  • Develop the tests with context, tags, and proper annotations that will make sense to you and your team even 12 months from the day of development. Make sure that in your execution reports you have a way to filter by these annotations to view only a given functional area, platform, etc.
  • Match your devices-under-test capabilities to the test code and application under test. Make sure that you target, for example, your fingerprint-based tests only at the devices that support that capability (API XX and above).
  • Perform a test code review at an agreed-upon cadence – in such a review, group your feature-specific test suites and try to optimize, merge, eliminate flakiness, identify missing coverage areas, etc. This gets harder as time progresses, so depending on your release cadence and test development maturity, set the right goals – more reviews are better than fewer, and each will be shorter and more efficient since the delta between reviews is smaller.
  • Drive joint dev, test, product, and marketing decisions based on data – when you can get quality analysis from your entire test suites, it is recommended to gather all counterparts and brainstorm on the findings: which tests are the most effective, can we shrink the release cycles based on the data, are we missing tests for specific areas, are some platforms buggier than others, which tests take longer than others to finish, etc.
  • Optimize your CI and build-acceptance testing – based on the above intelligence, teams can reach data-driven decisions about what to include in their CI as well. Testing in the build cycle via CI should be fast and reliable, with zero false positives. With quality insights on your tests, you can decide on and certify the most valuable and fastest tests to include in CI, and by that shrink the overall process without risking coverage.


Bottom Line

A test is code, and just as you refactor, maintain, retire, and improve your code, you should do the same for your tests. Make sure you always stay in control of your tests, and by that, gain control over the quality of your app in a continuous manner.

Happy Testing!

Trends in Cross Browser Testing and Web Development

Typically, I write a lot about mobile app testing, tools, trends, coverage, and such.

In this blog, I actually want to share some up-to-date trends as I see them in the web landscape.

The web market has shifted a lot over the past years alongside the mobile space. We see clear use of specific development languages, development frameworks, and of course specific test frameworks aimed at testing Angular, jQuery, Bootstrap, .NET, and other websites.

From a dev-language perspective, web front-end developers mostly use the following languages as part of their job:

Source: http://vintaytime.com/premium/top-programming-languages/

The clear trend in web development is that JavaScript is the leading language used by web developers. That is actually not a huge surprise, since if you move on to the top frameworks used by these developers, you will see quite a few that are based on JavaScript.

There is a recent trend of developers shifting to non-AngularJS web development frameworks like Aurelia, React, and Vue.js, which are seeing growing usage and adoption due to considerations such as those listed below (a larger list of pros and cons is in source 1 below). With this trend in mind, and as you'll read in my references below, the newer solutions are still not as complete as AngularJS.

  • Shorter learning curve
  • Simple to use, clean
  • Flexibility
  • Lightweight compared to others (less than half the size of AngularJS e.g.)
  • Better performing
  • Easy to integrate with other front-end stack tools
  • Responsive server-side rendering (Vue.JS supports it, reduces time for users to see rendered content)
  • SEO Friendly
  • Good documentation and Community Support
  • Good debugging capabilities

Source 1: https://www.slant.co/topics/4306/~angular-js-alternatives

Source 2: https://w3techs.com/technologies/details/js-angularjs/all/all

Now that we have seen the leading web development languages and frameworks used these days, let's drill down into what test automation engineers are adopting.

Selenium is without a doubt the leader and the base for most frameworks; however, even in this space we see new and innovative test frameworks such as Casper.js, TestCafe, Buster.js, and Nightwatch.js, together with the traditional WebdriverIO and of course Protractor.

If we examine npm download statistics (source: NPM Trends), there is clear market dominance by Selenium and Protractor, which underneath its implementation uses Selenium WebDriver and supports the Jasmine and Mocha tools.

The advantage of tools like Protractor is that they make it much easier to test websites developed in various frameworks like AngularJS, Vue.js, etc. Such an advantage allows test automation engineers to use them agnostically across multiple websites, regardless of the frameworks they are built with.

It is not quite as easy and rosy as I described above, but it does give a good head start when building the test automation foundation.

There are a few other players in that space aimed at unit testing and headless browser testing (PhantomJS, Casper.js, jsdom, etc.).

As I blogged in the past, from a test automation strategy perspective, teams might find it beneficial and more complete to leverage a set of test frameworks rather than only one. If the aim is to have non-UI headless browser testing together with unit testing and also UI-based testing, then a combination of tools like Protractor, Casper.js, and QUnit might be a valid approach.

I hope you find this post useful and that it helps you “swim” through the hectic tools landscape. As always, it is important to match the tool to the product requirements, development methodology (BDD, agile, waterfall, etc.), supported languages, and more.

Optimizing Android Test Automation Development

Now that we are a few weeks away from Google I/O, and we understand that the already complex Android landscape is becoming even more complex, let's explore a way Android teams can optimize and plan their test automation across the different platforms and devices.

In the past, I’ve written about the need to connect the 3 layers:

  • Application under test
  • Test code itself
  • Device/OS under test

This relates back to a patent that I jointly submitted years ago, in the days of J2ME, and I also wrote a chapter about it in my newly published book (The Digital Quality Handbook).

Problem Definition

Android OS families support different capabilities, and the gap grows from one Android SDK to the next. As an example, Android devices older than 6.0 cannot support Android Doze for battery usage optimization or App Shortcuts (such as those in the Google Photos app). These differences introduce a challenge to dev and test teams that innovate and take advantage of these features, since the test code that exercises them needs to be targeted only at devices that can actually support them.

How can teams sustain a test automation suite that runs specifically on the right devices per supported features?

Proposed Approach

While I don’t have a bulletproof, magic pill to address all challenges that may occur as a result of the above problem, I can surely recommend an approach as described below.

It is important to note that being aware of the problem is a step toward resolving it 🙂

Assess Your App and DUT:

  • Map the different features that your app supports or requires the users to grant permissions for
  • Examine your device test lab and identify which devices do and do not support these specific features

To manage the above, teams can leverage the following:

  • Use an existing ADB command that extracts the supported features from a connected device:
    • adb shell pm list features

After running the above command, you will get an output with one supported capability per line, e.g. feature:android.hardware.fingerprint.

Compare The Outputs

Once you know your DUTs' capabilities, as well as the app features to be tested, you can run a simple comparison of the outputs and see what can and cannot be tested. From that point, the optimization is mostly manual – you set up your test execution and CI in the lab accordingly. While it isn't the simplest process, it still offers a sustainable approach, plus awareness for both dev and test teams that is useful throughout development, debugging, and testing. As an example, comparing a Samsung Note 5 running Android 7.0 against an older Samsung device running Android 5.x, an immediate diff out of a larger list shows the fingerprint functionality that is supported on the Note 5 but not on the other Samsung device. Such insight should be used when planning feature testing across these two devices (this is just one example).
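As a rough illustration of that comparison, here is a minimal sketch in plain Java that runs the adb command shown above against two connected devices and prints the capabilities that only one of them supports (the device serial numbers are placeholders; replace them with the output of adb devices):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.HashSet;
import java.util.Set;

public class FeatureDiff {

    // Runs "adb -s <serial> shell pm list features" and collects the reported features
    static Set<String> listFeatures(String serial) throws Exception {
        Process p = new ProcessBuilder("adb", "-s", serial, "shell", "pm", "list", "features").start();
        Set<String> features = new HashSet<>();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("feature:")) {
                    features.add(line.substring("feature:".length()).trim());
                }
            }
        }
        p.waitFor();
        return features;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder serial numbers for the two devices under comparison
        Set<String> newerDevice = listFeatures("NOTE5_SERIAL");
        Set<String> olderDevice = listFeatures("OLDER_DEVICE_SERIAL");

        Set<String> onlyOnNewer = new HashSet<>(newerDevice);
        onlyOnNewer.removeAll(olderDevice);
        System.out.println("Supported only on the newer device: " + onlyOnNewer);

        Set<String> onlyOnOlder = new HashSet<>(olderDevice);
        onlyOnOlder.removeAll(newerDevice);
        System.out.println("Supported only on the older device: " + onlyOnOlder);
    }
}

The resulting sets can feed directly into the planning of which feature tests run on which devices.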

Bottom Line

As Google continues to innovate and add more features, existing devices and test frameworks will find it hard to close the gaps, and that is a challenge teams need to be aware of, plan for, and optimize around so their release vehicles and velocity remain solid.

Happy Optimization!

Introducing Reporting Test Driven Development (RTDD)

In the era of “[..] Driven Development” trends like BDD, TDD, and ATDD, it is also important to keep in mind the end goal of testing, and that is the quality analysis phase.

In many of my engagements with customers, and also from my personal practitioner experience, I constantly hear the following pains:

  1. Test executions are not broken down contextually and are therefore too long to analyze and triage
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g. which tests are finding more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms
  4. On-demand quality dashboards that reflect app quality per CI job, per app build, per tested functional area, etc. are hard to produce

 

Introducing Reporting Test Driven Development (RTDD)

Aiming to address the above pains, which I am sure are not the only related ones, I came to the understanding that if agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect the value at the end of each test cycle as well as earlier, during the test planning phase.

When teams leverage a test design pattern that assigns their tests custom contextual tags – wrapping an entire test execution or a single test scenario with annotations like “Regression”, “Login”, “Search”, and so forth – suddenly the test suites are better structured, more easily maintained, and can be included/excluded and filtered at the end of execution.

In addition, when the entire suite is customized by tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who get the defect reports after execution can filter and drill down into the root cause in an easier and more efficient manner.

If you think about the above, using annotations as a method to manage test executions and filter them is not a new concept. TestNG with Selenium already supports it (the Guru99 tutorials show such examples): tests can be tagged by group and by priority, so it is just a matter of thinking about such tags from the beginning.
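As a minimal, illustrative sketch (the class, group, and test names are invented for the example), tagging tests with TestNG groups and priorities could look like this:

import org.testng.annotations.Test;

public class LoginTests {

    // Suite- and scenario-level context expressed as groups;
    // priority orders the scenarios within the run
    @Test(groups = {"regression", "login"}, priority = 1)
    public void loginWithValidCredentials() {
        // ... Selenium/Appium steps for a valid login ...
    }

    @Test(groups = {"regression", "login"}, priority = 2)
    public void loginWithInvalidPassword() {
        // ... steps validating the error message ...
    }

    @Test(groups = {"sanity", "search"}, priority = 1)
    public void searchReturnsResults() {
        // ... steps for the search scenario ...
    }
}

These groups can then be included or excluded in testng.xml, and execution reports can be filtered by the same tags.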

Reverse-engineering a large test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left to struggle with the four pains mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

If we divide the various tags into three levels, as described below, they can serve as one reference model that can be used immediately, either through the built-in tagging and annotations coming from TestNG or through other reporting solutions.

Think about an existing test suite that you recently developed. Now, think about that exact test suite tagged according to these three categories:

  1. Execution level tags
    1. This tag can encapsulate the entire build or CI JOB-related testing activities, or it can differentiate the tests by the test framework in which you developed the scripts. That’s the highest classification level of tags that you would use.
  2. Test suite level tags
    1. This is where you start breaking your test factory according to more specific identifiers like your mobile environment, the high-level functionality under test etc.
  3. Logical test level tags
    1. These are the most granular tag identifiers, defined per logical test to make it easy to filter, triage failures, and plan ongoing regressions based on code changes.

As a reference implementation of an RTDD solution – in addition to the basic TestNG implementation, which can be very powerful if used correctly with its listeners, predefined tags, and more – I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.

When using such an SDK with your mobile or responsive web test suites, you achieve both the dashboards shown below and fast defect resolution that drills down by both test case and platform under test.

Code sample: using the Geico RWD site with the Reporting TDD SDK (source: my personal Git repository)

 

Digital Dashboard Example With Predefined ContextTags (source: Perfecto)

 

Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI, unit, and other CI-related tests to extend a legacy test report, a TestNG report, or any other into a more customizable test report that, as I've demonstrated above, can help them achieve the following outcomes:

  • Better structured test scenarios and test suites
  • Use tagging from early test authoring as a method for faster triaging and prioritizing fixes
  • Shift tag based tests into planned test activities (CI, Regression, Specific functional area testing, etc.)
  • Easily filter big test data and drill down into specific failures per test, per platform, per test result or through groups.
  • Eliminate flaky tests through high-quality visibility into failures

The result of the above is a methodology-based RTDD workflow that can be maintained much more easily than before.

Happy Testing (as always)!

Google Mobile Friendly With Perfecto and Quantum

Guest Blog Post by Amir Rozenberg, Senior Director of Product Management, Perfecto


Google recently announced “Mobile-First Indexing”. From Google:

To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results (Source).


More recently, they made the Google Mobile-Friendly tool and guidelines available. A very nice interactive version is available here, and there is also an API (which, thanks to Google, users can exercise before they code). Google also offers code snippets in several languages.

Notes:

  • Google takes a URL and renders it. If you run multiple executions in parallel there’s no point in sending the same URL from every execution because the result would be the same
  • Google basically returns “MOBILE_FRIENDLY” or not; I suggest setting the assert on that
  • The current API differs from the UI in that it only provides the mobile-friendliness result (the UI also gives mobile and web page speed). Hopefully, Google adds that to the response 😉
  • This will probably not work for internal pages as Google probably doesn’t have a site-to-site secure connection with your network.

 

For developers and testers who do not have time, testing mobile friendliness repeatedly will probably simply not happen. That's why I integrated the Google Mobile-Friendly API into Quantum:

  • Added 2 Gherkin commands
# If you navigate directly to this page
Then I check mobileFriendly URL "http://www.nfl.com"
# If you got to this page through clicks
Then I check mobileFriendly current URL
  • Added the Gherkin command support (GoogleMobileFriendlyStepsDefs.java)
  • And the script example is pretty simple:
@Web
Feature: NFL validate

  @SimpleValidation
  Scenario: Validate NFL
    Given I open browser to webpage "http://www.nfl.com"
    Then I check mobileFriendly current URL
    Then I check mobileFriendly URL "http://www.nfl.com"
    Then I wait "5" seconds to see the text "video"
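For readers curious what the step definition behind such a Gherkin command might look like, here is a rough, hypothetical sketch (not the actual Quantum implementation) of a Cucumber-JVM step that calls Google's Mobile-Friendly Test REST endpoint directly and asserts on the MOBILE_FRIENDLY verdict; the endpoint path and response field reflect Google's URL Testing Tools API as publicly documented at the time, the API key is a placeholder, and error handling is simplified:

import cucumber.api.java.en.Then;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MobileFriendlyStepsSketch {

    // Placeholder - a real implementation would read the key from configuration
    private static final String API_KEY = "YOUR_GOOGLE_API_KEY";

    @Then("^I check mobileFriendly URL \"([^\"]*)\"$")
    public void checkMobileFriendly(String pageUrl) throws Exception {
        URL endpoint = new URL(
                "https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run?key=" + API_KEY);
        HttpURLConnection connection = (HttpURLConnection) endpoint.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);

        // The request body simply carries the URL to be analyzed
        try (OutputStream body = connection.getOutputStream()) {
            body.write(("{\"url\": \"" + pageUrl + "\"}").getBytes(StandardCharsets.UTF_8));
        }

        // Read the JSON response and assert on the mobile-friendliness verdict
        StringBuilder response = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
        }
        // Quoted value avoids matching the NOT_MOBILE_FRIENDLY verdict
        if (!response.toString().contains("\"MOBILE_FRIENDLY\"")) {
            throw new AssertionError(pageUrl + " was not reported as mobile friendly: " + response);
        }
    }
}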

 

That's it.

 

Ideas for future improvement:

  • You can automate the validation such that every click would trigger a check with Google behind the scenes.


 

 

Introduction to Android Espresso Testing and Spoon

The Espresso UI test automation framework is Google's de facto testing platform for Android app developers.

The ease with which it can be used from within the Android Studio and IntelliJ IDEA IDEs makes it a powerful tool and differentiates it from open-source cross-platform solutions such as Appium and from commercial tools.

Before drilling into the basic setup and execution of a simple Espresso test, let's first understand some of the basics:

  • Espresso is an Android-only test automation framework (not cross-platform like Appium/Selenium)
  • Espresso requires a separate test APK that runs alongside the application under test
  • Espresso is not a development-language-agnostic framework like Appium (which supports Java, JS, Python, C#, Perl)

Positive Motivations to Use Espresso

  • The Espresso framework is embedded into the entire dev workflow and IDE, and that makes the adoption and leverage higher
  • Espresso can be used to do a quick post-commit validation of a fix or new code implementation, and also as part of a larger test scale within the CI workflow.
  • Espresso provides fast feedback to its users which is a big advantage since it is running on the device/emulator side-by-side with the app
  • Espresso supports annotations to determine the test execution scope (small/medium/large) which organizes the overall testing cycle for both dev and test
  • Espresso has a unique synchronization mechanism at its core, making tests less flaky and more robust: it will move to the next test step in the code only once the view is available on the device screen, as opposed to other tools that can easily fail without explicit timers, validation points, and more.

Basic Espresso Framework Methods:

The Espresso framework allows the automation developer to build a test using three concepts:

  1. View Matchers
  2. View Actions
  3. View Assertions


Putting these together: onView() locates a specific object on the app screen, an action is performed on it, and an assertion is made to validate the end result.

Espresso Setup

The setup within Android Studio is quite simple, and there is plenty of documentation from the Google community around it.

The developer edits the build.gradle file of the application under test to include the Espresso framework dependency, the JUnit version, and the instrumentation runner (AndroidJUnitRunner).


Once the above is done, it is time to create the test class for the corresponding app.

This class will need to import a few libraries that are required by the Espresso test, as shown below.

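A typical set of imports for such a test class might look roughly like this (the package names reflect the Android support-library test packages that were current at the time; newer projects use the equivalent androidx.test packages):

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;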

Test Code Implementation

In order to develop an Espresso UI automation, the developer must have the unique object identifiers for the application under test.

To study the app's objects (which Espresso locates through Hamcrest-based view matchers), the developer can use various methods:

  1. The uiautomatorviewer tool that ships with the Android SDK
  2. All resource IDs are automatically stored in the dynamically generated R.java file
  3. An object spy within tools that support Espresso (Perfecto and others)

Looking at a simple TipCalculator application, you can see through the UIAutomator spy that the text box's object ID is named bill_value.


In the R.java file it appears as a generated integer constant (choose whichever of the methods above you find most comfortable).


When implementing the Espresso test code, we leverage the object ID as part of the onView() method to perform a click prior to entering an input value into that text box, as shown below.

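Putting that into a test class could look like the following sketch (MainActivity is a placeholder for the app's launcher activity, and the class relies on the imports shown earlier):

@RunWith(AndroidJUnit4.class)
public class TipCalculatorTest {

    @Rule
    public ActivityTestRule<MainActivity> activityRule = new ActivityTestRule<>(MainActivity.class);

    @Test
    public void enterBillValue() {
        // Locate the Total Bill text box by its resource ID and click it
        onView(withId(R.id.bill_value)).perform(click());
    }
}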

In order to type a value into the Total Bill text box, we use the second concept provided by Espresso, a view action, via perform():

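For example, adding the typing step to the same test method (the value typed here is arbitrary):

// Type a bill amount into the text box and dismiss the keyboard
onView(withId(R.id.bill_value)).perform(typeText("100"), closeSoftKeyboard());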

Once we are done with the action, we want to assure that the result of that action is as expected, and this is where the developer uses the assertion method check():

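A minimal assertion for this flow could simply verify that the typed value is now displayed in the field:

// Assert that the text box now shows the value that was typed
onView(withId(R.id.bill_value)).check(matches(withText("100")));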

Finally, once the entire test suite is implemented and ready – running the test from Android Studio is very simple.

Select the test class from the Edit Configurations menu in Android Studio and choose Run. Select your target (an ADB-connected device, cloud devices, or emulators).


At the end of a test, a basic test report will be provided to the user.

Running Espresso Tests in Parallel – Using Spoon

No test engineer or developer can rest until the functionality of the app is validated on multiple devices and emulators. For that, there is another widely used tool called Spoon (there are also cloud-based solutions, as mentioned above, that support parallel execution on real devices). This tool collects the test results from all target devices (those visible via adb devices) and aggregates them into one HTML view that can be easily investigated.


In order to leverage Spoon, download and install the Spoon Gradle plugin, then configure it in your build.gradle as described in the plugin documentation.


By default, Spoon will run your tests on all ADB-connected devices; however, if you want to run on specific devices and skip others – for example, to reproduce a defect on one device – you can configure Spoon accordingly.


Good Luck!