Code-Based Test Automation vs. Codeless Automation


As more advanced technologies powered by AI/ML enter the continuous testing landscape, organizations, and especially practitioners, are debating which approach is better, why, and whether they should adopt codeless test authoring solutions.

In this blog, I will walk through the various considerations for switching between, or combining, the two test automation methods.

TL;DR: There is no magic answer to this debate, and neither method is simply good or bad.

Top Considerations

To better address the question of when and why to use either method, here are the top items to consider. They are not listed by importance, since each team might relate to different objectives and priorities:

  • What are the application use cases and flows to automate?
  • Which persona is going to create and maintain these scenarios?
  • What skill sets exist within the team and among individuals for the job?
  • Is the app under test mobile, web, responsive, or desktop?
  • What are the time constraints for the project, and what is the release cadence going forward (weekly/monthly)?
  • Is the test suite meant to be integrated with other tools (CI/CD, frameworks)?
  • Are there any advanced automation scenarios (chatbots, IoT, location, audio, etc.)?
  • What are the cost boundaries (tools, project, lab, etc.)?
  • Is the test suite meant to be executed at large scale?
  • Is the project a new one, or an additional layer of coverage on top of an existing code-based suite (Selenium, Appium, etc.)?

Diving Deeper

Now that we’ve listed some important considerations, let’s explore them in more depth.

For teams that are already working on a project (web/mobile) and have a good working body of code-based test scenarios embedded into their processes, CI/CD, and other triggers, the question deserves careful thought: what is the incentive to change? Are there coverage gaps in the code-based suite? Is there too much noise and flakiness attached to the existing test code? Only with a good incentive like these should teams consider adding codeless test scenarios to their pipeline.

On the other hand, for teams just starting a new project, that is the perfect time to look at the entire team’s skills and decide, based on the technology the project is built on, which tools to use. It might make sense, for example, for a newly created website to combine a Selenium framework, led by SDETs who are Java/JavaScript developers, with business testers who can take some of the load off them via ML-driven codeless Selenium tools.

After covering use cases, the quality of the existing test suite, and new vs. existing projects, let’s also consider the timeframes and budget allocated to the project. Recording a codeless script is, on average, 6-10 times faster than coding the same scenario in Java or another development language, which involves setting up the platform and test environment, coding, debugging, executing at scale, adding assertions, and more. This obviously also translates into cost savings. On the other hand, not all test scenarios can easily be recorded; for some advanced flows, coding might be the better approach and the easier one to maintain over time. This is why it is sometimes better to look at the job to be done before rushing into scripting.

Code-Based and Codeless Creation Are Key for High Test Automation Coverage

Next in the overall debate is the ecosystem and tools landscape within the organization. Introducing a new technology is not easy, is often not well accepted, and is not always justified. In today’s reality, where squad teams work together and consist of a variety of people with varying skills, objectives, and preferences, integrating a new technology should be done with proper consideration, and with proper validation that it “plays nice” with the other existing tools. Codeless tools, in that context, should fill an important gap within the team, integrate well into existing CI/CD and other processes, and not cause duplication of effort or additional overhead.

Lastly for this blog (not for the entire debate), I will touch on the cost of maintaining test automation scripts. This is perhaps one of the most problematic items for any test automation team. Writing a script once and keeping it running across platforms over time is easier said than done. Applications are constantly changing, and so are the platforms under test (mobile devices, OS versions, browsers); therefore, scripts need to be properly maintained to ensure a clean, noise-free pipeline. Codeless tools in many ways address this challenge through self-healing of elements, test steps, and more, as sketched below. The same can also be achieved within code-based projects through advanced reporting and analysis, automated root-cause analysis, and other methods, but this is where codeless shines the most.

Perfecto/TestCraft Object Self-Healing and Weight Mechanism (ML)
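
To illustrate the concept (and only the concept – not Perfecto/TestCraft’s actual implementation), here is a minimal self-healing locator sketch in Java with Selenium. It tries the primary locator first and falls back to weighted alternatives when the primary one breaks; the class name and locator list are illustrative:

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Minimal "self-healing" illustration: candidate locators are ordered by a
// confidence weight; when one breaks, the next candidate is tried.
public class SelfHealingLocator {
    private final List<By> weightedCandidates; // highest confidence first

    public SelfHealingLocator(List<By> weightedCandidates) {
        this.weightedCandidates = weightedCandidates;
    }

    public WebElement find(WebDriver driver) {
        for (By candidate : weightedCandidates) {
            try {
                return driver.findElement(candidate);
            } catch (NoSuchElementException ignored) {
                // The app likely changed; fall through to the next candidate.
            }
        }
        throw new NoSuchElementException("All candidate locators failed");
    }
}

A real ML-driven mechanism would also re-rank the weights over time based on which fallback succeeded; the sketch only captures the fallback idea.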

Bottom Line

I tried to keep this blog short and to the point, and to leave the decision-making between the two methods to practitioners. As written in this post, there are plenty of questions to address before adopting a codeless tool and deciding how to combine it with existing code-based suites. Combining both methods is, in my honest opinion, the way to go in the future, and the way to maximize overall test automation coverage with greater efficiency across teams. Make the decision that fits the project now, and also in the long run.

I will be running a live webinar covering this exact topic, hosted by Shani Shoham from 21. Feel free to register here.

Core Pillars of Stable Test Automation Suites

For those who follow me, you already know that to build continuous testing suites that scale, and that last for more than a few software iterations, teams must follow rigorous processes.

When I say teams, I mean developers, test engineers, and business testers.


This post isn’t about balancing the workload between personas, but rather about continuously sharpening the test suites’ value to the entire team.

As we close a decade and enter a new one, with cool advancements in technologies that include AI/ML, but also challenging platforms like PWAs, foldables, 5G, and IoT, it’s the perfect time to re-evaluate our test management and maintenance processes.

What Causes Test Instabilities?

Especially across the digital landscape, there are quite a few recurring patterns that cause test instabilities or flakiness:

  • There are the scripts and frameworks that we use – in this category we can list object locator strategy, popups (security, system), coding skills, timing and step synchronization, and more (see the synchronization sketch after this list).
  • There are backend issues around services, networking, and up-to-date test data and environments.
  • There are lab-related issues, which include platforms being unavailable or poorly set up for testing (screen is locked, network is disconnected, app isn’t installed, device isn’t in a ready state).
  • Lastly, there is orchestration – a tricky root cause, since there are not many early warnings for these kinds of issues until you execute and scale up. Not having sufficient platforms to test against, or discovering that a platform is in use, is often hard to detect (but not impossible :))
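
Timing and synchronization failures in particular are often fixable with explicit waits rather than fixed sleeps. Here is a minimal Selenium sketch in Java (assuming Selenium 4; the URL and locator are placeholders):

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SyncExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // placeholder URL
            // Explicit wait: poll up to 10 seconds for the element to become
            // clickable, instead of a fixed Thread.sleep() that breaks under load.
            WebElement login = new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.elementToBeClickable(By.id("login"))); // placeholder locator
            login.click();
        } finally {
            driver.quit();
        }
    }
}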

In addition to the above four major categories, there are of course also the software development dynamics, which include advancements in the product, market changes that impact your test scripts and infrastructure, and the external dependencies that you cannot always control (network carriers, third-party services that you depend on, etc.).

Test Automation Certification for Value vs. ROI

While there is no 100% safety net for the above-mentioned root causes, there are a few steps that teams can take and enforce to minimize the risks.

My recommended tips, which should be revisited every 2-3 software releases/iterations, are:

  1. Think with the end in mind when you write a test automation script. Will the test provide value more than just once? Which test suites/types does the test fit (regression, functional, unit, etc.)?
  2. Consider the maintainability of the test script over time: how difficult will it be, and will it be worth the hassle?
  3. Certify the test script before integrating it into any pipeline. Certification should address the stability of the test across multiple platforms (“only works on my machine” is not an option) and its execution duration (less than 5 minutes, please); a test that has detected defects should somehow be scored as a higher-value test than others (see the listener sketch after this list).
  4. Monitor and measure the impact of your test suites, and course-correct at least once a quarter!
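
As one way to enforce the duration rule from tip 3, here is a hedged sketch of a TestNG listener (assuming TestNG 7+, where ITestListener methods have default implementations; the listener name and budget are my own):

import org.testng.ITestListener;
import org.testng.ITestResult;

// Flags any passing test that exceeds the certification duration budget.
public class DurationCertificationListener implements ITestListener {
    private static final long MAX_MILLIS = 5 * 60 * 1000; // 5-minute budget

    @Override
    public void onTestSuccess(ITestResult result) {
        long elapsed = result.getEndMillis() - result.getStartMillis();
        if (elapsed > MAX_MILLIS) {
            System.out.printf("[CERTIFICATION] %s took %d ms - exceeds the 5-minute budget%n",
                    result.getName(), elapsed);
        }
    }
}

Register it on a test class with @Listeners(DurationCertificationListener.class), or in your suite definition, and fail or quarantine offending tests as your certification policy dictates.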

The above image is an example of an external dependency that can impact the performance of your app. Assuming you test your mobile app against real cellular networks, you should acknowledge that not every carrier behaves and performs the same, and that this may impact your overall results. Knowing this, measuring it, and taking it into account can prevent false negatives and other wrong conclusions.

Lastly, and as this section’s title implies, it is time we stop measuring test automation in ROI and dollars, and start measuring clear value and risk mitigation to the business. Test automation value is far wider than the legacy ROI metrics, which simply covered the cost of test creation and execution, regardless of what the test executions actually resulted in.

I recently delivered a session about test automation stability for the OnlineTestConf. You can view the recording and slides here.

Power Of Analytics

While a lot of the above might be clear to many practitioners, what I often see in the market is that the time element prevents teams from actually seeing what’s wrong with their testing infrastructure. Teams invest a lot of money, resources, and of course time into building a cool set of testing suites; however, once they’re done, the autopilot steps in, and until something breaks, these suites remain blind spots. This, again, is a legacy practice; in the modern DevOps/Agile/continuous testing era, teams ought to monitor and gain maximum visibility into their test code continuously!

As the above visual suggests, only when you look into the reports in a smart way can you improve quality, deliver fast and up-to-date feedback, and, yes, get some ROI (in the form of value) from your investment.

Imagine you could, on demand, answer questions like:

  • What are my most problematic test scenarios?
  • What are my most flaky or buggy platforms (mobile/web)?
  • Which job within my CI is the longest?
  • Which test cases should be retired or placed in a “blocked” bucket for maintenance (due to popups, objects, or other issues)?

Knowing these answers on demand, and perhaps before even clicking the RUN button, could be a major productivity boost to your entire cycle, couldn’t it?

Below are two real-life examples from the Perfecto tool, but you can imagine similar implementations elsewhere. Based on AI and smart algorithms, the tool scans the test reports for pre-defined patterns like popups, elements not found, platforms in use, and others, but it also allows teams to customize these and add their own unique, relevant root causes to better slice and dice the test data.
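
In the same spirit (and not Perfecto’s actual algorithm), here is a minimal sketch of pattern-based failure classification in Java; the patterns and buckets are illustrative assumptions:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Buckets raw failure messages into root-cause categories by known patterns.
public class FailureClassifier {
    // Pattern fragment -> root-cause bucket (real tools learn and expand these).
    private static final Map<String, String> PATTERNS = new LinkedHashMap<>();
    static {
        PATTERNS.put("NoSuchElementException", "Object locator issue");
        PATTERNS.put("unexpected alert", "Popup interference");
        PATTERNS.put("device in use", "Lab/orchestration issue");
        PATTERNS.put("timed out", "Timing/synchronization issue");
    }

    public static Map<String, Integer> classify(List<String> failureMessages) {
        Map<String, Integer> buckets = new LinkedHashMap<>();
        for (String message : failureMessages) {
            String bucket = "Unclassified";
            for (Map.Entry<String, String> p : PATTERNS.entrySet()) {
                if (message.contains(p.getKey())) {
                    bucket = p.getValue();
                    break;
                }
            }
            buckets.merge(bucket, 1, Integer::sum);
        }
        return buckets; // e.g., {Object locator issue=12, Popup interference=3, ...}
    }
}

Counting failures per bucket like this is what turns a pile of red tests into an actionable maintenance plan.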

Bottom Line

As we wrap up a decade of software development and testing, it is perhaps time to take our test automation a level up and inject smart algorithms, processes, and more, so we are always on top of what we build and can improve whenever there is a need.

Happy Testing!

Getting Started With Headless Browser Testing

[Guest Blog by Uzi Eilon, CTO, Perfecto]

The “shift left” trend is actually happening: developers, as part of the DevOps pipeline, need to test more and add more test automation in order to release faster.

In addition, those tests are almost the last barrier before production, because traditional testing is going away.

In this situation, standard unit tests are not good enough, and E2E tests are complicated and require long setup and preparation time.

This is the reason both Google and Mozilla released new JS headless browsers to help their developers execute automated tests.

The same happened in the mobile area, where Apple and Google released XCUITest and Espresso.

Headless browsers provide developers with the following capabilities:

  • Same language, same IDE, same working environment:
    Most web developers work with JS, so these browsers are JS platforms; to add a new test, you open a new class and write standard JS code.
  • Fast feedback and execution:
    These tests need to execute fast (sometimes on every commit); these browsers cut out the UI and rendering “noise”, connect to the element directly, and run very fast.
  • Easy to set up:
    Developers’ time is expensive, and developers will not adopt complicated processes for testing; the setup of these tools is a simple npm installation.
  • Access to all the DevTools capabilities:
    Developers need more details; these tools give access to all the DevTools data, including accessibility, network, logs, security, and more.
    Smart tests can be very powerful and cover not only functionality but also efficiency.

In order to understand more, I played with Puppeteer, and I’m happy to share my thoughts with you.

Installation

Very simple:

npm i --save puppeteer

Documentation

There are not a lot of examples or discussions about specific issues, but I did find the API documentation, which contains everything I was looking for.

Object identification

Intuitive – I connect to my objects the same way I would in any JS code.

Example:

  • By id: page.type('#firstName', 'Uzi');
  • By class: page.type('.class', 'Uzi');

Sync and waiting for elements

In this case, I have to admit, I struggled with the standard wait-for-navigation command; it was not stable:

await page.waitForNavigation({waitUntil: 'load'})

In the end, I used the following:

await page.waitForSelector('#firstName', {visible: true}).then(() => {
  // do the actions per page
  page.screenshot({path: 'then.png', fullPage: false});
});


UI

As part of my test, I tried to verify the screen by taking a screenshot. I liked the way I could change the browser UI capabilities and configure my page:

const fullScreen = {
  deviceScaleFactor: 1,
  hasTouch: false,
  height: 2400,
  isLandscape: false,
  isMobile: false,
  width: 1800
};
await page.setViewport(fullScreen);
// Note: fullPage is a screenshot option rather than a viewport option:
await page.screenshot({path: 'full.png', fullPage: true});


Other DevTools options

It is very easy to use; for example, I wanted to see all the frames in a page:

// Recursively prints the frame tree, starting from the page's main frame.
function dumpFrameTree(frame, indent) {
  console.log(indent + frame.url());
  for (const child of frame.childFrames()) {
    dumpFrameTree(child, indent + '  ');
  }
}
dumpFrameTree(page.mainFrame(), '');


Summary:

Using a headless browser like Puppeteer was very easy and intuitive; it felt natural to add it as part of my testing code.

In addition, setting up the headless browser environment and executing was very simple and fast.

On the less convenient side, I found that to get the results directly into the CI, one needs to add more scripting code or use other execution methods.

Lastly, this method is still ramping up, hence it has some small bugs in a few features and also lacks documentation and samples; however, as an early testing tool for white-box/unit testing, it is very promising and well-positioned to complement tools such as Selenium. As a matter of fact, other browser vendors, such as Mozilla and Microsoft, are taking the same approach and investing in headless browsers.


P.S.: If you want to learn more about the growing technologies and trends in the market, I encourage you to follow my podcast with Uzi Eilon, called Testium (episode 6 is fully dedicated to this subject).

Introducing Reporting Test Driven Development (RTDD)

In the era of “[…] Driven Development” trends like BDD, TDD, and ATDD, it is also important to keep in mind the end goal of testing: the quality analysis phase.

In many of my engagements with customers, and also from my personal practitioner experience, I constantly hear the following pains:

  1. Test executions are not broken down by context, and are therefore too long to analyze and triage.
  2. Planning test executions based on trends, experience, and insights is a challenge – e.g., which tests find more bugs than others?
  3. Dealing with flaky tests is an ongoing pain, especially around mobile apps and platforms.
  4. On-demand quality dashboards that reflect app quality per CI job, per app build, per tested functional area, etc., are hard to produce.


Introducing Reporting Test Driven Development (RTDD)

In an attempt to address the above pains, which I’m sure are not the only related ones, I came to the understanding that if Agile/DevOps teams start thinking about their test authoring and implementation with the end in mind (that is, the test reports), they can collect the value at the end of each test cycle, as well as earlier, during the test planning phase.

When teams can leverage a test design pattern that assigns their tests custom contextual tags, wrapping an entire test execution or a single test scenario with annotations like “Regression“, “Login“, “Search“, and so forth, the test suites suddenly become better structured and more easily maintained, and they can be included/excluded and filtered through at the end of execution.

In addition, when the entire suite is customized by tags and annotations, management teams can easily retrieve on-demand quality dashboards and stay up to date with any given software iteration.

Finally, developers who get the defect reports after execution can filter and drill down into the root cause in an easier, more efficient manner.

If you think about it, the use of annotations as a method to manage test executions and filter them is not a new concept.

TestNG Annotations with Selenium Example (source: Guru99)
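
(The original screenshot is not reproduced here; as a stand-in, here is a minimal sketch of the same idea in Java – TestNG’s built-in priority and groups attributes on a Selenium test. The class, URL, and test names are illustrative.)

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginTests {
    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new ChromeDriver();
    }

    // Tags are declared at authoring time: priority orders the execution,
    // and groups make the test filterable ("Regression", "Login", ...).
    @Test(priority = 1, groups = {"Regression", "Login"})
    public void openLoginPage() {
        driver.get("https://example.com/login"); // placeholder URL
    }

    @Test(priority = 2, groups = {"Regression", "Login"})
    public void submitCredentials() {
        // ... fill in the form and assert on the landing page ...
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}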

As seen above, there are supported ways to tag specific tests by their priority; it is just a matter of thinking about such tags from the beginning.

Reverse-engineering a large existing test suite is painful, hard to justify, and most often too late, since by then the product is already out there and the teams are left to struggle with the four consequences mentioned above.

RTDD is all about putting structure, governance, and advanced capabilities into your test automation factory.

If we examine the following table, which divides the various tags across three levels, it can serve as a reference that can be used immediately, either through the built-in tagging and annotations coming from TestNG or through other reporting solutions.

As can be seen in the above table, think about an existing test suite that you recently developed. Now, think about that exact test suite, tag-based according to the three categories below (a code sketch follows the list):

  1. Execution-level tags
    1. These tags can encapsulate the entire build or CI-job-related testing activities, or differentiate the tests by the test framework in which you developed the scripts. That is the highest classification level of tags that you would use.
  2. Test-suite-level tags
    1. This is where you start breaking your test factory down by more specific identifiers, such as your mobile environment or the high-level functionality under test.
  3. Logical-test-level tags
    1. These are the most granular tag identifiers, which you would define for each of your logical test steps to make it easy to filter, triage failures, and plan ongoing regressions based on code changes.
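
Here is a hedged sketch of how the three levels might map onto TestNG groups (the level prefixes are my own convention, not a TestNG feature):

import org.testng.annotations.Test;

// Execution-level tag on the class: which CI job owns these tests.
@Test(groups = {"exec:nightly-ci"})
public class CheckoutSuite {

    // Suite-level tags: environment and functional area under test.
    @Test(groups = {"suite:mobile-web", "suite:checkout"})
    public void loadCart() { /* ... */ }

    // Logical-test-level tags: granular steps, easy to filter and triage.
    @Test(groups = {"suite:checkout", "logical:payment", "logical:validation"})
    public void rejectExpiredCard() { /* ... */ }
}

At run time, your suite definition includes or excludes these groups (for example, running only suite:checkout), which is how tagging turns into a planning and filtering tool.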

As a reference implementation of an RTDD solution, in addition to the basic TestNG implementation, which can be very powerful if used correctly with its listeners, pre-defined tags, and more, I would like to refer you to an open-source reporting SDK that enables you to do exactly what is described in this post.

When using such an SDK with your mobile or responsive web test suites, you get both the dashboards seen below and fast defect resolution that drills down by both test case and platform under test.

Code Sample: Using Geico RWD Site with Reporting TDD SDK (Source: My Personal Git)


Digital Dashboard Example With Predefined ContextTags (source: Perfecto)


Bottom Line

What I have documented above should allow managers, test automation engineers, and developers of UI, unit, and other CI-related tests to extend a legacy test report, a TestNG report, or any other format into a more customizable test report that, as demonstrated above, can help them achieve the following outcomes:

  • Better-structured test scenarios and test suites
  • Tagging from early test authoring as a method for faster triaging and fix prioritization
  • Turning tag-based tests into planned test activities (CI, regression, specific functional-area testing, etc.)
  • Easy filtering of big test data, drilling down into specific failures per test, per platform, per test result, or by group
  • Eliminating flaky tests through high-quality visibility into failures

The result of the above is a methodological RTDD workflow that can be maintained much more easily than before.

Happy Testing (as always)!

Selenium Is the New Testing Tool Standard

Seems like the debate in the world of test automation tools is over.

If, a few years back, HP QTP/UFT (formerly WinRunner) was the standard and most commonly used tool for test automation in the QA space, those days are over.

The shift toward Agile, DevOps, and similar trends, together with a digital transformation that requires multi-platform testing of mobile, web, and IoT in very short timeframes, has changed the tools landscape and the testing requirements.

See below a snapshot of the most-required testing tools, which shows that the shift already started in 2011, when Selenium passed the HP tools in market adoption.

qtp vs selenium

Source: http://www.seleniumguide.com/

The requirement today is that testing be done as early as possible in the project life cycle (SDLC), and to enforce this process, developers ought to play a significant role – testing is now developed and executed by all Agile team members, including developers, testers, ops people, and others.

In order for this shift and adoption to grow, the tools need to be tightly integrated into the developers’ environments (IDEs), which in the digital space might be Eclipse, Android Studio, Visual Studio, Xcode, or cross-platform IDEs like PhoneGap or Titanium.

An additional aspect of the adoption of test frameworks such as Selenium and Appium lies in their open-source nature. The flexibility of such open-source tools, which developers can extend according to their needs, is a great advantage compared to closed testing tools such as UFT, which are disconnected from the IDEs and development environments.

We shall continue to monitor the tools space, but it seems that open-source tools are becoming the standard for Agile and DevOps practitioners, who find them suitable for their shift-left activities, for keeping up with market dynamics and competition, and as great enablers of maintainable quality and velocity.

For a heads-up on the future of Selenium, and on how the efforts to make the web browser drivers (Chrome, Firefox, IE, etc.) standard and managed by the browser vendors are progressing, refer to this great session (courtesy of Applitools):

http://testautomation.applitools.com/post/120437769417/the-future-of-selenium-andreas-tolfsen-video
