The future of testing? Challenges in application automation testing
How change became the driving factor for software testing
Many new technologies and frameworks have come into existence over the last few decades. Keeping up with this pace of change requires ongoing cultural change. Application testing has altered significantly, and the impact has been felt in New Zealand and around the world. Software application testing is essential to detect any defects, mistakes or errors that could lead to the failure of the applications. Testing gives users a higher-quality software product that is accurate, reliable and performs effectively.
Traditionally, testing has been carried out manually. Manual testing involves checking essential features in an application by executing test cases and generating reports. Human judgement and intuition provide accurate visual feedback and good results. In recent years more testing has been automated, which comes in handy for regression testing and repeatable functional test cases. It makes economic sense to automate test execution, writing code for the test cases, when the features and software applications under test are large and complex.
First-generation automated testing tools have been around since the mid-1970s. Test Procedure Language (TPL) was applied to automatically test one or more target program modules (Panzl 1976). Test procedures were complete, self-contained, self-validating and executed automatically, producing a report indicating which test cases failed, if any. Automation testing tools like SilkTest and WinRunner focused primarily on record and replay and supported the Internet Explorer browser. With these first-generation automation testing tools, manual testing still made up a significant portion of the testing.
In 1990, Tim Berners-Lee developed the first web browser, Nexus (previously known as WorldWideWeb), which emerged as a hyperlinked application. This was followed by an internet boom throughout the ’90s, which produced easy-to-use and easy-to-install applications like Netscape (previously known as Mosaic). In later years multiple other browsers such as Firefox, Safari, Opera, Edge and Chrome appeared and took much of Internet Explorer’s market share. This led to cross-browser testing: any change to the application code could lead to differences in performance, layout, accessibility or connectivity from browser to browser on various device/OS combinations.
Open-source tools such as Selenium gained prominence as small and medium-sized businesses started using technology and looked for affordable software. The development of mobile technologies triggered a change in the way users started using applications and interacting with businesses, leading to mobile automation testing. Mobile testing not only has to cover the stock-standard Android (vanilla OS) and iOS versions, but also deal with the custom user interfaces (UIs) from device manufacturers such as Samsung’s One UI, Android Go, OxygenOS, HTC Sense, LG UX and Sony’s Xperia UI. Quality Assurance (QA) engineers also have to deal with a wide range of mobile screen resolutions, which makes mobile testing more complex than web testing.
Integration of multiple systems and third-party communication between large systems started to define the need for API (application programming interface) testing. API testing is a type of software testing that validates communication and data exchange between two separate software systems, focusing on the business logic layer and software architecture. It validates and verifies the features, functionality, consistency, performance and security of the programming interfaces. In API testing, instead of the GUI and standard user input/output tests, software is used to send messages/instructions to the API and record the system’s response.
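As a simple illustration, an automated API test sends a request directly to the service and asserts on the response, bypassing the GUI entirely. The sketch below uses Python's requests library with pytest-style assertions; the endpoint, fields and expected status codes are hypothetical.

```python
# A minimal sketch of an automated API test. The base URL, resource paths
# and expected payload fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # hypothetical system under test


def test_get_customer_returns_expected_fields():
    # Send a request to the business-logic layer, not the GUI
    response = requests.get(f"{BASE_URL}/customers/42", timeout=10)

    # Validate the contract: status code, content type and payload shape
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")

    body = response.json()
    assert body["id"] == 42
    assert "name" in body and "email" in body


def test_create_customer_rejects_invalid_payload():
    # Negative test: the API should reject a payload missing required fields
    response = requests.post(f"{BASE_URL}/customers", json={}, timeout=10)
    assert response.status_code in (400, 422)
```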
All these technological changes meant that testers were constantly having to adapt and develop skills in new tools, technologies and frameworks. Mobile automation and API testing will not be covered in detail in this chapter.
Today, the industry has started adopting agile-based project/product development and cloud-based development/deployment, triggering a lot of disruption in procedures for carrying out quality assurance and migration testing. This has resulted in an approach which uses extensive automation testing to produce a speedier, higher-quality software product. A holistic approach to software testing, where there is a broad understanding among the development and testing teams, works well and forms a cooperative endeavour for success.
For this chapter, I will focus on the journey of Quality Assurance (QA) over the last two decades, particularly the adaptations that have been made to meet the challenges of QA and how change has been the defining factor for software application testing today. I will give my thoughts on how testers should adapt to the Agile development environment and how to select the most appropriate testing approach by using methodical system thinking. I will also give advice on how to know what testing should be automated and what is best left to be done manually. Finally, I will discuss the future of automation testing in a fast-changing software world.
Automation testing today
Automation testing projects have brought benefits and successes when people are aware of and can control automation test challenges effectively. The growing popularity of the DevOps way of delivery and of Agile principles and practices has produced a need for faster releases of quality products. Various automation test frameworks offer benefits such as reusable test scripts, faster defect detection and minimal human intervention while executing the test scripts. Next-generation software testing service providers are producing fully functional test automation frameworks that enable faster and continuous releases, using a simple plug-and-play approach.
Chief Information Officers (CIOs) are accountable for reducing complexity, staying on budget and keeping up with business demands. The Global Test Automation market was valued at approximately $15.87 billion in 2016. The market size is expected to reach approximately $54.98 billion by 2022, growing at a Compound Annual Growth Rate (CAGR) of 23 per cent between 2017 and 2022. To keep up with such a growth rate, organisations and vendors are making investments in tools and technologies to ensure they are ahead in the market and capable of releasing software application products in a timely manner.
The top three challenges in test automation
Over the years, small and large organisations have had an eventful journey in their adoption of automation testing with varying levels of success and numerous challenges. Though the advantages of automation testing have been significant, the industry has also seen hiccups concerning overall delivery, people challenges, process overheads, culture, mindset, skill availability and customised solutions.
To maintain a higher product quality, automation testing has become a necessity for organisations, especially in the areas of unit, functional, regression, integration, cross-browser, extraction, transformation and loading (ETL), performance and security testing. Over the years the overall quality of products has improved, bringing tremendous benefits and successes when people are aware of, and effectively control, test automation challenges.
However, these initiatives have also increased operational overheads, resulted in process-related bottlenecks and introduced interdependencies between teams such as product owners, development, release management, QA and automation. Organisations must now spend a lot more effort setting up collaboration practices and maintaining communication and planning across multiple teams, which is integral to automation success.
In my experience of working globally, I have come across many challenges within the QA domain. In my opinion the top three challenges within New Zealand in trying to achieve test automation are:
- High investment cost
- Choosing the right tool
- Availability of skilled resources
High investment cost
There are multiple tools available in the market and organisations need to think carefully and select the tool that best addresses their needs. There are testing tools for web applications, website security, cross-browser, mobile applications, APIs etc. Then there is also a choice to be made between open source and commercial off-the-shelf tools. Organisations need to be mindful of technological investments.
Open-source tools like Selenium, Katalon, Sahi and Watir can help to reduce licensing costs. However, organisations also need to factor in the associated training and the cost of procuring staff with the relevant skills. It takes time to upskill and develop the ability to deliver full value. Commercial tools like TestComplete, UFT (Unified Functional Testing, formerly HP QuickTest Professional (QTP)), TOSCA and Ranorex have their advantages and disadvantages. SoapUI and Postman provide the ability to write, run, integrate and automate advanced API tests with ease. BrowserStack, SauceLabs and similar services allow an application to be tested across 700+ combinations of browsers, devices and operating systems. Qualys, Veracode, Rapid7, SonarQube and WhiteHat provide dynamic and static application security scans.
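To give a feel for what an open-source automated browser test looks like, the sketch below uses the Selenium WebDriver Python bindings. The URL, element locators and expected page title are hypothetical, and a locally available browser driver is assumed.

```python
# A minimal sketch of a browser test with open-source Selenium WebDriver.
# The login page, element IDs and credentials are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_page_shows_dashboard():
    driver = webdriver.Chrome()  # assumes a local Chrome/ChromeDriver setup
    try:
        driver.get("https://www.example.com/login")

        # Drive the page the way a user would: locate fields, type, click
        driver.find_element(By.ID, "username").send_keys("demo_user")
        driver.find_element(By.ID, "password").send_keys("demo_pass")
        driver.find_element(By.ID, "submit").click()

        # Assert on an observable outcome rather than an implementation detail
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```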
Corporations have invested huge amounts in purchasing testing tools, hiring good talent and developing testing processes to support their automation requirements. Despite this, organisations are struggling to achieve a return on investment (ROI) due to a lack of thoughtful, future-focused leadership. This trend is global and not unique to New Zealand. Most out-of-the-box tools and techniques are not able to support the specific requirements of organisations. Tool vendors and teams face a lot of resistance from various stakeholders in adopting, customising and tailoring solutions to meet organisational needs. Most of the medium and small organisations that try various open-source tools struggle to find staff with the niche skills required and to retain that talent.
Choosing the right tool
There are dozens of open-source and commercial test automation tools and platforms available. One tool alone cannot address all the needs of a reasonably sized organisation, so organisations end up procuring multiple tools. The belief is that test automation requires fewer resources than manual testing, that it is quick and easy to implement and that it has minimal costs. However, there is a need to understand that technology adoption plays a significant role in test automation. The reality is that it takes time to create and build a test automation strategy, not to mention developing the skills to write the scripts that will power it, based on manual test cases.
Industry aspects and challenges around delivery can have a significant impact on adoption. For example, one large corporation could use a combination of open-source and commercial tools and technologies for its application development environment. In trying to get the right mix, organisations will find that some automation testing product vendors exaggerate the features of their wares and oversell their product’s ability to solve problems. A thorough assessment is necessary to select the right tool. Inadequate evaluation and research can lead to procurement of the wrong tool, resulting in increased effort in delivering automation testing and a lower return on investment.
For an organisation to procure the optimum tools for testing, inputs for the business case need to factor in product procurement costs, training, availability of skills within the market, time taken to adapt to the change, alignment with the organisation’s technology footprint and the process that needs to be followed to reap the value of the investment. The recommended approach is to find a tool that is flexible, easy to adapt to organisation business processes and workflows, can support a wide range of applications and languages, and thus enable the QA team to contribute effectively, regardless of their background or skill set.
Availability of skilled resources
In the ever-dynamic ICT world, where new tools, frameworks and methodologies keep changing, it is difficult to keep up with the pace. Staff may be available with skills in only one or two of the required technologies. This is true not only across organisations but also within a single organisation. A tester needs to know the programming language that their team is working with. As automation tests are closely connected with code, an automation tester needs experience and knowledge of Python if the application is written in Python, or of Ruby if the application code is written in Ruby. Automation testers also need to know programming languages such as Java and C/C++, along with relational database query languages such as Structured Query Language (SQL). They also need experience with protocols such as hypertext transfer protocol (HTTP/HTTPS), web technologies such as CSS, extensible markup language (XML) and JavaScript, and API testing. Automation testers also need to know at least one of the test frameworks (e.g., Behaviour Driven, Linear, Data Driven). Getting staff with such a wide range of skills is not easy.
Organisations need to identify staff with the right combination of skills, such as Selenium and Java or Selenium and Python. They need to consider the recruitment time to get that combination. If suitable candidates cannot be found, they need to consider whether internal staff can be trained up and become effective within a short timeframe.
For an organisation, there are often multiple projects in the pipeline, all at different stages of execution. Project managers need to work with resource managers, delivery managers or team leaders to share staff and cover all necessary testing. Allocating staff, prioritising projects, working with different stakeholders and ensuring project delivery is managed within time and budget keeps project managers busy. Resource managers, IT team leads and delivery managers need to ensure that resources are utilised to the maximum to achieve a better return on investment. Training managers and stream/team leads need to ensure staff have the right skills to adopt new technology, understand frameworks and deliver projects to a high quality.
The demand for software testers is high and the supply is not sufficient within the NZ market. A survey of IT employers in 2017 reported that 79% of employers were planning to hire additional staff. However, 29% of employers also said their biggest problem was finding and retaining staff. The New Zealand government is actively encouraging skilled test analysts from overseas to work in New Zealand. A variety of information and communication technology (ICT) test skills appear on Immigration New Zealand’s long-term skill shortage list. Even when organisations find suitable staff it is difficult to retain that talent which adds another set of challenges.
Agile development and testing
Over the last decade, Agile software development has become the need of the hour for both small and large organisations. Highly disciplined, high-performance scrum teams and processes are required to keep up with the demands for testing. Organisations have adapted to this change of mindset and tackled the growing automation needs of the Agile world by adopting the right automation tools and frameworks, such as ATDD and BDD. The benefits show up in key performance indicators (KPIs) like time to market, customer satisfaction index and customer retention index. Even with these investments, organisations are struggling to keep up with the demands of the Agile process for go-to-market (GTM) products with the right quality.
Today Agile dominates the development process, which puts a lot of pressure on the QA process. New product features are being released every two or three weeks. In-sprint automation approaches such as shift-left testing are only in their nascent stages within New Zealand, as organisations are still following traditional Selenium-based automation testing. However, in the next three to five years scriptless automation will mean tests can be created and executed without writing scripts. Subject matter experts will then be able to do their own testing with ease rather than relying on specialist testers, whether automated or manual. Having good business knowledge and understanding the business process will be the key driver for testing.
What should testers do?
Many companies are adopting DevOps, Agile, Scrum and Continuous Integration/Continuous Delivery, and are focusing on quicker delivery of business value. ‘Old school’ testing strategies, approaches, frameworks and professions have started to become a thing of the past. Testers start asking questions like: What do I do to stay relevant? Do I move into another field or become more technical? Will I lose my job?
A manual tester needs to upskill in the technologies and processes described above to become an automation tester, and a developer needs the ability to write test scripts from both the user and tester point of view. While there are definite possibilities to upskill staff, the automation effort needs to be considered.
Adopting automation testing is a big challenge. As previously stated, the adoption of automation within Agile requires a significant mind shift. It is not only about the tools; it is about the whole framework of continuous integration and continuous delivery acknowledging that product features and functionality are ever-changing. The turnaround time to validate, test, analyse, report and accept must be completed within a two-week cycle. Though the outcome can be planned in multiple sprints, incorporating the ever-changing requirements is never as simple and easy as anticipated.
Several test automation frameworks have been developed over the last few years. Frameworks like Linear, Modular Driven, Behaviour Driven, Data Driven and Hybrid each have their own architecture and their own advantages and disadvantages. In this chapter, I will not go into the details of each one. An automation tester needs an understanding of some or all of the frameworks to be used, familiarity with the automation testing tool in use at the organisation, the ability to understand the functionality under test and write test scripts, and strong programming skills.
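As one illustration, the data-driven style separates the test logic from the test data, so the same script can be fed many input/output combinations. The sketch below uses pytest's parametrize feature; the discount rules and the calculate_discount function are hypothetical stand-ins for real application logic.

```python
# A minimal sketch of a data-driven test: the logic is written once and fed
# from a table of inputs and expected outputs. The discount rules are hypothetical.
import pytest


def calculate_discount(order_total: float) -> float:
    """Stand-in for the application logic under test."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0


# In a real framework this table might live in a CSV or spreadsheet maintained by testers
DISCOUNT_CASES = [
    (100.0, 0.0),
    (500.0, 0.05),
    (999.99, 0.05),
    (1000.0, 0.10),
]


@pytest.mark.parametrize("order_total, expected_rate", DISCOUNT_CASES)
def test_discount_rate(order_total, expected_rate):
    assert calculate_discount(order_total) == expected_rate
```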
A tester who wants to make a career in testing should consider getting certified, depending on their level and skills. A pathway for software testing certifications can be found on the ISTQB website. Certification puts testers ahead in the race, though it may not necessarily give any immediate extra benefit with regard to salary. Certification acts as a catalyst to boost knowledge and changes the way testers approach their work. Such certifications help testers to organise better, think strategically and develop a long-term vision.
Selecting a proper testing approach
Automation testing needs the correct approach to be successful. Automation engineers need to answer several important questions:
- How to reduce the effort in building test scripts?
- How to minimise the maintenance of test scripts?
- What will be the lifetime of the test suites?
- How frequently will the functionality change?
- Which metrics will be useful and how to generate useful test reports?
- How to keep manual intervention to a minimum?
- How to revalidate test automation in the Agile development methodology?
- What technology is being used for application development?
It is not easy to address these difficult questions, as there are no straightforward answers. In the current scheme of things there is not one framework, tool or approach that will solve all the problems encountered in automation testing. The only answer is to take a pragmatic approach in order to achieve the best return on investment.
Know when, what and how to automate
A tester can only create an effective test, whether automated or not, when they can visualise how a real user will navigate through the application. Test engineers must have domain knowledge of the applications under test as well as knowing how to create developer-grade automated tests. Experience and knowledge of the most appropriate framework to select helps in getting maximum coverage from automation testing. Regular communication with all the stakeholders within the team and the user community helps to clear up any ambiguities in automated test scripts. An automation tester should think about which test cases are suitable for automation. ‘Just because something can be automated does not mean it should be automated’ is a good mantra here.
In every software development effort, testers need to have a full understanding of the basic concepts and of any changes being made. Testers must be able to see the big picture and use systematic thinking and a structured approach to decide which tests are good candidates for automation. Manual and automation testing complement each other and hence manual testing will always exist alongside all the fresh automation testing tools being introduced.
Be a methodical system thinker
Manual testing is a task which is easy for any IT person to quickly pick up as a skill. However, there is more to being a good tester than just learning the appropriate skillset. Effective testers need to have the proper mindset and a commitment to producing great software products. The whole team needs to agree on what ‘Quality’ is and how best to evaluate it and monitor it.
A tester needs to think differently from a developer; they need an alternative view and should be able to spot the unthinkable. A tester needs to be able to see the big picture and visualise how the system fits together, covering not only the functional aspects of the application but also the non-functional areas. A tester needs to be able to justify and clarify which tests need validation and which test scripts may not be reliable in the long run. Testing is a social activity, and a tester should communicate regularly to be effective. The onus is upon testers to break down the barriers and make sure testing is a collaborative task. Though automation can replace repetitive tasks, the prerequisite to automation is manual test cases. Skills such as inductive reasoning, inference and human intuition will never be replaced by automation.
The future of automation testing
In the fast-paced software development ecosystem, automated tests play an integral part in maintaining the speed and efficiency of the software testing cycle. There are many automation testing tools in the market and more are regularly launched. The market should have matured enough by now to meet 100% of the industry’s automation testing needs. However, only a fraction of application tests are currently covered by automation. This raises a number of questions:
- Why do automation tests fail?
- What are the bottlenecks which impede achieving higher coverage?
- What are the points at which we hit a wall when we try to adopt automation?
Most vendors are confused about this and try to come up with ways to automate object recognition rather than focusing on the execution layer of the testing. With Agile application development, the focus should be on the tasks we do repeatedly. In the continuous delivery pipeline, the feedback loop from testers should be fast, add value and be taken seriously and acted upon, instead of being seen as a bottleneck. Testers are not gatekeepers and should work alongside developers; shorter feedback loops decrease bugs and increase production releases. Instead of reporting on the symptoms, the focus should be on solving the root cause, which could lie in the specifications or in the complexity or redundancy of the code.
In the automation life cycle, an object repository is created once, scripts are created a smaller number of times, and scripts are executed and failures analysed a much greater number of times. If the rule of thumb is to automate the processes we repeat most often, then we should focus on automating the whole process: the way we create, execute, debug and analyse test scripts and results. It is also useful to prepare the set of automated tests early on, so that a working framework is in place at the start and conflicts are avoided later during the test automation phase.
In the late ’90s and early 2000s, Test Driven Development (TDD) emerged as a development practice. TDD starts by designing and developing tests for the unit functionality of an application, thus avoiding duplication and instructing developers to write new code only if an automated test has failed. This helps to reduce debugging effort through simple validation of correctness. However, the major drawback was that tests could have blind spots, since unit tests are typically created by the same developer who is writing the code. TDD also led to a higher maintenance overhead where planning and architecture were poor.
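A minimal sketch of the TDD rhythm, using pytest-style tests: the tests are written first and fail until just enough code is added to make them pass, after which the code can be refactored safely. The identifier format and the is_valid_identifier function are hypothetical.

```python
# A minimal sketch of the TDD cycle. The identifier rule is hypothetical.
import re


# RED: these tests are written first and fail while is_valid_identifier is missing
def test_valid_identifier_is_accepted():
    assert is_valid_identifier("ABC1234")


def test_identifier_with_wrong_length_is_rejected():
    assert not is_valid_identifier("AB12")


# GREEN: write just enough code to make the failing tests pass, then refactor
def is_valid_identifier(value: str) -> bool:
    return bool(re.fullmatch(r"[A-Z]{3}\d{4}", value))
```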
Frameworks like Behaviour Driven Development (BDD) are based on end-user behaviour, thereby including the components and benefits of TDD but avoiding the bias. However, the challenge is that the combination of large applications and multiple users can result in wild guesswork about all the possible permutations of user behaviour.
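The BDD style can be sketched as follows. A real BDD framework (Cucumber, behave or pytest-bdd) would bind Given/When/Then steps to a Gherkin feature file; here the scenario is shown as a comment and the ShoppingCart class is a hypothetical stand-in so the example stays self-contained.

```python
# A minimal sketch of BDD-style structure expressed in plain Python.
#
# Scenario: Customer removes an item from the cart
#   Given a cart containing one "book"
#   When the customer removes the "book"
#   Then the cart is empty


class ShoppingCart:
    """Stand-in for the application behaviour under test."""

    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def remove(self, item):
        self.items.remove(item)


def test_customer_removes_item_from_cart():
    # Given a cart containing one "book"
    cart = ShoppingCart()
    cart.add("book")

    # When the customer removes the "book"
    cart.remove("book")

    # Then the cart is empty
    assert cart.items == []
```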
Moreover, many hours are spent analysing bugs reported after release to production. Though systems do provide some production logs from runtime, it still takes a lot of time to reproduce incidents and bugs reported by end users. At times, these errors do not get closed because the incident cannot be reproduced.
Large, complex, integrated systems require a modular architecture. Various touch points, dataflows, business processes, mergers, acquisitions, compliance requirements and policies add to the complexities inside and outside the system. Within a complex system, creating and managing the architecture of the test software is as important as the core product. With the advent of Agile development, automation testing has become difficult for managers to control. Test automation yields increased testability when the application has platform independence, well-defined modules, published interfaces and organised system layering. An effective modular design and framework needs high cohesion, low coupling and the highest conformity between each unit of testing.
Today, several frameworks such as Page Object Model, Keyword Driven, Data Driven and Hybrid apply loosely coupled scripting. In other words, testers need good knowledge of programming and test automation to implement the functions they are assessing, in order to test thoroughly for correct behaviour and also to allow for change. Testers may find this difficult to grasp at the outset, which can cause a drop in performance at runtime. In most frameworks, test scripts are written during the development of the application and not at runtime. It would be worthwhile to develop a framework that can generate new test scripts that adapt to the flow of user activity at run/execution time. The best way to build a resilient architecture for a test automation framework is to start small, test and review frequently, and only gradually build out the full version. The fundamental backbone of the final test automation framework should be a well-nurtured strategy of framework design and components.
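A minimal sketch of the Page Object Model with Selenium shows the loose coupling in practice: the page class owns the locators and interactions, so a UI change is absorbed in one place rather than in every test script. The URL, locators and expected title are hypothetical.

```python
# A minimal sketch of the Page Object Model. Page details are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Owns the UI details of the login page so tests do not have to."""

    URL = "https://www.example.com/login"
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login_as(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return self


def test_login_shows_dashboard():
    driver = webdriver.Chrome()  # assumes a local browser driver is available
    try:
        # The test reads as user intent; locator changes only touch LoginPage
        LoginPage(driver).open().login_as("demo_user", "demo_pass")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```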
Automation testing tools and frameworks should provide the flexibility to write code for all interfaces. The tool should be able to comprehend which piece of code is to be executed and should have the ability to create a new flow and a new test case at any time. It should be able to analyse the production log for each user, identify their flow and create test cases which execute at runtime. Consequently, this is where artificial intelligence (AI) and machine learning (ML) will play a big role in the future of such automation tools. Bots are already supporting, helping and slowly taking over QA. Quality Engineering (QE) is going to be key, where development teams need to know and consciously test the end product requirement through the different stages of the product lifecycle. In short, QE follows both a top-down and a bottom-up approach, unlike the traditional top-down approach of QA. Organisations must delve deeper to create the ultimate framework design before they dive into adopting AI in software testing, making sure they have the right testing tools for each development stage to keep things agile and flexible, instead of the ‘one size fits all’ approach of traditional QA. AI and ML testing tools will become crucial for ensuring the accuracy of all the complex processes on the horizon. The reason AI is so good at this is that it depends primarily on accurate and repetitive data. Just as not all testing makes sense for automation and not all processes are appropriate for robotics, there are many testing tasks which are better performed by humans.
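A heavily simplified sketch of the log-mining idea: group production log entries into per-user flows that can seed generated test cases. The log format, flows and assertions are hypothetical; real tools would layer AI/ML on top of this kind of pipeline and replay the flows against the application.

```python
# A simplified illustration of mining production logs for user flows.
# The log format and the flows themselves are hypothetical.
from collections import defaultdict

SAMPLE_LOG = [
    "user=alice action=/login",
    "user=alice action=/search?q=shoes",
    "user=alice action=/checkout",
    "user=bob action=/login",
    "user=bob action=/account",
]


def extract_flows(log_lines):
    """Group logged actions into an ordered flow per user."""
    flows = defaultdict(list)
    for line in log_lines:
        user_part, action_part = line.split()
        user = user_part.split("=", 1)[1]
        action = action_part.split("=", 1)[1]
        flows[user].append(action)
    return dict(flows)


def test_observed_flows_start_at_login():
    # Each distinct flow observed in production becomes a candidate test case
    for user, flow in extract_flows(SAMPLE_LOG).items():
        assert flow[0] == "/login", f"{user}'s flow should start at login"
        # A fuller framework would now replay `flow` against the application
```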
Does that mean there is no future for Manual Testers or Automation Testers?
- To perform automation testing, there is a need for skilled developers writing the test scripts from the manual test cases.
- To conduct performance testing, there is a need for skilled performance test engineers to write performance test scripts.
- To conduct cross browser testing, there is a need for tools and skills to configure and develop cross browser testing.
- To conduct vulnerability assessment, there is a need for niche security testing skills.
All the above adds to the costs of Agile development. It also adds effort which is difficult to manage when new features are delivered every two weeks.
To conclude, neither automation nor AI in automation will end the job of software testers. Software testing as a role will still be significant for a long time though testers will be increasingly working with more evolved and powerful tools.
Adesh is co-founder and managing director at QVidalabs, and builds security testing and scriptless automation testing products for the New Zealand and global markets. Adesh has over 25 years of global experience working with C-level executives to maximise business results through developing IT strategy, optimising business processes and implementing technology solutions in a multicultural global environment.
Adesh has a successful track record of delivering optimal results in high-growth environments across India, the Middle East, Africa and Europe. Adesh has varied experience of working with Tier 1 technology consulting organisations such as KPMG, Oracle and Infosys. His focus has been on business results: enhancing return on investment and reducing total cost of ownership for clients, revenue growth, margin improvements, effective team utilisation and, most importantly, high levels of customer satisfaction. Prior to QVidalabs, Adesh had entrepreneurial stints in IT services start-ups such as Sagax Solutions in South Africa and Brahmaand Technologies in New Zealand.
Adesh holds an MBA from the University of KwaZulu-Natal, South Africa, and a Bachelor of Computer Science and Engineering from Savitribai Phule Pune University, India.
Reference
Panzl, David J (1976). ‘Test procedures: A new approach to software verification’, Proceedings of the International Conference on Software Engineering (ICSE), October 1976, pp. 477–485. dl.acm.org/doi/proceedings/10.5555/800253