The Testing, Tools, and Frameworks Zone encapsulates one of the final stages of the SDLC, ensuring that your application and/or environment is ready for deployment. From walking you through the tools and frameworks tailored to your specific development needs to leveraging testing practices that evaluate and verify your product or application does what it is required to do, this Zone covers everything you need to set yourself up for success.
With cy.intercept(), you can intercept HTTP requests and responses in your tests and perform actions like modifying the response, delaying it, or returning a custom response. When a request is intercepted by cy.intercept(), it is prevented from being sent to the server; instead, Cypress responds with the mock data you provide. This allows you to test different scenarios and server responses without actually having to make requests to the server.

Before network request interception was available, one of the main challenges was that network-related issues were difficult to debug and diagnose. Developers needed more visibility into what was happening with network traffic between a client and a server. Intercepting network requests provides insight into the network traffic generated by the application. Without this capability, troubleshooting issues becomes more complex and time-consuming, and the team may not have the necessary information to identify the cause of a problem, which can delay the testing process. Moreover, QA teams had little access to the requests and responses transmitted between the client and server because they could not intercept and examine network data, which made it challenging to comprehend the application's behavior.

Many tools can be used for intercepting network requests. Cypress is one of the most popular automation testing frameworks with this capability. cy.intercept() is a method provided by Cypress that allows you to intercept and modify network requests made by your application. It enables you to simulate different server responses or network conditions to test how your application handles them, which can be very useful when writing end-to-end tests.

To use the cy.intercept() method, call it within a Cypress test like this:

cy.intercept(url, options)

The url parameter specifies the URL of the network request you want to intercept, and options is an object that can be used to specify additional options, such as the response to return, the status code to use, and so on.

Before deep diving into using Cypress intercept for handling network requests, let's first understand the nuances of network testing while performing Cypress testing.

What Are Network Requests?

Network requests refer to the exchange of data between a client and a server over a network. When testing web applications, it can be important to verify the request metadata, such as headers, cookies, authentication tokens, and other information sent with the request. In testing, network request metadata can be used to verify that the correct headers and cookies are being sent with the request, to ensure that the data is being sent in the correct format, and to verify that authentication tokens are being set correctly. Tools such as Cypress provide APIs for intercepting and inspecting network request metadata, making it easy to test the behavior of network requests and ensure that they are being handled correctly.
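As a minimal sketch of such a metadata check, the test below spies on a request and asserts on its headers. The endpoint, page, and header are illustrative placeholders, not part of any example application from this article:

JavaScript

it('sends an authorization header with the profile request', () => {
  // Spy on the request (no stub body provided), so the real request still goes out
  cy.intercept('GET', '/api/profile').as('getProfile')
  cy.visit('/dashboard')
  // Inspect the metadata that was actually sent with the request
  cy.wait('@getProfile').then((interception) => {
    expect(interception.request.headers).to.have.property('authorization')
  })
})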
Network requests can be made using the HTTP (Hypertext Transfer Protocol) or HTTPS (Hypertext Transfer Protocol Secure) protocols. HTTP is the traditional protocol for sending and receiving data over the Internet. It's a simple, text-based protocol for exchanging information between clients and servers. HTTPS is a secure version of HTTP that uses SSL/TLS encryption to protect the data transmitted between the client and the server. HTTPS protects sensitive data, such as passwords, credit card information, and other personal information. When you make an HTTPS request, your browser establishes an encrypted connection with the server, and all data exchanged between the two parties is encrypted so that third parties cannot intercept and read it. This is crucial to how the Internet works and allows data transfer between devices. Here's an example:

A user opens a web browser and types in the URL for a website, such as "www.example.com."
The browser sends a network request to the server hosting the website, asking for the HTML, CSS, and JavaScript files that make up the website.
The server receives the request and sends back the requested files.
The browser receives the files and uses them to render the website on the user's screen.
The user interacts with the website by clicking a button or filling out a form.
The browser sends another network request to the server with additional data, such as the information the user entered in the form.
The server processes the request and returns the appropriate response, such as confirming the form submission or sending back data for the next page of the website.

In short, a network request is an exchange between a client (the web browser) and a server (the website host): the client requests resources, the server sends back the requested information, and the process repeats as the user interacts with the website.

What Is Intercepting Network Requests?

Intercepting network requests refers to intercepting and inspecting the traffic between a client and a server as they communicate over a network. In web development, this usually means intercepting HTTP requests and responses. Intercepting network requests can be helpful in various situations:

Debugging and troubleshooting network issues.
Inspecting the request and response payloads.
Modifying requests in real time.

By intercepting and modifying network requests, it is possible to inspect and manipulate various aspects of the communication, such as headers, parameters, cookies, and response data. Interception usually relies on a proxy placed between the client and the server. The proxy intercepts all network traffic between the two endpoints and allows the user or tool to inspect and modify the traffic before forwarding it to its intended destination. With a proxy in place, the client's request is intercepted before it reaches the server; the proxy can analyze and modify the request before forwarding it, and it can also analyze and modify the response before sending it back to the client. One commonly used tool for analyzing network traffic is Wireshark, a free and open-source packet analyzer that lets you capture and analyze network traffic in real time. Another tool for analyzing network traffic is tcpdump, which can capture and filter network traffic for various protocols, including TCP, UDP, ICMP, and more.

Why Intercept Network Requests?

Intercepting network requests can provide many benefits, depending on the context and the reason for the interception. Here are a few examples:

Debugging: By intercepting network requests, developers can easily see what data is being sent and received during different application stages.
This can help to quickly identify and resolve any issues related to the application's network communication.
Testing: Intercepting network requests can be beneficial for automating tests of web applications. Test scripts can be written to simulate user actions and inspect the resulting network traffic, allowing developers to automate the testing of complex workflows and scenarios.
Data Modification: Interception can be used to modify network requests, allowing developers to manipulate the data sent over the network.
Accelerating test execution: By intercepting network requests and returning mock responses, you can reduce the number of requests your application makes to the server.
Security: Intercepting network requests can also be used as a security measure. For example, organizations can monitor network traffic to detect malicious activity, such as a cyber attack.
Modifying network requests: By intercepting network requests, developers can modify the requests made by an application to test different scenarios or to implement new features. By modifying requests and analyzing responses, you can quickly identify and fix issues in your code.

Below are some other benefits of intercepting network requests that are helpful for network teams:

Traffic analysis: By capturing and analyzing network requests, network administrators can gain insights into network usage patterns, identify potential performance bottlenecks, and optimize the network for better performance.
Content filtering: By intercepting network requests, organizations can filter content based on various criteria, such as security policies, bandwidth limitations, or content type.
Monitoring: Interception can also be used for monitoring purposes, for example, to gather statistics about the usage of an application or to gain insights into user behavior.

QA Tools for Intercepting Network Requests

Here are some QA automation testing tools that can be used for intercepting network requests:

Cypress: Cypress can intercept network requests and manipulate their responses. In Cypress, you intercept network requests using the cy.intercept() method, which allows you to intercept and modify network requests and responses.
Playwright: With Playwright, you can intercept network requests using the page.route() method, which allows you to intercept network requests and provide a custom response.
Postman: Postman is a popular API development and testing tool that allows you to inspect and modify network requests. It has a user-friendly interface and a wide range of features, making it a great option for QA automation testing.
Charles: Charles is a web debugging proxy tool that can inspect, debug, and modify network requests. It is widely used by developers to troubleshoot issues with web applications and APIs.
Fiddler: Fiddler is a web debugging proxy tool that allows you to inspect and modify network requests, and it is widely used for QA automation testing.
JMeter: JMeter is a popular open-source load testing tool with features for intercepting and modifying network requests. JMeter can be used to test the performance of web applications and APIs.
Burp Suite: Burp Suite is a powerful and widely used tool for testing web applications. It allows you to intercept and manipulate HTTP requests and responses and provides detailed information about them.
SoapUI: SoapUI is an open-source tool for testing web services, including features for intercepting network requests.
It is widely used for testing SOAP and REST APIs.
Selenium: Selenium is a popular open-source framework for automating web application testing. While Selenium itself does not have built-in capabilities for intercepting network requests, it can be combined with a proxy tool such as BrowserMob Proxy to capture and modify traffic.

Intercepting Network Requests With Cypress

Here are some common use cases for using Cypress intercept for handling network requests.

Mocking APIs
Cypress allows developers to create mock APIs to simulate different server responses. This is useful when testing application behavior under different conditions, such as when the server is down or the response is delayed.

Testing HTTP Requests and Responses
By intercepting network requests, Cypress allows developers to test how the application handles HTTP requests and responses. This includes testing error response handling, response time, and response codes.

Testing Authorization and Authentication
Using Cypress, developers can test how their application handles authentication and authorization by intercepting network requests and passing the authorization tokens. Here are some circumstances under which authorization and authentication come into play when intercepting network requests:

Access Control: Authorization is used to control access to resources such as files, web pages, APIs, and databases. By requiring authorization, only authorized users or systems are granted access to the resource.
Secure Data Transmission: Authentication ensures data is securely transmitted between systems. By authenticating the sender and receiver, data can be encrypted and decrypted only by the intended parties.

Testing for Performance-Related Issues
Cypress can be used to measure the performance of web applications by intercepting network requests and measuring the response time of each request. This can help developers identify performance bottlenecks in their applications and optimize performance.

Using Cypress Intercepts for Handling Network Requests

Cypress is a JavaScript-based end-to-end testing framework that makes writing, running, and debugging tests for web applications easy. It has built-in support for intercepting and stubbing network requests, allowing you to control the data returned from the server and make assertions about the network requests made by your application while performing Cypress end-to-end testing. You can use the cy.intercept() command to intercept network requests in Cypress. The command takes a URL pattern (and optionally a method or a route-matcher object); it will intercept all requests that match, and a route handler can then modify the request, return a response, or let the request continue to the network. Below are the signatures you can use in Cypress to spy on and stub network requests and responses.

JavaScript

cy.intercept(url)
cy.intercept(method, url)
cy.intercept(routeMatcher)

Here's a simple example of how you could use the cy.intercept command to return a fake response for a certain request:

JavaScript

cy.intercept('GET', '/api/data', {
  statusCode: 200,
  body: { data: 'Test Data' }
}).as('getData')

// … Perform your test logic …

cy.wait('@getData')
  .its('response.body')
  .should('deep.equal', { data: 'Test Data' })

In this example, the cy.intercept() command is used to intercept all GET requests to the /api/data endpoint. The static response object passed as the last argument returns a fake response with a status code of 200 and a JSON body of { data: 'Test Data' }.
The cy.wait() command is then used to wait for the request to be intercepted and completed, and the response body is asserted to match the expected value. Before explaining all the methods in detail, let's first see how the cy.intercept() method works.

How Does the cy.intercept() Method Work?

cy.intercept() is a method used to intercept and modify HTTP requests and responses made by the application during testing. This allows you to simulate different network scenarios and test the behavior of your application under different conditions. cy.intercept() sits between the Application Under Test (AUT) and the server, and it can intercept requests to specific URLs or requests made with specific methods (e.g., GET, POST, etc.). The flow works as follows:

The client (browser) initiates a request to the server.
Without interception, the server receives the request and sends back a response, which the client then handles.
With interception, the Cypress test code intercepts the request before it reaches the server.
The test code can modify the request or the response in any way it wants, such as adding headers, delaying the response, or changing the status code.
The test code returns a stubbed response that replaces the actual response from the server.
The client receives the stubbed response and handles it as if it came from the server.

Once we have the response, we can verify the stubbed response with assertions, as the various examples below demonstrate.

Different Ways of Intercepting Network Requests in Cypress

There are various ways to use Cypress intercept for handling network requests:

1. Matching URL

The first way of intercepting a request is by matching the URL. There are three ways of matching the URL.

Interception by matching the exact URL:

JavaScript

it('Intercept by Url', () => {
  cy.visit('https://reqres.in/')
  cy.intercept('https://reqres.in/api/users/').as('posts')
  cy.get('[data-id=users]').click()
  cy.wait('@posts').its('response.body.data').should('have.length', 6)
})

In this example, we are intercepting the complete reqres.in URL. The intercepted request is assigned a named alias, 'posts', using the .as method. The test then waits for the 'posts' request to complete and verifies the length of the response body.

Interception of multiple URLs using pattern matching:

JavaScript

it('Intercept by use pattern-matching to match URLs', () => {
  cy.visit('https://reqres.in/')
  cy.intercept('/api/users/').as('posts')
  cy.get('[data-id=users]').click()
  cy.wait('@posts').its('response.body.data').should('have.length', 6)
})

In this example, we are intercepting any request whose URL matches the /api/users/ pattern. The intercepted request is assigned a named alias, 'posts', using the .as method. The test then waits for the 'posts' request to complete and verifies the length of the response body.

Interception of the URL using a regex pattern:

JavaScript

it('Intercept by regular expression', () => {
  cy.visit('https://reqres.in/')
  cy.intercept(/\/api\/users\?page=2/).as('posts')
  cy.get('[data-id=users]').click()
  cy.wait('@posts').its('response.body.data').should('have.length', 6)
})

In this example, the cy.intercept() method intercepts any URL that matches the regular expression /\/api\/users\?page=2/. The as() method gives a name to the intercepted request, which can later be used with cy.wait() to wait for the response.
In this case, cy.wait('@posts') waits for the intercepted request to complete before proceeding with the test.

2. Matching Method

Another way of intercepting a request is by matching the HTTP method. By default, if you don't pass a method argument, all HTTP methods (GET, POST, PUT, PATCH, DELETE, etc.) will match. Passing a method to cy.intercept() will intercept only requests made with that particular method. Suppose you have provided an interceptor command like cy.intercept('/api/users/'); in that case, requests made with any method (GET, POST, PUT, PATCH, DELETE, etc.) will be matched. But if you pass the method name in the command, as in cy.intercept('GET', '/users?page=2'), then only GET requests are intercepted.

JavaScript

it('Intercept by matching GET method', () => {
  cy.visit('https://reqres.in/')
  cy.intercept('GET', 'api/users?page=2').as('posts')
  cy.get('[data-id=users]').click()
  cy.wait('@posts').its('response.body.data').should('have.length', 6)
})

Another example is a POST request where we manipulate the response by providing data in the body. In the example below, we have provided data in the body, and thus we have mocked the response with it.

JavaScript

it('Intercept by matching POST method', () => {
  cy.visit('https://reqres.in/')
  cy.intercept('POST', 'api/users', (req) => {
    req.reply({
      statusCode: 200,
      body: {
        "name": "John",
        "job": "QA Manager"
      }
    })
  }).as('updateuser')
  cy.get('[data-id=post]').click()
  cy.wait('@updateuser')
})

In the test output, you can see that we have mocked the data by intercepting the POST call.

3. Matching With RouteMatcher

RouteMatcher is a part of the Cypress API that allows you to match specific network requests based on their URL, method, headers, and other attributes. By using a RouteMatcher, you can match requests based on their URL patterns, which provides a flexible way to intercept API requests and test the application's behavior under different conditions.

JavaScript

it('Intercept by RouteMatcher', () => {
  cy.visit('https://reqres.in/')
  cy.intercept({
    method: 'GET',
    url: 'https://reqres.in/api/users/**'
  }, (req) => {
    req.reply({
      statusCode: 200,
      body: {
        data: [
          { id: 7, email: 'tim.Bluth@reqres.in', first_name: 'tim', last_name: 'Bluth', avatar: 'https://reqres.in/img/faces/1-image.jpg' },
          { id: 8, email: 'janet.weaver@reqres.in', first_name: 'Janet', last_name: 'Weaver', avatar: 'https://reqres.in/img/faces/2-image.jpg' }
        ]
      }
    })
  }).as('postdata')
  cy.wait('@postdata').its('response.body.data').should('have.length', 2)
})

In this example, we have used a RouteMatcher to match any GET request to the https://reqres.in/api/users/** endpoint. The ** notation matches any path after /api/users/. We then use the req.reply() function to return a custom response for matching requests. Finally, we load our application and verify that the response has length 2.

4. Pattern Matching

With pattern matching, you can provide a glob pattern string. In the example below, any GET or PATCH request that matches the pattern **/users/** will be intercepted.
JavaScript

it('Intercept by Pattern Matching using glob matching', () => {
  cy.visit('https://reqres.in/')
  cy.intercept({
    method: '+(GET|PATCH)',
    url: '**/users/**'
  }, (req) => {
    req.reply({
      statusCode: 200,
      body: {
        data: [
          { id: 7, email: 'kim.smith@reqres.in', first_name: 'Kim', last_name: 'Smith', avatar: 'https://reqres.in/img/faces/1-image.jpg' },
          { id: 8, email: 'janet.weaver@reqres.in', first_name: 'Janet', last_name: 'Weaver', avatar: 'https://reqres.in/img/faces/2-image.jpg' }
        ]
      }
    })
  }).as('postdata')
  cy.wait('@postdata').its('response.body.data').should('have.length', 2)
})

In the test output, you can see the mocked data displayed under the response body.

5. Stubbing a Response

In Cypress, stubbing a response refers to intercepting a network request made by the application under test and returning a predefined response instead of the actual response from the server. There are two ways to stub a response for a network request.

With a string. Here's an example of how you can use the cy.intercept() method to stub a response for a network request by passing a string in the body:

JavaScript

it('Stubbing a response With a string', () => {
  cy.visit('https://reqres.in/')
  cy.intercept('GET', '**/users/**', {
    statusCode: 200,
    body: 'Hello, world!'
  }).as('getUsers')
  cy.wait('@getUsers')
  cy.get('@getUsers').then((interception) => {
    expect(interception.response.body).to.equal('Hello, world!')
  })
})

With fixture files. Another way of stubbing a response is by using a fixture file: you can mock the data from the fixture file instead of providing the data in the body. Note that the intercept is registered before cy.visit() so the request triggered by the page load is caught:

JavaScript

it('Stubbing a response With Fixture file', () => {
  cy.intercept('GET', 'https://reqres.in/api/users?page=2', { fixture: 'users.json' }).as('getUsers')
  cy.visit('https://reqres.in/')
  cy.wait('@getUsers')
  cy.get('.data').should('have.length', 6)
})

In this example, we are intercepting a GET request to the reqres.in endpoint and responding with a fixture file called users.json. We also use the .as() method to assign the intercepted request to an alias so that we can wait for the response before performing further actions. Assuming you have a fixture file named users.json in your cypress/fixtures directory, this test will verify that the .data element on the page has a length of six, which matches the number of records in the users.json fixture file. In the test output, you can see the data we have mocked using the fixture file.

6. Changing Headers

You can also use the cy.intercept() method to modify header data.

JavaScript

it('Intercept a request and modify headers', () => {
  cy.visit('https://reqres.in')
  cy.intercept('GET', 'https://reqres.in/api/users', (req) => {
    req.headers['Authorization'] = 'Bearer my-token'
  }).as('getUserList')
  cy.wait('@getUserList')
  cy.get('@getUserList').then((interception) => {
    const requestHeaders = interception.request.headers
    expect(requestHeaders).to.have.property('Authorization', 'Bearer my-token')
  })
})

In this example, we intercept a GET request to the reqres.in users endpoint and modify the Authorization header by adding a token value. We give this interception a unique alias using the .as() command so that we can wait for it to complete using cy.wait().
After waiting for the interception to complete, we use the cy.get('@getUserList') command to get the interception object and assert that the Authorization header was modified correctly.

How to Override an Existing Cypress Intercept?

Overriding an existing Cypress intercept allows you to modify or cancel a network request intercept that has already been defined in your test code. This can be useful when you want to change the behavior of an existing intercept, for example, to simulate a different response from a server or to modify the request data in a different way. The main difference between intercepting network requests and overriding an existing intercept is that intercepting allows you to define new intercepts in your test code, while overriding allows you to modify existing ones. Cypress provides a rich API for working with both intercepts and overrides to help you thoroughly test and debug your web applications. The example below shows how you can use the cy.intercept() method to override an existing intercept and modify the behavior of your application during testing.

JavaScript

describe.only('Override an existing intercept example', () => {
  beforeEach(() => {
    cy.intercept('GET', 'https://reqres.in/api/users').as('getUsers')
  })

  it('overrides the response of the /api/users request', () => {
    cy.visit('https://reqres.in/')
    cy.intercept('GET', 'https://reqres.in/api/users', (req) => {
      req.reply((res) => {
        res.send({
          data: [{ id: 1, email: 'test@test.com' }],
          page: 1,
          per_page: 1,
          total: 1,
          total_pages: 1
        })
      })
    }).as('getUsers')
    cy.wait('@getUsers').then((interception) => {
      expect(interception.response.body.data).to.have.length(1)
      expect(interception.response.body.data[0].email).to.eq('test@test.com')
    })
  })
})

In this example, we first define an intercept for the GET /api/users request and give it an alias of getUsers. Then, in the test itself, we override the same request by defining a new intercept with the same alias. In the new intercept, we use the req.reply() function to override the original response and return a new one that includes a single user with an email of 'test@test.com'. Finally, we use the cy.wait() command to wait for the getUsers alias to complete, and then we test the response to ensure it contains the expected data.

Wrapping Up

With stubbing, requests to a network are intercepted and replaced with predefined responses rather than being sent over the network to wait for a real response. However, stubbing can also lead to false positives, as the behavior of the stubbed requests may not accurately reflect the behavior of the actual requests. This can create a false sense of security in your tests and may hide bugs or errors that only occur in an actual network request. On the other hand, not stubbing with cy.intercept() can provide a more accurate picture of the behavior of your application in the real world. By allowing actual network requests to be made and handled accordingly, you can be more confident that your tests accurately reflect the behavior of your application in the wild.
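A useful middle ground here is that cy.intercept() only stubs when you give it a response: if you omit the stub, it merely spies on the request and lets it pass through to the real server, so you keep real-world behavior plus visibility. A minimal sketch, with an illustrative endpoint:

JavaScript

// Spy only: no stub body is provided, so the real request reaches the server
cy.intercept('GET', '/api/users').as('getUsers')
cy.visit('/')
// Assert on the real response without having replaced it
cy.wait('@getUsers').its('response.statusCode').should('eq', 200)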
All We Can Aim for Is Confidence

Releasing features is all about confidence: confidence that features work as expected; confidence that our work is based on quality code; confidence that our code is easily maintainable and extendable; and confidence that our releases will make happy customers. Development teams develop and test their features to the best of their abilities so that quality releases occur within the given timeframe. The confidence matrix depicts four main areas:

The high confidence and small release time area (the area that all development teams strive for)
The low confidence and small release time area
The high confidence and long release time area
The low confidence and long release time area

The first is when we've made a quality release quickly. The second is when we quickly released features that may be buggy. The third is when it took us a while to do a quality release. The fourth is when it took us a while to make a buggy release. Think of the confidence matrix as a return on investment (ROI) matrix in its most basic form, where our return is confidence and our investment is time.

When feature development starts, confidence could be high or low. We may be confident that we know what we must develop and how to do it. I've found that most software projects start in the low-confidence zone: new features can mean new unknowns that result in low confidence. Most importantly, as our development and testing activities continue and our release time approaches the deadline, our confidence should increase. Unfortunately, this is not always the case. To achieve confidence, most teams test and use development best practices. Despite their best efforts, I've seen teams releasing fast or slow with high or low confidence. Teams' confidence may have started low but finished high, or vice versa. This article shares experiences about how teams have tried to gain confidence from testing.

Confidence From Tests Requires Reliable Tests

Tests will either pass or fail. We execute them to get a true picture of the system under test. The system could be a unit or units of code or a complete application. The true picture could be that a new feature is ready to be released or that there are problems that need to be fixed before releasing. Once we've got the true picture, we can make decisions based on testing results and not guesses. How do we know that we've got the true picture? By trusting our testing results. Trusting our testing results means that no matter how many times we execute a test suite, the tests will have no false positives and no false negatives. Tests should not pass accidentally. For example, if out of ten runs they pass five times and fail five times, they are not reliable. Such testing results are as good as guesses and will not give us a true picture of the system under test. A test may be failing for irrelevant reasons while the functionality that it exercises is working as expected. We need reliable tests so that we can trust our test results. No matter how much code we cover with tests, and no matter how fast or slow our tests run, we will get confidence from our testing efforts if and only if our tests are reliable.

Levels of Testing: Speed vs Scope

A simple way to understand scope is the following rule of thumb: large scope means that we cover many lines of code; small scope means that we cover a few lines of code. Traditionally, there are four testing levels. The lowest level is unit testing, followed by integration testing, system testing, and acceptance testing, which is the highest testing level. Unit testing is about making educated decisions about what inputs should be used and what outputs are expected per input. Groups of inputs should be identified that have common characteristics and are expected to be processed in the same way by the unit of code under test. This is known as partitioning, and once such groups are identified, they should be covered by unit tests.
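As a quick illustration of partitioning, consider a hypothetical transfer-fee function (not from the article) where negative amounts, small amounts, and large amounts form three input groups, each covered by at least one unit test:

JavaScript

const assert = require('assert');

// Hypothetical unit under test: invalid for negatives, flat fee of 2 up to 100, free above
function transferFee(amount) {
  if (amount < 0) throw new Error('invalid amount');
  return amount > 100 ? 0 : 2;
}

// One test per partition, plus the boundary between two partitions
assert.throws(() => transferFee(-5)); // partition: invalid inputs
assert.equal(transferFee(50), 2);     // partition: small amounts
assert.equal(transferFee(100), 2);    // boundary value
assert.equal(transferFee(250), 0);    // partition: large amounts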
Unit tests have a small scope. To cover our code thoroughly, we need many unit tests. This is usually not a problem because we can run thousands of them in a few seconds. As we go from lower to higher testing levels, the scope increases and test execution speed becomes an issue. Once a unit of code is defined, we may also define components of code by grouping code units together. Integration testing is about interactions and interfacing between different components. Compared with unit tests, integration tests have a larger scope but are roughly at the same order of magnitude when it comes to test execution speed. At the system level, our product is tested at a large scope. A single system test could cover thousands of units and hundreds of components of code. Such tests take time to execute. If we could build confidence without needing thousands of them, that would be good news. The bad news is that their execution speed is so low that it could prolong our feature releases considerably. Similar to system tests, acceptance testing has a large scope. In some companies, it is performed by customers or company team members at the customer's site. Other companies use acceptance testing as validation testing performed by the customers.

Speed Is Vital

To release a feature, we test to gain confidence that it works as expected, functionally and non-functionally. It takes time to build confidence. We need time to perform development and testing, assess our testing results, and make a decision about releasing or not. Are we good to release, or should we fix the bugs we've found, redeploy to verify that all fixes are OK, and then release? To minimize the feature-release time, we need to minimize at least:

The time it takes to develop the feature: Using coding best practices during development is one way to introduce fewer bugs.
The time it takes to test: We test to find bugs. Are they important? We should fix and redeploy. Are they not important? We could deploy with known issues. There are teams that fix a bug, deploy the bug fix in a testing environment, test that the bug fix works as expected and does not introduce any new issues, and then deploy to production. Others deploy bug fixes directly to production (this is faster but could be riskier).

Release speed is vital. Depending on how much time they had to release a feature, I've seen teams make various decisions in order to handle deadlines. These included:

Features are released without testing while the coding standards used for development are questionable. An example of this is a team that usually started and finished their development efforts in the low confidence area. The team had a hard time understanding why a number of problems arose after their releases. Most importantly, the most critical problems remained under their radar for a long time.
Features are released without testing while other coding standards are met and developers are confident in their code.
There was a team of experienced developers that did not believe in testing. The closest they would get to testing was debugging their code. They were usually between the high confidence/small release time and high confidence/long release time areas in the confidence matrix. Bugs could occasionally fall under their radar, and testers from other teams would be brought in for QA testing when the team was about to release features with rich functionality.
Features are released with just a few unit or integration-level tests but a large number of UI tests. This is a case that I've seen many times. Such teams could fall into any of the four areas of the confidence matrix. When showstopper bugs were found late by the testers, and when fixing them required major rewrites from developers, the team's confidence was low and the release deadlines could be prolonged. Even if no showstoppers were found, testing was a bottleneck. Developers were reluctant to change the code in a number of areas, and each change called for extensive regression testing at the UI level from QA testers. When releasing features with rich functionality, QA testing at the UI level was a bottleneck because the test execution speed was low and the tests were many. UI test automation has helped some teams overcome this problem, while for other teams it gave a smaller ROI than expected.
Features are released with a large number of unit and integration tests and a minimal set of UI tests. Such teams would usually fall in the high confidence/small release time area of the confidence matrix. Bugs may occasionally have gone under the radar, especially for features with rich functionality, but they were usually fixed quickly without side effects. These teams had continuous integration and continuous deployment set up. Their continuous builds were made of unit tests and integration tests. Frequently executing unit and integration tests was the main source of their confidence. A final confidence boost was given by a small number of manual exploratory tests in the UI.
Features are released with a large number of integration tests, a number of unit tests, and a few UI tests. This was the case for teams that used microservices and teams that executed a large number of front-end tests. Some JavaScript front-end developers, for example, were strong believers in the "write tests, not too many, mostly integration" paradigm. Backend developers writing microservices believed that in a world of microservices, the biggest complexity is not within the microservice itself, but in how it interacts with others. As a result, they gave special attention to writing tests exercising interactions between microservices. Such teams usually avoided the low confidence and long release time area of the confidence matrix.

Following good coding standards and best coding practices does not mean that we should not test. In fact, testing is another coding best practice. As this article focuses on testing and not other coding standards, it suffices to mention that testing is always a good idea. However, when developing and testing, our release speed will be affected by our testing speed, too. The testing dynamics of each testing level need to be taken into account in order to get the most value from our testing efforts in the allocated time.

Test Execution Speed

Testing at any level is important and necessary. The lower the testing level, the faster the test execution speed.
I've witnessed at least three ways that test execution speed has affected how development teams work.

To identify what compromises to make: If we must make compromises, we should make an educated decision about what to do and what to avoid. Depending on how much time they had for testing, I've seen teams choose at what testing level they should test. Ideally, if time and costs were not a constraint, we would test at all possible levels. This is because 100% test coverage at the unit level does not mean that we will catch no bugs with integration testing and/or system testing; the same is true for each testing level. However, a test suite of 1,000 unit tests may take an hour to complete, while a UI automation suite with 200 tests may take a day. Although choosing not to test at some level may involve risks, if we have little time to dedicate to testing, we can make educated decisions about which tests we want to run and at what level.

To identify how fast we will get feedback from our tests: The test result is our feedback. Did the test pass? Our feedback is a green light. Did the test fail? Our feedback is a red light. One development team tested the most important functional and non-functional areas of their release first, starting at the testing level where test execution speed was fastest. As a result, showstopper bugs could be found early during testing and hence fixed early, without jeopardizing release time. The main factor that lowered their confidence was showstoppers found late and fixed late, resulting in missed release deadlines. They found that the best way to allocate their testing efforts was to start with quick-feedback testing (unit and integration testing) and, if no showstoppers were found, continue with higher-level testing for the remaining time.

To help identify our testing levels: People often go back and forth about whether particular tests are unit tests or integration tests. Large unit tests could also be considered small integration tests and vice versa. But what are they really, and at what level do they belong? There was a team that shared a definition like, "If a test talks to the database, or if it communicates across the network, or if it involves accessing file systems like editing configuration files, then it's not a unit test." (A minimal sketch of this rule appears at the end of this section.) The reasoning behind this was simple: test execution speed. If a test talked to the database, for example, then it would take longer to execute. Since unit tests are the fastest across all testing levels, the team decided to call low-level tests that performed such time-consuming actions integration tests. Another team used fault detection time as a guide. A test failed. If it took seconds to detect the fault in the code that caused the failure, then the failing test was a unit test. If it took minutes to detect the fault, then the failing test was an integration test. There was a group that used architects and tech leads to write a few integration tests. Their main goal was to ensure that the choreography and orchestration of the architectural components were working. Such tests usually covered 10 to 20% of the code at most and, having a large scope, they were usually slow. In another group, QA and business analysts wrote acceptance tests to achieve a maximum of 50% code coverage. They also wrote a few system tests as final tests of choreography and orchestration. The system tests covered very little of the actual business rules and were the slowest.
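To make that "talks to the database" rule of thumb concrete, here is a minimal sketch; the service, repository, and values are hypothetical, not from any of the teams described above:

JavaScript

const assert = require('assert');

// Hypothetical unit under test: the price lookup is injected, so the data source can be swapped
class PricingService {
  constructor(repo) { this.repo = repo; }
  async discountedPrice(sku, discount) {
    const price = await this.repo.getPrice(sku);
    return price * (1 - discount);
  }
}

// A unit test by that team's definition: no database, network, or file system involved
async function unitLevelTest() {
  const fakeRepo = { getPrice: async () => 100 }; // in-memory stub instead of a real DB
  const service = new PricingService(fakeRepo);
  assert.equal(await service.discountedPrice('sku-1', 0.1), 90);
}
// The same test wired to a real database connection would, by that definition,
// count as an integration test: slower, and crossing a process boundary.

unitLevelTest();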
Wrapping Up

There is a popular debate about what percentage of tests to write at each testing level. I've tried to shift the focus a little bit toward confidence over time. It's all about confidence, and a great deal of it can be achieved by running tests quickly and reliably. Testing closer to the unit/integration level will be quicker and is necessary, but not sufficient. Higher testing levels will also need to be covered, which will probably cost more in execution time, maintenance, and reliability. Let's not forget one of our basic prerequisites for tests to be valuable: tests that pass for the right reasons and fail for useful reasons. I've shared a number of experiences about how different development teams managed their testing efforts, resulting in different levels of confidence over time.
Microservices architecture is an increasingly popular approach to building complex, distributed systems. In this architecture, a large application is divided into smaller, independent services that communicate with each other over the network. Microservices testing is a crucial step in ensuring that these services work seamlessly together. This article will discuss the importance of microservices testing, its challenges, and best practices.

Importance of Microservices Testing

Testing microservices is critical to ensuring that the system works as intended. Unlike traditional monolithic applications, microservices are composed of small, independent services that communicate with each other over a network. As a result, microservices testing is more complex and challenging than testing traditional applications. Nevertheless, testing is crucial to detect issues and bugs in the system, improve performance, and ensure that the microservices work correctly and efficiently. Microservices testing is critical for ensuring a microservices-based application's reliability, scalability, and maintainability. Here are some reasons why microservices testing is essential:

Independent Testing: Each microservice is an independent unit, which means that it can be tested separately. This makes testing easier and more efficient.
Increased Agility: Testing each microservice separately allows for faster feedback and faster development cycles, leading to increased agility.
Scalability: Microservices can be scaled horizontally, which means that you can add more instances of a service to handle increased traffic. However, this requires proper testing to ensure that the added instances are working correctly.
Continuous Integration and Delivery: Microservices testing can be integrated into continuous integration and delivery pipelines, allowing for automatic testing and deployment.

Challenges of Microservices Testing

Testing microservices can be challenging for the following reasons:

Integration Testing: Testing the interaction between multiple microservices can be challenging because of the large number of possible interactions.
Network Issues: Microservices communicate with each other over the network, which can introduce issues related to latency, network failure, and data loss.
Data Management: In a microservices architecture, data is often distributed across multiple services, making it difficult to manage and test.
Dependency Management: Microservices can have many dependencies, which can make testing complex and time-consuming.

Best Practices for Microservices Testing

Here are some best practices for microservices testing:

Test Each Microservice Separately: Each microservice should be tested separately to ensure that it works as expected. Since microservices are independent services, it is essential to test each service independently. This allows you to identify issues specific to each service and ensure that each service meets its requirements.
Use Mocks and Stubs: Use mocks and stubs to simulate the behavior of other services that a service depends on. Mock services are useful for testing microservices that depend on other services that are not available for testing. Mock services mimic the behavior of the missing services and allow you to test the microservices in isolation. (A brief sketch of this practice appears at the end of this article.)
Automate Testing: Automate testing as much as possible to speed up the process and reduce human error. Automated testing is essential in a microservices architecture.
It allows you to test your system repeatedly, quickly, and efficiently. Automated testing ensures that each service works independently and that the system functions correctly as a whole. It also helps to reduce the time and effort required for testing.
Use Chaos Engineering: Use chaos engineering to test the resilience of your system in the face of unexpected failures.
Test Data Management: Manage test data carefully and ensure that data is consistent across all services.
Use Containerization: Use containerization, such as Docker, to create an isolated environment for testing microservices.
Test Service Integration: While testing each service independently is crucial, it is equally important to test service integration. This ensures that each service can communicate with other services and that the system works as a whole. In addition, integration testing is critical to detecting issues related to communication and data transfer.
Test for Failure: Failure is inevitable, and microservices are no exception. Testing for failure is critical to ensure that the system can handle unexpected failures, such as server crashes, network failures, or database errors. Testing for failure helps to improve the resilience and robustness of the system.

Conclusion

Microservices testing is a critical step in ensuring the reliability, scalability, and maintainability of microservices-based applications. Proper testing helps to identify issues early in the development cycle, reducing the risk of costly failures in production. Testing each microservice separately, automating testing, testing service integration, testing for failure, and using mocks and stubs are some best practices for microservices testing. By following these best practices, you can ensure that your microservices-based application is reliable and scalable, and improve the resilience and robustness of your microservices architecture.
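As promised above, here is a brief sketch of the mocks-and-stubs practice, using the nock HTTP-mocking library to stand in for a downstream service. The service URL, route, and payload are illustrative assumptions, not part of any real system:

JavaScript

const nock = require('nock');
const http = require('http');
const assert = require('assert');

// Stub the downstream "orders" service so the caller can be tested in isolation
nock('http://orders.internal')
  .get('/orders/42')
  .reply(200, { id: 42, status: 'shipped' });

// Exercise the code path exactly as if the real service were responding
http.get('http://orders.internal/orders/42', (res) => {
  let raw = '';
  res.on('data', (chunk) => (raw += chunk));
  res.on('end', () => {
    assert.equal(JSON.parse(raw).status, 'shipped');
    console.log('order status verified against the mocked service');
  });
});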
The relevance of using a BDD framework such as Cucumber.js is often questioned by our fellow automation testers. Many feel that it simply adds more work to their plate. However, using a BDD framework has its own advantages, ones that can take your Selenium test automation a long way. Not to be sidelined, these BDD frameworks help all of your stakeholders easily interpret the logic behind your test automation script. Leveraging Cucumber.js for your Selenium JavaScript testing can help you specify acceptance criteria that are easy for any non-programmer to understand. It can also help you quickly evaluate the logic implied in your Selenium test automation suite without going through huge chunks of code. With a given-when-then structure, Behavior Driven Development frameworks like Cucumber.js have made tests much simpler to understand.

To put this into context, let's take a small scenario: testing whether an ATM functions correctly. We would write conditions stating that given the account balance is $1,000, the card is valid, and the machine contains enough money, when the account holder requests $200, then the cashpoint should dispense $200, the account balance should be $800, and the card should be returned.
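That description maps almost word for word onto a Gherkin scenario. Here is a sketch of how it might look in a .feature file (the exact wording is illustrative):

Feature: Cash withdrawal
  Scenario: Account has sufficient funds
    Given the account balance is $1,000
    And the card is valid
    And the machine contains enough money
    When the account holder requests $200
    Then the cashpoint should dispense $200
    And the account balance should be $800
    And the card should be returned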
In this Cucumber.js tutorial, we'll take a deep dive into the setup, installation, and execution of our first automation test with Cucumber.js for Selenium JavaScript testing.

What Is Cucumber.js and What Makes It So Popular?

Let's start our Cucumber.js tutorial with a small brief about the framework. Cucumber.js is a very robust and effective Selenium JavaScript testing framework that works on the Behavior Driven Development process. This test library provides easy integration with Selenium and gives us the capability to define our tests in simple language that is understood even by a layman. The Cucumber.js Selenium JavaScript testing library follows a given-when-then structure that helps represent the tests in plain language, which also makes our tests a point of communication and collaboration. This improves the readability of the tests and hence helps us understand each use case in a better way. Developers use it for unit testing, but it is mostly used for integration and end-to-end tests. Moreover, it makes the tests so legible that there is hardly a need for separate test-case documentation, and they can even be digested by business users.

Setting up Cucumber.js for Selenium JavaScript Testing

Before we carry on with our Cucumber.js tutorial and begin writing and executing our automated test scripts using Cucumber, we need to set up our system with the Cucumber.js framework and install all the necessary libraries and packages for Selenium JavaScript testing.

Node.js and Node Package Manager (npm): This is the foundational and most important package for any Selenium JavaScript testing framework. It can be downloaded from the official nodejs.org website or installed using the package installer for your operating system (Mac OS, Windows, or Linux). We can execute the npm command in the command line to check that it is correctly installed on the system.

Cucumber.js Library Module: The next prerequisite required for our test execution is the Cucumber.js library. We need the Cucumber.js package as a development dependency. After the successful installation and validation of Node.js on the system, we use the node package manager (npm) to install the Cucumber.js library package. To install the latest version of the Cucumber.js module, we use the npm commands shown below:

$ npm install -g cucumber
$ npm install --save-dev cucumber

Here, the -g flag indicates a global installation of the module, which means that it is not limited to the current project and can also be accessed with command-line tools. The command executed with the --save-dev flag places the Cucumber executable in the ./node_modules/.bin directory, letting us execute commands in our command-line tool using the cucumber keyword.

Java SDK: Since Selenium components such as the standalone server run on Java, we next install the Java Development Kit on our systems. It is advised to use JDK version 6.0 or above and to set up/configure the system environment variables for Java.

Selenium WebDriver: To automate the browser, we need to install the Selenium WebDriver library using the npm command below. In most cases, it is automatically installed in our node_modules directory as a dependency when installing other libraries.

$ npm install selenium-webdriver

Browser Driver: Finally, the browser driver needs to be installed. It can be for any browser on which we want to execute the test scenarios, and the corresponding driver needs to be installed. This executable needs to be added to our PATH environment variable and placed inside the same bin folder. Here we are installing the Chrome driver; the ChromeDriver documentation is where we can find and download the version that matches the version of our browser.

$ npm install -g chromedriver

Working on the Cucumber.js Test Framework

Now that we've set up our system for our Cucumber.js tutorial, we will move ahead with creating our project structure. We create a directory named cucumber_test and then two subfolders, feature and step_definition, which will contain the scripts written for our features and step definitions, respectively.

$ mkdir feature step_definition

Finally, the folder will have a generated package.json file in the base directory of the package, which saves all of the dev dependencies for these modules. Another important thing to do with the package.json file is to add the test attribute in the scripts parameter:

{
  "scripts": {
    "test": "./node_modules/.bin/cucumber-js"
  }
}

By adding this snippet to our package.json file, we can run all our Cucumber tests from the command line by just typing "npm test". Our final project folder structure looks like this:

cucumber_test
├── feature
│   └── feature_test.feature
├── step_definition
│   └── steps_def.js
├── support
│   └── support.js
└── package.json

Below is the working procedure of a Cucumber.js project:

We begin by writing a .feature file that contains the scenarios, each defined with a given-when-then structure.
Next, we write step definition files, which define the functions that match the steps in our scenarios.
Further, we implement these functions as per our requirements or automate the tests in the browser with the Selenium driver.
Finally, we run the tests by executing the Cucumber.js executable in the node_modules/.bin folder.
Running Our First Cucumber.js Test Script

The next step in this Cucumber.js tutorial is to execute a sample application. We will start by creating a project directory named cucumber_test and then a subfolder named script with a test script named single_test.js inside it. Then we'll add a scenario with the help of a .feature file, which will be served to our application when we instruct Cucumber.js to run the feature file. Finally, the Cucumber.js framework will parse the file and call the code that matches the information in the feature file. For our first test scenario, we will start with a very simple browser-based flow that visits the Selenium official homepage and makes a search by clicking the search button. Please make a note of the package.json file that we will be using in our upcoming demonstrations.

package.json

The package.json file contains all the configurations related to the project and certain dependencies that are essential for the project setup. It is important to note that the definitions from this file are used for executing the script, and hence it acts as our project descriptor.

JSON

{
  "name": "cucumber-js-javascript-test-with-selenium",
  "version": "1.0.0",
  "description": "CucumberJS Tutorial for Selenium JavaScript Testing",
  "main": "index.js",
  "scripts": {
    "test": "./node_modules/.bin/cucumber-js"
  },
  "repository": {
    "type": "git",
    "url": ""
  },
  "author": "",
  "license": "ISC",
  "bugs": {
    "url": ""
  },
  "homepage": "",
  "dependencies": {
    "assert": "^1.4.1",
    "chromedriver": "^2.24.1",
    "cucumber": "^1.3.0",
    "geckodriver": "^1.1.3"
  },
  "devDependencies": {
    "selenium-webdriver": "^3.6.0"
  }
}

Now, the first step of the project is to define the feature that we are going to implement, i.e., within this file we describe the behavior that we want from our application, which in our case is visiting the website. This feature lets the browser check for the elements. Hence, we update our feature file with the scenario. The scenarios follow a given-when-then template:

Given: It sets the initial context or preconditions.
When: This represents the event that is supposed to occur in the scenario.
Then: This is the expected outcome of the test scenario.

Below is what our feature file looks like, containing the given, when, and then steps.

feature_test.feature
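A sketch of the scenario, reconstructed from the step definitions that follow (the exact wording is illustrative):

Feature: Visiting the Selenium official website
  Scenario: Visit the Selenium homepage and search from the sidebar
    Given I have visited the Selenium official web page on "https://www.selenium.dev"
    When There is a title on the page as "SeleniumHQ Browser Automation"
    Then I should be able to click Search in the sidebar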
JavaScript
/* This Cucumber.js tutorial file contains the step definitions, i.e., the description of each behavior that is expected from the application */
'use strict';
const { Given, When, Then } = require('cucumber');
const assert = require('assert');
const webdriver = require('selenium-webdriver');

// The step definitions are defined for each of the scenarios

// The "given" condition for our test scenario
Given(/^I have visited the Selenium official web page on "([^"]*)"$/, function (url, next) {
  this.driver.get('https://www.selenium.dev').then(next);
});

// The "when" condition for our test scenario
When(/^There is a title on the page as "([^"]*)"$/, function (titleMatch, next) {
  this.driver.getTitle().then(function (title) {
    assert.equal(title, titleMatch, 'Expected title to be ' + titleMatch);
    next();
  });
});

// The "then" condition for our test scenario
Then(/^I should be able to click Search in the sidebar "([^"]*)"$/, function (text, next) {
  this.driver.findElement({ id: 'searchText' }).click();
  this.driver.findElement({ id: 'searchText' }).sendKeys(text).then(next);
});

An important thing to note here is that if we execute the test with only the .feature file written and nothing else, the Cucumber framework will throw an error and prompt us to define the steps. It states that although we have defined the feature, the step definitions are missing, and it further suggests we write the code snippets that turn the phrases defined above into concrete actions.

support.js

The support and hooks files are used with the step definitions to initialize variables and perform certain validations.

JavaScript
// This Cucumber.js tutorial support file performs validations and initialization for our app
const { setWorldConstructor, setDefaultTimeout } = require('cucumber');
const seleniumWebdriver = require('selenium-webdriver');

function CustomWorld() {
  // build the Chrome driver that the step definitions will use
  this.driver = new seleniumWebdriver.Builder()
    .forBrowser('chrome')
    .build();
}

setWorldConstructor(CustomWorld);
setDefaultTimeout(30 * 1000);

hooks.js

It releases the driver when the test execution is complete.

JavaScript
const { After } = require('cucumber');

// quit the browser once each scenario finishes
After(function () {
  return this.driver.quit();
});

Finally, when we execute the test, we can see in the command line that our test got executed successfully:

$ npm test

Now let's look at another example that performs a search query on Google and verifies the title of the website to assert whether the correct website is launched in the browser.

feature_test2.feature

Feature: A feature to check on visiting the Google Search website
  Scenario: Visiting the homepage of Google.com
    Given I have visited the Google homepage
    Then I should be able to see Google in the title bar
steps_def2.js

JavaScript
/* This Cucumber.js tutorial file contains the step definitions, i.e., the description of each behavior expected from the application, which in our case is the webpage that we are visiting for Selenium JavaScript testing. */
var assert = require('assert');

// This scenario has only "given" and "then" conditions defined
module.exports = function () {
  this.Given(/^I have visited the Google homepage$/, function() {
    return this.driver.get('http://www.google.com');
  });

  this.Then(/^I should be able to see Google in the title bar$/, function() {
    return this.driver.getTitle().then(function (title) {
      assert.equal(title, "Google");
      return title;
    });
  });
};

support2.js

JavaScript
// This Cucumber.js tutorial support file is used to perform validations and initialization for our application
var seleniumWebdriver = require('selenium-webdriver');

function CustomWorld() {
  this.driver = new seleniumWebdriver.Builder()
    .forBrowser('chrome')
    .build();
}

module.exports = function() {
  this.World = CustomWorld;
  this.setDefaultTimeout(30 * 1000);
};

hooks2.js

JavaScript
module.exports = function() {
  this.After(function() {
    return this.driver.quit();
  });
};

Again, when we execute the test, we can see in the command line that our test got executed successfully:

$ npm test

Kudos! You have successfully executed your first Cucumber.js script for Selenium test automation. However, this Cucumber.js tutorial doesn't end there! Now that you are familiar with Selenium and Cucumber.js, I want you to think about the scalability issues here. So far, you have executed the Cucumber.js script on your own operating system. But if you are to perform automated browser testing, how would you go about testing your web application over hundreds of different browser + OS combinations? You could build a Selenium Grid to leverage parallel testing. However, as your test requirements grow, you will need to expand your Selenium Grid, which means spending a considerable amount of money on hardware. Also, every month a new browser or device is launched in the market; to test your website on them, you would have to build your own device lab. All of this costs money and time spent maintaining an in-house Selenium infrastructure. So what can you do? You can leverage a Selenium Grid on the cloud. There are various advantages to choosing a cloud-based Selenium Grid over a local setup. The most pivotal advantage is that it frees you from the hassle of maintaining your in-house Selenium infrastructure. It also saves you the effort of installing and managing virtual machines and browsers. That way, all you need to focus on is running your Selenium test automation scripts.

Let us try to execute our Cucumber.js script over an online Selenium Grid on the cloud.

Running Cucumber.js Script Over an Online Selenium Grid

It is time to experience a cloud Selenium Grid by getting trained on executing the test script on LambdaTest, a cross-browser testing cloud. LambdaTest allows you to test your website on 3000+ combinations of browsers and operating systems hosted on the cloud, not only enhancing your test coverage but also saving time on overall test execution. To run the same script on the LambdaTest Selenium Grid, you only need to tweak your Selenium JavaScript testing script a little.
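As a rough sketch of that tweak, building the WebDriver against the remote hub instead of a local browser looks like the snippet below. The credentials are placeholders, and desiredCapabilities refers to the capabilities object defined further down in this section; the hub address follows the same pattern used in the remote-driver code later in this article:

JavaScript
// a minimal sketch, assuming LT_USERNAME/LT_ACCESS_KEY are set and
// desiredCapabilities is the capabilities object defined below
var seleniumWebdriver = require('selenium-webdriver');

var driver = new seleniumWebdriver.Builder()
  .usingServer('http://' + LT_USERNAME + ':' + LT_ACCESS_KEY + '@hub.lambdatest.com/wd/hub')
  .withCapabilities(desiredCapabilities)
  .build();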
That is, you specify the hub URL for the Remote WebDriver, which will execute your script on the Selenium Grid, and you add your username and access key token. For this, we add the access key token and the username details in the configuration file, i.e., the cred.conf.js file present within the conf directory. The username and access key token can be provided in two ways, as mentioned below.

cred.conf.js

JavaScript
exports.cred = {
  username: process.env.LT_USERNAME || 'irohitgoyal',
  access_key: process.env.LT_ACCESS_KEY || 'AbcdefgSTAYSAFEhijklmnop'
}

Alternatively, the username and access key token can easily be exported using the commands shown below:

export LT_USERNAME=irohitgoyal
export LT_ACCESS_KEY=AbcdefgSTAYSAFEhijklmnop

Next, we will look at the feature file. We will be executing our test on the Google Chrome browser. In our test case, we will open the LambdaTest website and perform certain operations on it, such as launching the search engine, validating the content, etc. So, our directory structure will be pretty simple, as below:

feature_test.feature

Now, we need to think of our desired capabilities. We can leverage the LambdaTest Selenium Desired Capabilities Generator to select the environment specification details; it lets us pick from the various combinations it offers, and we can use it to select the combination we want for our Selenium JavaScript testing in this Cucumber.js tutorial. So, in our test scenario, the desired capabilities will look similar to the below:

JavaScript
const desiredCapabilities = {
  'build': 'Cucumber-JS-Selenium-Webdriver-Test', // the build name to be displayed in the test logs
  'browserName': 'chrome', // the browser that we will use for the test
  'version': '74.0', // the browser version that we will use
  'platform': 'WIN10', // the operating system that we will use
  'video': true, // whether to capture a video of the run
  'network': true, // whether to capture the network logs
  'console': true, // whether to capture the console logs
  'visual': true // whether to capture step-by-step screenshots
};

With that set, we now look at the step definitions and the Cucumber runner.

step_def.js

JavaScript
/* This Cucumber.js tutorial file contains the step definitions, i.e., the description of each behavior expected from the application, which in our case is the webpage that we are visiting. It is aligned with the feature file: it reads all the instructions from it and finds the matching case to execute for Selenium JavaScript testing.
*/
'use strict';
const assert = require('cucumber-assert');
const webdriver = require('selenium-webdriver');

module.exports = function() {
  this.When(/^I visit the website of Google on "([^"]*)"$/, function (url, next) {
    this.driver.get('https://google.com').then(next);
  });

  this.When(/^the homepage has the field with "Google Search" present$/, function (next) {
    this.driver.findElement({ name: 'li1' }).click().then(next);
  });

  this.When(/^the homepage has the field with "I’m Feeling Lucky" present$/, function (next) {
    this.driver.findElement({ name: 'li3' }).click().then(next);
  });

  this.When(/^I move the cursor and select the textbox to make a search on Google "([^"]*)"$/, function (text, next) {
    this.driver.findElement({ id: 'buttonText' }).click();
    this.driver.findElement({ id: 'buttonText' }).sendKeys(text).then(next);
  });

  this.Then(/^click the "Google Search" on the text box "([^"]*)"$/, function (button, next) {
    this.driver.findElement({ id: button }).click().then(next);
  });

  this.Then(/^I must see title "Google" on the homepage "([^"]*)"$/, function (titleMatch, next) {
    this.driver.getTitle().then(function(title) {
      assert.equal(title, titleMatch, next, 'Expected title to be ' + titleMatch);
    });
  });
};

cucumber-runner.js

JavaScript
#!/usr/bin/env node
// This is our runner file for parallel tests. It is responsible for creating multiple child
// processes, one for each test environment passed for Selenium JavaScript testing.
let childProcess = require('child_process');
let configFile = '../conf/' + (process.env.CONFIG_FILE || 'single') + '.conf.js';
let config = require(configFile).config;
process.argv[0] = 'node';
process.argv[1] = './node_modules/.bin/cucumber-js';

const getValidJson = function(input) {
  let json = input;
  json = json.replace(/\\n/g, "");
  json = json.replace(/\\/g, '');
  return json;
};

let lt_browsers = null;
if (process.env.LT_BROWSERS) {
  let input = getValidJson(process.env.LT_BROWSERS);
  lt_browsers = JSON.parse(input);
}

for (let i in (lt_browsers || config.capabilities)) {
  let env = Object.create(process.env);
  env.TASK_ID = i.toString();
  let p = childProcess.spawn('/usr/bin/env', process.argv, { env: env });
  p.stdout.pipe(process.stdout);
}

Now that our test scripts are ready to be executed on the cloud grid, the final thing we need to do is run the tests from the base project directory using the command below:

$ npm test

This command will validate the test cases and execute our test suite across all the test groups that we have defined. If we open the LambdaTest Selenium Grid and navigate to the Automation dashboard, we can check that the user interface shows the test ran successfully and passed with positive results. Below is the sample screenshot:

Don't Forget to Leverage Parallel Testing

Parallel testing with Selenium can help you significantly trim down your test cycles. Imagine we have 50 test cases to execute, each with an average run time of 1 minute. Run serially, the suite takes around 50 minutes to execute. But if we execute 2 test cases in 2 parallel concurrent sessions, the total test time drops to 25 minutes. Hence, we see a drastic decrease in test time. To execute parallel testing with Selenium for this Cucumber.js tutorial, run the command below:

$ npm run parallel

Bottom Line

Cucumber.js provides us with the capability to write tests in a way that is easily read by everyone.
This makes the framework very flexible and enables us to create human-readable descriptions of user requirements as the basis for web application tests. With Cucumber.js, we can interact with our webpage in the browser and make various assertions to verify that the changes we performed are actually reflected in our web application on every browser-OS combination by utilizing the Selenium Grid. Still, there is a lot more that can be done with Cucumber.js. Since this test framework is developed over the Selenium interface, it empowers us with limitless capabilities in terms of Selenium JavaScript testing. Let us know if you liked this Cucumber.js tutorial and if there's any topic you want us to write on. Happy Testing, and Stay Safe!
After being voted the best programming language in 2018, Python continues rising up the charts and currently ranks as the third most popular programming language, just after Java and C, as per the index published by TIOBE. With the increasing use of this language, the popularity of test automation frameworks based on Python is increasing as well. Naturally, developers and testers get a little confused when it comes to choosing the best framework for their project. While choosing one, you should judge a lot of things: the script quality the framework supports, test case simplicity, and the technique used to run the modules and find their weaknesses. This is my attempt to help you compare the top five Python frameworks for test automation in 2019, and their advantages and disadvantages relative to each other, so you can choose the ideal Python framework for test automation according to your needs.

Robot Framework

Used mostly for acceptance testing and acceptance test-driven development, the Robot Framework is one of the top Python test frameworks. Although it is developed in Python, it can also run on IronPython, which is .NET-based, and on Java-based Jython. Robot as a Python framework is compatible across all platforms: Windows, macOS, and Linux.

Prerequisites

First of all, you will be able to use Robot Framework (RF) only when you have Python 2.7.14 or any version above it installed. Python 3.6.4 can also be used; the code snippets provided in the official RF documentation carry the appropriate notes describing any changes required. You will also need to install pip, the Python package manager. Finally, a development environment is a must-have. A popular choice among developers is the PyCharm Community Edition; however, since code snippets are not IDE-dependent, you can use any IDE you have worked with earlier.

Advantages and Disadvantages of Robot

Let's take a look at the advantages and disadvantages of Robot as a test automation framework over other Python frameworks:

Pros
Using a keyword-driven test approach, it makes the automation process simpler by helping testers easily create readable test cases.
Test data syntax can be used easily.
Consisting of generic tools and test libraries, it has a vast ecosystem in which individual elements can be used in separate projects.
The framework is highly extensible, since it has many APIs.
The Robot Framework lets you run parallel tests via a Selenium Grid; however, this feature is not built in.

Cons
The Robot Framework is tricky when it comes to creating customized HTML reports. However, you can still produce xUnit-formatted short reports with it.
Another flaw of the Robot Framework is its inadequate built-in support for parallel testing.

Is Robot the Top Python Test Framework for You?

If you are a beginner in the automation domain with less experience in development, Robot is easier to use than Pytest or PyUnit, since it has rich built-in libraries and an easier, test-oriented DSL. However, if you want to develop a complex automation framework, it is better to switch to Pytest or any other framework that involves actual Python code.

Pytest

Used for all kinds of software testing, Pytest is another top Python test framework for test automation. Being open source and easy to learn, the tool can be used by QA teams, development teams, individual practice groups, and in open-source projects.
Because of its useful features, like assert rewriting, many projects on the internet, including big names like Dropbox and Mozilla, have switched from unittest (PyUnit) to Pytest. Let's take a deep dive and find out what's so special about this Python framework.

Prerequisites

Apart from a working knowledge of Python, Pytest does not need anything complex. All you need is a working desktop that has:
A command-line interface
The Python package manager
An IDE for development

Advantages and Disadvantages of Pytest

Pros
In the Python testing community, before the arrival of Pytest, developers included their tests inside large classes. Pytest brought a revolution by making it possible to write test suites in a much more compact manner than before.
Other testing tools require the developer or tester to use a debugger or check the logs to detect where a certain value is coming from. Pytest lets you write test cases in a way that stores all values inside the test cases and informs you which value failed and which value was asserted.
The tests are easier to write and understand, since much less boilerplate code is needed.
Fixtures are functions you can use by adding an argument to your test function; their job is to return values. In Pytest, you can make them modular by using one fixture from another, and using multiple fixtures helps you cover all the parameter combinations without rewriting test cases.
The developers of Pytest have released some useful plugins that make the framework extensible. For example, pytest-xdist can be used to execute parallel testing without using a different test runner. Unit tests can also be parameterized without duplicating any code.
It provides developers with certain special routines that make test case writing simpler and less error-prone. The code also becomes shorter and easier to understand.

Cons
The fact that Pytest uses special routines means you have to compromise on compatibility. You will be able to write test cases conveniently, but you won't be able to use those test cases with any other testing framework.

Is Pytest the Top Python Test Framework for You?

You have to start by learning a full-fledged language, but once you get the hang of it, you get all the features, like static code analysis, support for multiple IDEs, and, most importantly, the ability to write effective test cases. For writing functional test cases and developing a complex framework, it is better than unittest, but its advantage is somewhat similar to the Robot Framework's if your aim is to develop a simple framework.

UnitTest (PyUnit)

Unittest, or PyUnit, is the standard test automation framework for unit testing that comes with Python. It is highly inspired by JUnit. The assertion methods and all the cleanup and setup routines are provided by the base class TestCase. The name of every method in a subclass of TestCase starts with "test," which allows them to run as test cases. You can use the load methods and the TestSuite class to group and load tests; together, they can be used to build customized test runners. Like Selenium testing with JUnit, unittest can also use unittest-xml-reporting to generate XML reports.

Prerequisites

There are no special prerequisites, since unittest comes by default with Python. To use it, you need standard knowledge of Python; if you want to install additional modules, you will need pip installed, along with an IDE for development.
Advantages and Disadvantages of PyUnit

Pros
Being part of the standard library of Python, unittest has several advantages:
Developers are not required to install any additional module, since it comes out of the box.
Unittest is an xUnit derivative, and its working principle is similar to other xUnit frameworks, so people who do not have a strong background in Python generally find it comfortable to work with.
You can run individual test cases in a simple manner; all you need to do is specify their names on the terminal. The output is concise as well, making the framework flexible when it comes to executing test cases.
Test reports are generated within milliseconds.

Cons
Usually, snake_case is used for naming in Python code. But since this framework is heavily inspired by JUnit, the traditional camelCase naming method persists, which can be quite confusing.
The intent of the test code sometimes becomes unclear, since it supports abstraction too much.
A huge amount of boilerplate code is required.

Is PyUnit the Top Python Test Framework for You?

In my opinion, and the opinion of other Python developers, Pytest introduced certain idioms that allow testers to write better automation code in a very compact manner. Although unittest is the default test automation framework, the facts that its working principles and naming conventions differ a bit from standard Python code and that it requires too much boilerplate make it a less preferred Python test automation framework.

Behave

We are all aware of behavior-driven development, the agile-based software development methodology that encourages developers, business participants, and quality analysts to collaborate with each other. Behave is another of the top Python test frameworks, one that allows teams to execute BDD testing without any complications. The nature of this framework is quite similar to SpecFlow and Cucumber for automation testing. Test cases are written in simple, readable language and later glued to the code during execution. The behavior is designed by the behavior specs, and the steps are then reused by other test scenarios.

Prerequisites

Anyone with basic knowledge of Python should be able to use Behave. Let's take a look at the prerequisites:
Before installing Behave, you have to install Python version 2.7.14 or above.
The Python package manager, pip, is required to work with Behave.
A development environment is the last and most important thing you need. You can use PyCharm, which is preferred by most developers, or any other IDE of your choice.

Advantages and Disadvantages of Behave

Like all other behavior-driven test frameworks, opinions regarding Behave's advantages vary from person to person. Let's take a look at the common pros and cons of using Behave:

Pros
System behavior is expressed in a semi-formal language with a domain vocabulary that keeps the behavior consistent across the organization.
Dev teams working on different modules with similar features are properly coordinated.
Building blocks are always ready for executing all kinds of test cases.
Reasoning and thinking are captured in detail, resulting in better product specs.
Stakeholders and managers have better clarity regarding the output of QAs and devs because of the similar format of the specs.

Cons
The only disadvantage is that it works well only for black-box testing.

Is Behave the Top Python Test Framework for You?
Well, as we said, Behave works best only for black-box testing. Web testing is a great example, since use cases can be described in plain language. However, for integration testing or unit testing, Behave is not a good choice, since its verbosity will only cause complications for complex test scenarios. Developers as well as testers recommend pytest-bdd as an alternative to Behave, since it takes all that is good in Pytest and applies it to testing behavior-driven scenarios.

Lettuce

Lettuce is another simple and easy-to-use behavior-driven automation tool based on Cucumber and Python. The main objective of Lettuce is to focus on the common tasks of behavior-driven development, making the process simpler and more entertaining.

Prerequisites

You will need, at minimum, Python 2.7.14 installed, along with an IDE. You can use PyCharm or any other IDE of your choice. Also, for running tests, you will be required to install the Python package manager.

Advantages and Disadvantages of Lettuce

Pros
Just like any other BDD testing framework, Lettuce enables developers to create more than one scenario and describe features in simple, natural language.
Dev and QA teams are properly coordinated, since the specs are of a similar format.
For black-box testing, Lettuce is quite useful for running behavior-driven test cases.

Cons
There is only one disadvantage of using Lettuce as a Python framework: for the successful execution of behavior-driven tests, communication is necessary between the dev team, QA, and stakeholders. An absence of communication, or a communication gap, will make the process ambiguous, and questions can be raised by any team.

Is Lettuce the Top Python Test Framework for You?

According to developers and automation testers, Cucumber is more useful when it comes to executing BDD tests. However, if we are talking about Python developers and QA, there is no better replacement than pytest-bdd. All the great features of Pytest, like compactness and easy-to-understand code, are implemented in this framework, combined with the verbosity of behavior-driven testing.

Wrapping Up!

In the above article, we discussed the top five Python frameworks for test automation in 2019, based on different testing procedures. While Pytest, the Robot Framework, and unittest are meant for functional and unit testing, Lettuce and Behave are best for behavior-driven testing only. From the features stated, we can conclude that for functional testing, Pytest is the best. If you are new to Python-based automation testing, the Robot Framework is a great tool to get started with: although its features are limited, it will enable you to get on track easily. For Python-based BDD testing, Lettuce and Behave are equally good, but if you already have experience with Pytest, it's better to use pytest-bdd. I hope my article helps you make the right choice out of the top Python test frameworks for your Python web automation needs. Happy testing!
Shift-left testing is a software testing approach where testing is moved to an earlier phase of the development process, closer to development itself. The goal of shift-left testing is to catch and fix defects as early as possible in the development cycle, which can save time and resources in the long run. A real-world example of shift-left testing is a microservices architecture where each service is developed and tested independently before being integrated with other services. For example, suppose my team is developing a payment page for an airline platform that involves new microservices for managing shopping carts and orders. The development team begins by writing unit tests for the cart and order services, which test the individual functions and methods of each service. Once these tests pass, the team can proceed to integration testing, where both services (cart and order) are tested against other services in the platform to ensure that everything works as expected. Once the development and testing of both services are completed, they are deployed to a staging environment where they are tested again. If any issues are found in staging, they are resolved, and the services are deployed to production. By performing testing early in the development process, the team can catch and fix defects early on, saving time and resources that would otherwise be spent in later stages of testing. It's also important to note that shift-left testing is not only about testing earlier but also about involving the whole team in the testing process: developers, QA, and ops collaborate to test, identify, and fix issues, which leads to a more streamlined and efficient development process.

Adopting Shift-Left Testing in a Software Development Lifecycle

The software development life cycle (SDLC) is the process an organization follows for a software project. It consists of various stages: planning, designing, developing, testing, deploying, and maintaining software. It is a framework that outlines the steps and activities involved in developing software, from the initial planning stages to the final deployment and maintenance. Adopting shift-left testing in a software development lifecycle improves the quality of the software and reduces the time and cost required to fix defects later in the process. Below are some ways in which you can adopt shift-left testing in your organization:

1. Involve Testers Earlier in the Development Process

Testers should be involved in the development process as early as possible to provide feedback and help identify defects. This involves working closely with developers, attending daily stand-ups, and participating in design and code reviews. To implement shift-left testing, organizations often follow an Agile methodology and hold sprint ceremonies, such as sprint grooming and sprint planning, where both the QA and development teams are involved from the beginning. During this time, QA can ask clarifying questions about requirements and provide input as well.

2. Implement a BDD/TDD Approach

This approach has several benefits. The test cases prepared by QA can help developers think about scenarios they may not have considered. Additionally, QA may identify cases that were missed by the product owner, business analyst, or whoever is responsible for gathering requirements. Identifying potential issues and creating test cases early in the development process can save time and effort later on.
Without this early identification, issues may not be discovered until later stages of development or testing, at which point it is more time-consuming and costly to address them.

3. Encourage Developers to Write Unit Tests

Unit testing involves testing individual units or components of code to ensure that they work correctly. It is an important shift-left testing technique that can be used to identify and fix defects early in the development process. You can provide training on how to write effective unit tests, as well as tools and frameworks that can be used to automate unit testing.

4. Conduct an Internal Demo

Conducting an internal demo for the sprint team on sprint closure day is an effective way to implement shift-left testing. During this demo, team members can visually see the work completed in the previous sprint, including any changes or updates to the website or product. This allows them to provide feedback and identify potential issues early on, rather than waiting for formal testing to be conducted later in the process. By involving the entire team in the demo, you increase the chances of identifying potential issues and gathering valuable feedback. This can improve the quality and value of the product, as it will not only be tested by a dedicated tester at a later stage but also reviewed and assessed by the entire team. This practice ensures all relevant scenarios are considered and necessary changes are made before the product is released.

5. Monitor Test Coverage

Use tools to monitor test coverage, or the percentage of code that is tested, to ensure that you are testing all relevant code. Code coverage tools analyze your codebase and report on the percentage of code that is covered by tests.

6. Use Version Control and Code Review

Use version control systems, such as Git, to track changes to your codebase and enable collaboration. Use code review to ensure that code is reviewed and tested by multiple team members before it is deployed. For example, you might set up a code review process in which all new code is reviewed by at least one other team member before it is merged into the main codebase. This can help identify and fix defects early on and improve the overall quality of your software. By incorporating these techniques into your shift-left testing strategy, you can effectively identify and fix defects early in the development process, improving the quality and efficiency of your software.

7. Automation Testing

Automated testing can be used to test individual units or components of code as they are being developed, allowing you to identify defects early on. This can reduce the time and effort required for testing later and improve the quality of your software by identifying and fixing defects early in the development cycle. For example, when developing microservices, you can use automated testing to perform component testing early in the development process. By preparing test cases based on the Swagger or Confluence page and calling the service directly from the feature branch, you can verify that the service works as intended. You can also write code in the same branch as the development team and check the classes or enumerations being used to ensure they meet the requirements. By testing early, you can identify bugs and defects at an early stage of software development, improving the quality and efficiency of your development process. A minimal sketch of such an early component test is shown below.
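For illustration only, here is a hedged sketch of the kind of early component test described in point 7, written as a Mocha test in Node.js (18+, for the global fetch). The /cart endpoint, its URL, and the response shape are hypothetical assumptions, not part of any real service:

JavaScript
// a minimal sketch of an early component test run against a feature-branch service
// (the endpoint, port, and response shape below are hypothetical)
const assert = require('assert');

describe('Cart service (component test against the feature branch)', function () {
  it('returns an empty cart for a new session', async function () {
    const res = await fetch('http://localhost:3000/cart?sessionId=new-session'); // hypothetical URL
    assert.equal(res.status, 200);

    const body = await res.json();
    assert.deepStrictEqual(body.items, []); // assumes a { items: [...] } response shape
  });
});

The point is not the specific assertions but the timing: a test like this can run as soon as the cart endpoint exists on the feature branch, long before the full payment page is integrated.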
8. Testing Every Component

Testing every component is an important aspect of shift-left testing. If you are testing an API and not all of it is developed yet, you can still test what is available and mock the responses for the rest. Using concepts such as stubs and drivers, you can focus on testing the ready components without worrying about what is not yet available. This can give you confidence that the developed components are working correctly. Later, when the entire API is available for testing, you can quickly verify its functionality without spending a lot of time on testing that has already been covered. Additionally, during this testing phase you can concentrate on how your API functions in connection with the other third-party APIs it communicates with. By evaluating various third-party API behaviors, you can make sure your API operates properly and consistently.

9. Include Security Testing

Security testing should be integrated into the development process as early as possible to identify and fix security vulnerabilities early on. This can involve using tools such as static analysis tools, dynamic analysis tools, and penetration testing tools to test the security of the software early in the development process.

How Is Shift-Left Testing Beneficial?

Below are some of the ways in which shift-left testing proves to be beneficial:
Reduced time and cost: By starting testing earlier in the development process, organizations can catch defects earlier and reduce the time and cost of testing.
Improved quality: By testing early and often, organizations can identify and fix defects before they become more complex and expensive.
Enhanced collaboration: Shift-left testing encourages collaboration between development and testing teams, which can improve communication and lead to a better understanding of the requirements and design of the software.
Greater agility: Shift-left testing can help organizations be more agile and responsive to changes in the market or business requirements, as it allows them to quickly identify and fix defects and make changes to the software.

Conclusion

Shift-left testing is not a new approach, but it has gained more popularity in recent years as organizations have sought to improve the efficiency and effectiveness of their software development processes. It is a valuable approach that can help organizations improve the quality of their software and reduce the time and cost required to develop it.
As per Stack Overflow insights, JavaScript is the most popular programming language. As the power of the web and mobile increases day by day, JavaScript and JavaScript frameworks are becoming more popular, so it should not be surprising that JavaScript has become a preference for test automation as well. Over the past few years, a lot of development has happened around open-source, JavaScript-based test automation frameworks, and we now have multiple JavaScript testing frameworks that are robust enough to be used professionally. There are scalable frameworks that web developers and testers can use to automate unit test cases and even create complete end-to-end automation test suites. Mocha is one JavaScript testing framework that has been well renowned since 2016, as per StateofJS. With that said, when we talk about JavaScript automation testing, we can't afford not to loop Selenium into the discussion. So I thought a step-by-step Mocha testing tutorial would be beneficial for you to kickstart your JavaScript automation testing with Mocha and Selenium. We will also look at how you can run it on the LambdaTest automation testing platform to get better browser coverage and faster execution times. By the end of this Mocha testing tutorial, you will have a clear understanding of the setup, installation, and execution of your first automation script with Mocha for JavaScript testing.

What Will You Learn From This Mocha Testing Tutorial?

In this article, we are going to deep dive into Mocha JavaScript testing to perform automated browser testing with Selenium and JavaScript. We will:
Start with the installation and prerequisites for the Mocha framework and explore its advantages.
Execute our first Selenium JavaScript test through Mocha with examples.
Execute group tests.
Use the assertion library.
Encounter possible issues along with their resolutions.
Execute some Mocha test scripts on the Selenium cloud grid platform with minimal configuration changes, and test on various browsers and operating systems.

What Makes Mocha Prevalent?

Mocha.js, or simply Mocha, is a feature-rich JavaScript test framework that runs test cases on Node.js and in the browser, making testing simple and fun. By running serially, Mocha allows for flexible and precise reporting while mapping uncaught exceptions to the correct test cases. Mocha provides a categorical way to write structured code for testing applications by thoroughly classifying them into test suites and test case modules, producing a test report after the run by mapping errors to the corresponding test cases.

What Makes Mocha a Better Choice Compared to Other JavaScript Testing Frameworks?

Range of installation methods: It can be installed globally or as a development dependency for the project. It can also be set up to run test cases directly in the web browser.
Various browser support: It can be used to run test cases seamlessly on all major web browsers and provides many browser-specific methods and options. Each revision of Mocha provides upgraded JavaScript and CSS builds for different web browsers.
Number of ways to offer test reports: It provides users with a variety of reporting options, like list, progress, and JSON, to choose from, with the default reporter displaying output based on the hierarchy of test cases.
Support for several JavaScript assertion libraries: It helps users cut testing costs and speed up the process through its compatibility with a set of JavaScript assertion libraries, including Expect.js, Should.js, and Chai. This multiple-library support makes it easier for testers to write long, complex test cases.
Works in TDD and BDD environments: Mocha supports behavior-driven development (BDD) and test-driven development (TDD), allowing developers to write high-quality test cases and enhance test coverage.
Support for synchronous and asynchronous testing: Unlike some other JavaScript testing frameworks, Mocha is designed with features to fortify asynchronous testing using async/await, invoking the callback once the test is finished. It enables synchronous testing by omitting the callback.

Setting Up Mocha and Initial Requirements

Before we start our endeavor and explore more of Mocha testing, there are some important prerequisites we need to set up to get started with this Mocha testing tutorial for automation testing with Selenium and JavaScript:

Node.js and npm: The Mocha module requires Node.js to be installed on the system. If it is not already present, it can be installed using the npm manager or by downloading the Windows installer directly from the official Node.js website.

Mocha package module: Once we have successfully installed Node.js on the system, we can make use of the node package manager, i.e., npm, to install the required package, which is Mocha. To install the latest version using the npm command-line tool, we first initialize npm using the command below:

$ npm init

Next, we install the Mocha module using npm with the command below:

$ npm install -g mocha

Here, -g installs the module globally; it allows us to access and use the module like a command-line tool and does not limit its use to the current project. The --save-dev command below will place the Mocha executable in our ./node_modules/.bin folder:

$ npm install --save-dev mocha

We will now be able to run the commands in our command line using the mocha keyword.

Java – SDK: Since we will be driving our tests through Selenium, whose server components are built upon Java, we would also install the Java Development Kit (preferably JDK 7.0 or above) on the system and configure the Java environment.

Selenium WebDriver: We require Selenium WebDriver, and it should already be present in our npm node_modules. If it is not found there, we can install the latest version using the command below:

$ npm install selenium-webdriver

Browser driver: Lastly, we will install the driver of the specific browser we are going to use. This executable also needs to be placed inside the same bin folder:

$ npm install -g chromedriver

Writing Our First Mocha JavaScript Testing Script

We will create a project directory named mocha_test and then a subfolder named scripts with a test script named single_test.js inside it. Finally, we will initialize our project by hitting the command npm init. This will create a package.json file in an interactive way, which will contain all our required project configurations and will be required to execute our test script single_test.js.
Finally, we will have a file structure that looks like the below:

mocha_test
|-- scripts
|   |-- single_test.js
|-- package.json

{
  "name": "mocha selenium test sample",
  "version": "1.0.0",
  "description": "Getting Started with Our First New Mocha Selenium Test Script and Executing it on a Local Selenium Setup",
  "scripts": {
    "test": "npm run single",
    "single": "./node_modules/.bin/mocha scripts/single_test.js"
  },
  "author": "rohit",
  "license": "",
  "homepage": "https://mochajs.org",
  "keywords": [
    "mocha",
    "bdd",
    "selenium",
    "examples",
    "test",
    "tdd",
    "tap",
    "framework"
  ],
  "dependencies": {
    "bluebird": "^3.7.2",
    "mocha": "^6.2.2",
    "selenium-webdriver": "^3.6.0"
  }
}

You have successfully configured your project and are ready to execute your first Mocha JavaScript testing script. You can now write your first test script in the file single_test.js that was created earlier:

var assert = require('assert');
describe('IndexArray', function() {
  describe('#checkIndex negative()', function() {
    it('the function should return -1 when the value is not present', function() {
      assert.equal(-1, [4,5,6].indexOf(7));
    });
  });
});

Code Walkthrough of Our Mocha JavaScript Testing Script

We will now walk through the test script and understand exactly what is happening in the script we just wrote. When writing any Mocha test case in JavaScript, there are two basic function calls we should remember that do the job for us under the hood:
describe()
it()
We have used both of them in the test script above.

describe(): Mainly used to define the creation of test groups in Mocha in a simple way. The describe() function takes two arguments as input: the first argument is the name of the test group, and the second argument is a callback function. We can also nest test groups as required by the test case. In our test case, we have a test group named IndexArray, whose callback function contains a nested test group named #checkIndex negative(); inside that is another callback function that contains our actual test.

it(): This function is used for writing individual Mocha JavaScript test cases. It should be written in plain language, conveying what the test does. The it() function also takes two arguments as input: the first argument is a string explaining what the test should do, and the second argument is a callback function containing our actual test. In the above Mocha JavaScript testing script, the first argument of the it() function reads "the function should return -1 when the value is not present," and the second argument is a callback function that contains our test condition with the assertion.

Assertion: Assertion libraries are used to verify whether the condition given to them is true or false. The assert.equal(actual, expected) method verifies the test results by making an equality test between our actual and expected parameters. It makes our testing easier by using the Node.js built-in assert module. In our Mocha JavaScript testing script, we are not using an entire assertion library; we only require the assert module with one line of code for this Mocha testing tutorial. If the expected parameter equals our actual parameter, the test passes and the assert returns true; if it doesn't, the test fails and the assert returns false.
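As a quick, standalone illustration (not part of the tutorial project) of how the assert module behaves in both directions:

JavaScript
// assert.equal(actual, expected) passes silently when the values match...
const assert = require('assert');
assert.equal([4, 5, 6].indexOf(5), 1); // passes: 5 sits at index 1

// ...and throws an AssertionError when they do not, which Mocha reports as a failed test:
// assert.equal([4, 5, 6].indexOf(7), 0); // would throw: -1 == 0 fails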
It is important to check that the below section is present in our package.json file, as it contains the configuration of our Mocha JavaScript testing script:

"scripts": {
  "test": "npm run single",
  "single": "./node_modules/.bin/mocha scripts/single_test.js"
},

Finally, we can run our test in the command line and execute it from the base directory of the project using the command below:

$ npm test

or

$ npm run single

The output of the above test is:

This indicates we have successfully passed our test, and the assert condition is giving us the proper return value of the function based on the test input passed. Let us extend it further and add one more test case to our test suite and execute the test. Now, our Mocha JavaScript testing script single_test.js will have one more test that checks the positive scenario and gives the corresponding output:

var assert = require('assert');
describe('IndexArray', function() {
  describe('#checkIndex negative()', function() {
    it('the function should return -1 when the value is not present', function() {
      assert.equal(-1, [4,5,6].indexOf(7));
    });
  });
  describe('#checkIndex positive()', function() {
    it('the function should return 0 when the value is present', function() {
      assert.equal(0, [8,9,10].indexOf(8));
    });
  });
});

The output of the above Mocha JavaScript testing script is:

You have successfully executed your first Mocha JavaScript testing script on your local machine for Selenium and JavaScript execution. Note: If you have a larger test suite for cross-browser testing with Selenium JavaScript, execution on local infrastructure is not your best call.

Drawbacks of a Local Automated Testing Setup

As you expand your web application, you bring in new code changes, overnight hotfixes, and more. With these new changes come new testing requirements, so your Selenium automation testing scripts are bound to grow bigger; you may need to test across more browsers, more browser versions, and more operating systems. This becomes a challenge when you perform JavaScript Selenium testing through a local setup. Some of the major pain points of performing Selenium JavaScript testing on a local setup are:
Testing can only be performed locally, i.e., on the browsers that are installed on the system. This is not beneficial when there is a requirement to execute cross-browser testing and perform the test on all major browsers for successful results.
The test team might not be aware of all the new browser versions, and compatibility with them may not be tested properly. There is a need to devise a proper cross-browser testing strategy to ensure satisfactory test coverage.
Certain scenarios require executing tests on legacy browsers or browser versions for a specific set of users and operating systems.
It might be necessary to test the application on various combinations of browsers and operating systems, and that is not easily available with a local in-house setup.
Now, you may be wondering about a way to overcome these challenges. Well, don't stress too much, because an online Selenium Grid is there for your rescue.

Executing a Mocha Script Using Remote Selenium WebDriver on the LambdaTest Selenium Grid

Since we know that executing our test script on the cloud grid has great benefits to offer, let us get our hands dirty. The process of executing a script on the LambdaTest Selenium Grid is fairly straightforward and exciting.
We can execute our local test script on the cloud grid by adding the few lines of code required to connect to the LambdaTest platform:
It gives us the privilege to execute our tests on different browsers seamlessly.
It has all the popular operating systems and also provides us the flexibility to make various combinations of operating systems and browsers.
We can pass our environment and config details from within the script itself.
The test scripts can be executed in parallel, saving execution time.
It provides us with an interactive user interface and dashboard to view and analyze test logs.
It also provides the Desired Capabilities Generator, an interactive user interface used to select the environment specification details from the various combinations on offer.

So, in our case, the multiCapabilities section in the single.conf.js and parallel.conf.js configuration files will look similar to the below:

multiCapabilities: [
  {
    // Desired Capabilities
    build: "Mocha Selenium Automation Parallel Test",
    name: "Mocha Selenium Test Firefox",
    platform: "Windows 10",
    browserName: "firefox",
    version: "71.0",
    visual: false,
    tunnel: false,
    network: false,
    console: false
  }
]

Next, the most important thing is to generate our access key token, which is basically a secret key used to connect to the platform and execute automated tests on LambdaTest. This access key is unique to every user and can be copied and regenerated from the profile section of the user account, as shown below. The information regarding the access key, username, and hub can alternatively be fetched from the Automation dashboard on the LambdaTest user profile page, which looks like the screenshot below.

Accelerating With Parallel Testing Using the LambdaTest Selenium Grid

In our demonstration, we will create a script that uses Selenium WebDriver to make a search, open a website, and assert whether the correct website is open. If the assert returns true, it indicates that the test case passed successfully and will show up in the automation logs dashboard; if the assert returns false, the test case fails, and the errors will be displayed in the automation logs. Now, since we are using LambdaTest, we would like to leverage it and execute our tests on different browsers and operating systems. We will execute our test script as below:
Single test: On a single environment (Windows 10) and a single browser (Chrome).
Parallel test: On a parallel environment, i.e., different operating systems (Windows 10 and macOS Catalina) and different browsers (Chrome, Mozilla Firefox, and Safari).
Here we will create a new subfolder in our project directory, i.e., conf. This folder will contain the configurations required to connect to the LambdaTest platform. We will create single.conf.js and parallel.conf.js, where we need to declare the user configuration, i.e., username and access key, along with the desired capabilities for both our single-test and parallel-test cases.
Now, we will have a file structure that looks like the below:

LT_USERNAME = process.env.LT_USERNAME || "irohitgoyal"; // LambdaTest username
LT_ACCESS_KEY = process.env.LT_ACCESS_KEY || "1267367484683738"; // LambdaTest access key

// Configurations
var config = {
  commonCapabilities: {
    build: "Mocha Selenium Automation Parallel Test", // build name to be displayed in the test logs
    tunnel: false // required only if we need to run localhost through the tunnel
  },
  multiCapabilities: [
    {
      // Desired capabilities; this is very important to configure
      name: "Mocha Selenium Test Firefox", // test name to distinguish among test cases
      platform: "Windows 10", // name of the operating system
      browserName: "firefox", // name of the browser
      version: "71.0", // browser version to be used
      visual: false, // whether to take step-by-step screenshots; false for now
      network: false, // whether to capture network logs; false for now
      console: false // whether to capture console logs; false for now
    },
    {
      name: "Mocha Selenium Test Chrome",
      platform: "Windows 10",
      browserName: "chrome",
      version: "79.0",
      visual: false,
      network: false,
      console: false
    },
    {
      name: "Mocha Selenium Test Safari",
      platform: "MacOS Catalina",
      browserName: "safari",
      version: "13.0",
      visual: false,
      network: false,
      console: false
    }
  ]
};

// Code to integrate and support common capabilities
exports.capabilities = [];
config.multiCapabilities.forEach(function(caps) {
  var temp_caps = JSON.parse(JSON.stringify(config.commonCapabilities));
  for (var i in caps) temp_caps[i] = caps[i];
  exports.capabilities.push(temp_caps);
});

var assert = require("assert"), // declaring assert
  webdriver = require("selenium-webdriver"), // declaring Selenium WebDriver
  conf_file = process.argv[3] || "conf/single.conf.js"; // passing the configuration file

var caps = require("../" + conf_file).capabilities;

// Build the WebDriver that we will be using on LambdaTest
var buildDriver = function(caps) {
  return new webdriver.Builder()
    .usingServer(
      "http://" + LT_USERNAME + ":" + LT_ACCESS_KEY + "@hub.lambdatest.com/wd/hub"
    )
    .withCapabilities(caps)
    .build();
};

// declaring the test group: Search Engine Functionality for Single Test Using Mocha in Browser
describe("Search Engine Functionality for Single Test Using Mocha in Browser " + caps.browserName, function() {
  var driver;
  this.timeout(0);

  // the before-each event that triggers before the test execution
  beforeEach(function(done) {
    caps.name = this.currentTest.title;
    driver = buildDriver(caps);
    done();
  });

  // defining the test case to be executed
  it("should find the required search result in the browser", function(done) {
    driver.get("https://www.mochajs.org").then(function() {
      driver.getTitle().then(function(title) {
        setTimeout(function() {
          console.log(title);
          assert(
            title.match(
              "Mocha | The fun simple flexible
JavaScript test framework | JavaScript | Automated Browser Test"
            ) != null
          );
          done();
        }, 10000);
      });
    });
  });

  // the after-each event that checks whether the test passed or failed
  afterEach(function(done) {
    if (this.currentTest.isPassed) {
      driver.executeScript("lambda-status=passed");
    } else {
      driver.executeScript("lambda-status=failed");
    }
    driver.quit().then(function() {
      done();
    });
  });
});

var assert = require("assert"), // declaring assert
  webdriver = require("selenium-webdriver"), // declaring Selenium WebDriver
  conf_file = process.argv[3] || "conf/parallel.conf.js"; // passing the configuration file

var capabilities = require("../" + conf_file).capabilities;

// Build the WebDriver that we will be using on LambdaTest
var buildDriver = function(caps) {
  return new webdriver.Builder()
    .usingServer(
      "http://" + LT_USERNAME + ":" + LT_ACCESS_KEY + "@hub.lambdatest.com/wd/hub"
    )
    .withCapabilities(caps)
    .build();
};

capabilities.forEach(function(caps) {
  // declaring the test group: Search Engine Functionality for Parallel Test Using Mocha in Browser
  describe("Search Engine Functionality for Parallel Test Using Mocha in Browser " + caps.browserName, function() {
    var driver;
    this.timeout(0);

    // the before-each event that triggers before the test execution
    beforeEach(function(done) {
      caps.name = this.currentTest.title;
      driver = buildDriver(caps);
      done();
    });

    // defining the test case to be executed
    it("should find the required search result in the browser " + caps.browserName, function(done) {
      driver.get("https://www.mochajs.org").then(function() {
        driver.getTitle().then(function(title) {
          setTimeout(function() {
            console.log(title);
            assert(
              title.match(
                "Mocha | The fun simple flexible JavaScript test framework | JavaScript | Automated Browser Test"
              ) != null
            );
            done();
          }, 10000);
        });
      });
    });

    // the after-each event that checks whether the test passed or failed
    afterEach(function(done) {
      if (this.currentTest.isPassed) {
        driver.executeScript("lambda-status=passed");
      } else {
        driver.executeScript("lambda-status=failed");
      }
      driver.quit().then(function() {
        done();
      });
    });
  });
});

Finally, we have our package.json, which has additional configuration for parallel testing and the required files:

"scripts": {
  "test": "npm run single && npm run parallel",
  "single": "./node_modules/.bin/mocha scripts/single_test.js conf/single.conf.js",
  "parallel": "./node_modules/.bin/mocha scripts/parallel_test.js conf/parallel.conf.js --timeout=50000"
},

{
  "name": "mocha selenium automation test sample",
  "version": "1.0.0",
  "description": "Getting Started with Our First New Mocha Selenium Test Script and Executing it on a Local Selenium Setup",
  "scripts": {
    "test": "npm run single && npm run parallel",
    "single": "./node_modules/.bin/mocha scripts/single_test.js conf/single.conf.js",
    "parallel": "./node_modules/.bin/mocha scripts/parallel_test.js conf/parallel.conf.js --timeout=50000"
  },
  "author": "rohit",
  "license": "",
  "homepage": "https://mochajs.org",
  "keywords": [
    "mocha",
    "bdd",
    "selenium",
    "examples",
    "test",
    "tdd",
    "tap"
  ],
  "dependencies": {
    "bluebird": "^3.7.2",
    "mocha": "^6.2.2",
    "selenium-webdriver": "^3.6.0"
  }
}

The final thing we should do is execute our tests from the base project directory using the command below:

$ npm test

This command will validate the test cases and execute our test suite, i.e., the single-test and parallel-test cases. Below is the output from the command line.
Now, if we open the LambdaTest platform and check the user interface, we will see that the tests run on the Chrome, Firefox, and Safari browsers in the environments specified, i.e., Windows 10 and macOS Catalina, and that they pass successfully. Below, we see a screenshot that depicts our Mocha code running on different browsers, i.e., Chrome, Firefox, and Safari, on the LambdaTest Selenium Grid platform. The results of the test script execution, along with the logs, can be accessed from the LambdaTest Automation dashboard. Alternatively, if we want to execute only the single test, we can run the following command:

$ npm run single

To execute the test cases in different environments in parallel, run the command below:

$ npm run parallel

Wrap Up!

This concludes our Mocha testing tutorial, and we now have a clear idea of what Mocha is and how to set it up. Mocha allows us to automate an entire test suite, lets us get started quickly with minimal configuration, and produces tests that are readable and easy to update. We are now able to perform end-to-end tests using grouped tests and an assertion library, and the test results can be fetched directly from the command-line terminal.
The rapid growth of technology has led to an increased demand for efficient and effective software testing methods. One of the most promising advancements in this field is the integration of Natural Language Processing (NLP) techniques. NLP, a subset of artificial intelligence (AI), is focused on the interaction between computers and humans through natural language. In the context of software testing, NLP offers the potential to automate test case creation and documentation, ultimately reducing the time, effort, and costs associated with manual testing processes. This article explores the benefits and challenges of using NLP in software testing, focusing on automating test case creation and documentation. We will discuss the key NLP techniques used in this area, real-world applications, and the future of NLP in software testing.

Overview of Natural Language Processing (NLP)

NLP is an interdisciplinary field that combines computer science, linguistics, and artificial intelligence to enable computers to understand, interpret, and generate human language. This technology has been used in applications such as chatbots, voice assistants, sentiment analysis, and machine translation. The primary goal of NLP is to enable computers to comprehend and process large amounts of natural language data, making it easier for humans to interact with machines. NLP techniques can be classified into two main categories: rule-based and statistical approaches. Rule-based approaches rely on predefined linguistic rules and patterns, while statistical approaches use machine learning algorithms to learn from data.

NLP in Software Testing

Traditionally, software testing has been a labor-intensive and time-consuming process that requires a deep understanding of the application's functionality and the ability to identify and report potential issues. Testers must create test cases, execute them, and document the results in a clear and concise manner. With the increasing complexity of modern software applications, the manual approach to software testing becomes even more challenging and error-prone. NLP has the potential to revolutionize software testing by automating test case creation and documentation. By leveraging NLP techniques, testing tools can understand requirements and specifications written in natural language, automatically generating test cases and maintaining documentation.

Automating Test Case Creation

NLP can be used to automate the generation of test cases by extracting relevant information from requirement documents or user stories. The main NLP techniques involved in this process include:

Tokenization: Breaking down a text into individual words or tokens, making it easier to analyze and process.
Part-of-speech (POS) tagging: Assigning grammatical categories (nouns, verbs, adjectives, etc.) to each token in a given text.
Dependency parsing: Identifying the syntactic structure and relationships between the tokens in a text.
Named entity recognition (NER): Detecting and categorizing entities (people, organizations, locations, etc.) in a text.
Semantic analysis: Extracting the meaning and context from the text to understand the relationships between the entities and actions described in the requirements or user stories.

By using these techniques, NLP-based tools can process natural language inputs and automatically generate test cases based on the identified entities, actions, and conditions.
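To make the pipeline above concrete, here is a deliberately simplified sketch in JavaScript. Real tools use trained NLP models for POS tagging, dependency parsing, and NER; this snippet only mimics tokenization and entity/action extraction with keyword lists, and every name and string in it is invented for illustration:

// Hypothetical, simplified sketch: derive a test-case skeleton from a
// requirement written in natural language. Keyword lists stand in for a
// real NLP pipeline (POS tagging, NER, semantic analysis).
const requirement =
  "When a registered user submits an order above 100 euros, " +
  "the system applies a 5% discount and sends a confirmation email.";

// Tokenization: split the text into lowercase word tokens
const tokens = requirement.toLowerCase().match(/[a-z0-9%]+/g);

// Naive "entity" and "action" recognition via predefined keyword lists
const knownEntities = ["user", "order", "system", "discount", "email"];
const knownActions = ["submits", "applies", "sends"];
const entities = [...new Set(tokens.filter(t => knownEntities.includes(t)))];
const actions = tokens.filter(t => knownActions.includes(t));

// Generate a test-case skeleton from the recognized pieces
const testCase = {
  title: "should " + actions.join(", ") + " correctly for " + entities.join(", "),
  steps: actions.map(a => 'verify that the system handles "' + a + '" as specified')
};

console.log(JSON.stringify(testCase, null, 2));

Running this with Node.js prints a skeleton built from the recognized actions ("submits", "applies", "sends") and entities; a production tool would generate far richer test cases from the parsed structure of the sentence.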
Generating test cases this way not only reduces the time and effort needed for test case creation, but it also helps ensure that all relevant scenarios are covered, minimizing the chances of missing critical test cases.

Automating Test Documentation

One of the key aspects of software testing is maintaining accurate and up-to-date documentation that outlines test plans, test cases, and test results. This documentation is crucial for understanding the state of the application and ensuring that all requirements have been met. However, manually maintaining test documentation can be time-consuming and error-prone. NLP can be used to automate test documentation by extracting relevant information from test cases and test results and generating human-readable reports. This process may involve the following NLP techniques:

Text summarization: Creating a condensed version of the input text, highlighting the key points while maintaining the original meaning.
Text classification: Categorizing the input text based on predefined labels or criteria, such as the severity of a bug or the status of a test case.
Sentiment analysis: Analyzing the emotional tone or sentiment expressed in the text, which can be useful for understanding user feedback or bug reports.
Document clustering: Grouping similar documents together, making it easier to organize and navigate the test documentation.

By automating the documentation process, NLP-based tools can ensure that the test documentation is consistently up-to-date and accurate, reducing the risk of miscommunication or missed issues.

Real-World Applications

Several organizations have already started incorporating NLP into their software testing processes, with promising results. Some examples of real-world applications include:

IBM's Requirements Quality Assistant (RQA)
RQA is an AI-powered tool that uses NLP techniques to analyze requirements documents and provide suggestions for improving their clarity, consistency, and completeness. By leveraging NLP, RQA can help identify potential issues early in the development process, reducing the likelihood of costly rework and missed requirements.

Testim.io
Testim is an end-to-end test automation platform that uses NLP and machine learning to generate, execute, and maintain tests for web applications. By understanding the application's user interface (UI) elements and their relationships, Testim can automatically create test cases based on real user interactions, ensuring comprehensive test coverage.

QTest by Tricentis
QTest is an AI-driven test management tool that incorporates NLP techniques to automate the extraction of test cases from user stories or requirements documents. QTest can identify entities, actions, and conditions within the text and generate test cases accordingly, streamlining the test case creation process.

Challenges and Future Outlook

While NLP holds great promise for automating test case creation and documentation, there are still challenges to overcome. One major challenge is the ambiguity and complexity of natural language. Requirements and user stories can be written in many different ways, with varying levels of detail and clarity, making it difficult for NLP algorithms to consistently extract accurate and relevant information. Additionally, the accuracy and efficiency of NLP algorithms depend on the quality and quantity of the training data. As software testing is a domain-specific area, creating high-quality training data sets can be challenging and time-consuming.
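Before turning to the outlook, the documentation side discussed above can be sketched in the same simplified spirit: text classification reduced to grouping results under a predefined status label, and summarization reduced to a templated, condensed report. The data shape and messages below are invented for the example:

// Hypothetical, simplified sketch: generate a human-readable summary
// report from raw test results.
const results = [
  { name: "login succeeds with valid credentials", status: "passed" },
  { name: "discount applied for loyal customers", status: "failed", error: "expected 95, got 100" },
  { name: "order confirmation email sent", status: "passed" }
];

// "Classification": group results by their status label
const byStatus = results.reduce((groups, r) => {
  (groups[r.status] = groups[r.status] || []).push(r);
  return groups;
}, {});

// "Summarization": condense the grouped results into a short report
const report = [
  "Test run summary: " + results.length + " tests, " +
    (byStatus.passed || []).length + " passed, " +
    (byStatus.failed || []).length + " failed."
];
for (const failure of byStatus.failed || []) {
  report.push("FAILED: " + failure.name + " (" + failure.error + ")");
}

console.log(report.join("\n"));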
Despite these challenges, the future outlook for NLP in software testing remains optimistic. As NLP algorithms continue to improve and mature, the integration of NLP into software testing tools is expected to become more widespread. Moreover, the combination of NLP with other AI techniques, such as reinforcement learning and computer vision, has the potential to further enhance the capabilities of automated testing solutions.

Summary

Natural Language Processing (NLP) offers a promising approach to automating test case creation and documentation in software testing. By harnessing the power of NLP techniques, software testing tools can efficiently process and understand requirements written in natural language, automatically generate test cases, and maintain up-to-date documentation. This has the potential to significantly reduce the time, effort, and costs associated with traditional manual testing processes. Real-world applications, such as IBM's RQA, Testim.io, and QTest by Tricentis, have demonstrated the value of incorporating NLP into software testing workflows. However, there are still challenges to be addressed, such as the ambiguity and complexity of natural language and the need for high-quality, domain-specific training data. As NLP algorithms continue to advance, the role of NLP in software testing is anticipated to expand, and combining NLP with other AI techniques may lead to even more efficient and effective software testing processes. To summarize, the integration of NLP in software testing holds great promise for improving the efficiency and effectiveness of test case creation and documentation, and as the technology continues to evolve and mature, it is expected to play an increasingly important role in the future of software testing, ultimately transforming the way we test and develop software applications.
In this article, I will look at Specification by Example (SBE) as explained in Gojko Adzic's book of the same name. It's a collaborative effort between developers and non-developers to arrive at textual specifications that are coupled to automated tests. You may also have heard of it as behavior-driven development or executable specifications. These are not synonymous concepts, but they do overlap.

It's a common experience in any large, complex project: crucial features do not behave as intended. Something was lost in the translation between intention and implementation, i.e., between business and development. Inevitably, we find that we haven't built quite the right thing. Why wasn't this caught during testing? Obviously, we're not testing enough, or we're testing the wrong things. Can we make our tests more insightful?

An enthusiastic developer and SBE adept jumps at the challenge. Didn't you know you can write all your tests in plain English? Haven't you heard of the Gherkin syntax? She demonstrates the canonical Hello World of executable specifications, using Cucumber for Java.

Gherkin
Scenario: Items priced 100 euros or more are eligible for a 5% discount for loyal customers
  Given Jill has placed three orders in the last six months
  When she looks at an item costing 100 euros or more
  Then she is eligible for a 5% discount

Everybody is impressed. The Product Owner greenlights a proof of concept to rewrite the most salient tests in Gherkin. The team will report back in a month to share their experiences. The other developers brush up their Cucumber skills but find they need to write a lot of glue code. It's repetitive and not very DRY. Like the good coders they are, they make it more flexible and reusable.

Gherkin
Scenario: discount calculator for loyal customers
  Given I execute a POST call to /api/customer/12345/orders?recentMonths=6
  Then I receive a list of 3 OrderInfo objects
  And a DiscountRequestV1 message for customer 12345 is put on the queue /discountcalculator
  [ you get the message ]

Reusable, yes; readable, no. They're right to conclude that this textual layer offers nothing other than more work. It has zero benefits over traditional code-based tests. Business stakeholders show no interest in these barely human-readable scripts, and the developers quickly abandon the effort.

It's About Collaboration, Not Testing

The experiment failed because it tried to fix the wrong problem. It failed because better testing can't repair a communication breakdown on the road from intended functionality to implementation. SBE is about collaboration; it is not a testing approach. You need this collaboration to arrive at accurate and up-to-date specifications. To be clear, you always have a spec (like you always have an architecture). It may not always be a formal one. It can be a mess that only exists in your head, which is only acceptable if you're a one-person band. In all other cases, important details will get lost or confused in the handover between disciplines. The word handover has a musty smell to it, reminiscent of old-school Waterfall: the go-to punchbag for everything we did wrong in the past, but also an antiquated approach that few developers under the age of sixty have any real experience with. Today we're Agile and multi-disciplinary. We don't have specialists who throw documents over the fence of their silos. Yet it is more nuanced than that, now as well as in 1975. Waterfall didn't prohibit iteration; you could always go back to an earlier stage.
Likewise, a modern multi-disciplinary team is not a fungible collection of Jacks and Jills of all trades. Nobody can be a Swiss army knife of IT skills and business domain knowledge. But one enduring lesson from the past is that we can't produce flawless and complete specifications of how the software should function before writing its code. Once you start developing, specs always turn out over-complete in places, incomplete in others, and just plain wrong elsewhere. Specs have bugs, just like code. You make them better with each iteration. Accept that you may start off incomplete and imprecise.

You Always Need a Spec

Once we have built the code according to spec (whatever form that takes), do we still need the spec, like an architect's drawing after the house has been built? Isn't the ultimate truth already in the code? Yes, it is, but only at a granular level, and only accessible to those who can read it. It gives you detail, but not the big picture. You need to zoom out to comprehend the why. Here's where I live: a map of just my village is the equivalent of source code. Only people who have heard of the Dutch village of Heeze can relate that map to the world; it's missing the context of larger towns and a country. The next map zooms out only a little, but with the context of the country's fifth-largest city it's recognizable to all Dutch inhabitants. The next map should be universal: even if you can't point out the Netherlands on a map, you must have heard of London.

Good documentation provides a hierarchy of such maps, from global and generally accessible down to more detailed, requiring more domain-specific knowledge. At every level, there should be sufficient context about the immediately connecting parts. If there is a handover at all, it's never of the kind: "Here's my perfect spec. Good luck, and report back in a month." It's the finalization of a formal yet flexible document, created iteratively with people from relevant disciplines in an ongoing dialogue throughout the development process. It should be versioned and tied to the software it describes. Hence, the only logical place for it is the source code repository, at least for specifications that describe a well-defined body of code: a module or a service. Such specs can rightfully be called the ultimate source of truth about what the code does, and why. Because everybody was involved and invested, everybody understands it and can (in their own capacity) help create and maintain it.

However, keeping versioned specs with your software is no automatic protection against mismatches, where changes to the code don't reflect the spec and vice versa. Therefore, we make the spec executable by coupling it to test code that exercises the code the spec covers and validates the results. It sounds so obvious and attractive. Why isn't everybody doing it if there's a world of clarity to be gained? There are two reasons: it's hard, and you don't always need SBE. We routinely overestimate the importance of the automation part, which puts the onus disproportionately on the developers. It may be a deliverable of the process, but only the collaboration can make it work.

More Art Than Science

Writing good specifications is hard, and it's more art than science. If there ever was a need for clear, unambiguous, SMART writing, executable specifications fit the bill. Not everybody has a talent for it. As a developer with a penchant for writing, I flatter myself that I can write decent spec files on my own.
But I shouldn't – not without at least a good edit from a business analyst. For one, I don't know when my assumptions are off the mark, and I can't always stop technocratic wording from creeping in. A process that I favor and find workable is for a businessperson to draft acceptance criteria, which form the input to features and scenarios. Together with a developer, these are refined: adding clarity and edge cases, removing duplication and ambiguity. Only then are they rigorous enough to be turned into executable spec files.

Writing executable specifications can be tremendously useful for some projects and a complete waste of time for others. It's not at all like unit testing in that regard. Some applications are huge but computationally simple. These are the enterprise behemoths with their thousands of endpoints and hundreds of database tables. Their code is full of specialist concepts specific to the world of insurance, banking, or logistics. What makes these programs complex and challenging to grasp is the sheer number of components and the specialist domain they relate to. The math in fintech often isn't that challenging: you add, subtract, multiply, and watch out for rounding errors. SBE is a good candidate for making the complexity of all these interfaces and edge cases manageable.

Then there's software with a very simple interface behind which lurks some highly complex logic. Consider a hashing algorithm, or any cryptographic code, that needs to be secure and performant. Test cases are simple: you can tweak the input string, the seed, and the number of log rounds, but that's about it. Obviously, you should test for performance and resource usage, but all of that is best handled in code-based tests, not Gherkin. This category of software is the world of libraries and utilities. Their concepts stay within the realm of programming and IT and relate less directly to concepts in the real world. As a developer, you don't need a business analyst to explain the why; you can be your own. No wonder so many Open Source projects are of this kind.
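As a closing footnote to the Cucumber episode earlier in this article: the demonstration there used Cucumber for Java, but to illustrate what thin, readable glue code can look like, here is a hypothetical sketch in the JavaScript flavor (cucumber-js) for the original loyalty-discount scenario. The step texts come from that scenario; the domain logic is stubbed inline and invented for the example:

const { Given, When, Then } = require("@cucumber/cucumber");
const assert = require("assert");

// Stubbed domain logic, invented for the example
function isEligibleForDiscount(customer, item) {
  return customer.recentOrders >= 3 && item.price >= 100;
}

Given("Jill has placed three orders in the last six months", function () {
  this.customer = { name: "Jill", recentOrders: 3 };
});

When("she looks at an item costing 100 euros or more", function () {
  this.item = { price: 100 };
});

Then("she is eligible for a 5% discount", function () {
  assert.strictEqual(isEligibleForDiscount(this.customer, this.item), true);
});

The point of the article stands either way: glue code like this is only worth maintaining when the textual layer above it gives business stakeholders something they can actually read and help shape.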
TestOps is an emerging approach to software testing that combines the principles of DevOps with testing practices. TestOps aims to improve the efficiency and effectiveness of testing by incorporating it earlier in the software development lifecycle and automating as much of the testing process as possible. TestOps teams typically work in parallel with development teams, focusing on ensuring that testing is integrated throughout the development process. This includes testing early and often, using automation to speed up the testing process, and creating a cycle of continuous testing and improvement. TestOps also works closely with operations teams to ensure that the software is deployed in a stable and secure environment. In short, TestOps is an approach to software testing that emphasizes collaboration between the testing and operations teams to improve the overall efficiency and quality of the software development and delivery processes.

The Need for TestOps

Adopting DevOps brings challenges of its own, and these are precisely what TestOps has to contend with:

Initial Investment
Adopting DevOps requires an initial investment of time, resources, and money. This can be a significant barrier to adoption for some organizations, particularly those with limited budgets or resources.

Learning Curve
DevOps requires a significant cultural shift in the way that teams work together, and it can take time to learn new processes, tools, and techniques. This can be challenging for some organizations, particularly those with entrenched processes and cultures.

Security Risks
DevOps practices can increase the risk of security vulnerabilities if security measures are not properly integrated into the development process. This can be particularly problematic in industries with strict security requirements, such as finance and healthcare.

Automation Dependencies
DevOps relies heavily on automation, which can create dependencies on tools and technologies that may be difficult to maintain or update. This can lead to challenges in keeping up with new technologies or changing requirements.

Cultural Resistance
DevOps requires a collaborative and cross-functional culture, which may be difficult to achieve in organizations with siloed teams or where there is resistance to change.

Advantages of TestOps

Continuous Testing
TestOps enables continuous testing, which allows organizations to detect defects early in the development process. This reduces the cost and effort required to fix defects and ensures that software applications can be delivered with high quality.

Improved Quality
By integrating testing processes into the DevOps pipeline, TestOps ensures that quality is built into software applications from the outset. This reduces the risk of defects and improves the overall quality of the software.

Greater Efficiency
TestOps enables the automation of testing processes, which can help organizations reduce the time and effort required to test software applications. This can also reduce the costs associated with testing.

Increased Collaboration
TestOps promotes collaboration between development and testing teams, which can help identify and resolve issues earlier in the development process. This can lead to faster feedback and better communication between teams.

Faster Time-to-Market
TestOps allows the automation of testing processes, which reduces the time required to test software applications. This enables organizations to release software applications faster, which can give them a competitive advantage in the marketplace.
Scope of TestOps in the Future

The scope of TestOps in the future is significant, as software development continues to become more complex and fast-paced. Because TestOps combines software testing with DevOps practices, it is becoming increasingly important for organizations to implement it so they can deliver high-quality software applications to market quickly. Some of the trends that are likely to shape the future of TestOps include:

Increasing Adoption of Agile and DevOps Methodologies
Agile and DevOps methodologies are becoming increasingly popular among organizations that want to deliver software applications faster and more efficiently. TestOps is a natural extension of these methodologies and is likely to become an essential component of Agile and DevOps practices in the future.

Greater Focus on Automation
Automation is a critical aspect of TestOps and will likely become even more important in the future. The use of automation tools and techniques can help organizations reduce the time and effort required to test software applications while also improving the accuracy and consistency of testing.

The Growing Importance of Cloud Computing
Cloud computing is becoming increasingly popular among organizations that want to reduce their IT infrastructure costs and improve scalability. TestOps can be implemented in cloud environments, and it is likely to become even more important as more organizations move their software applications to the cloud.

Overall, the scope of TestOps in the future is vast, and it is likely to become an essential component of software development practices in the coming years.

Conclusion

Is TestOps the future of software testing? Obviously, yes. With the increasing adoption of Agile and DevOps methodologies, there is a growing need for software testing processes that can keep pace with rapid development and deployment cycles. TestOps can help organizations achieve this by integrating testing into the software development lifecycle and ensuring that testing is a continuous and automated process. Furthermore, as more and more software is deployed in cloud environments, TestOps will become even more important in ensuring that applications are secure, scalable, and reliable. In summary, TestOps is a key trend in software testing that is likely to continue to grow as organizations look for ways to improve the efficiency and quality of their software development and delivery processes.
Justin Albano
Software Engineer,
IBM
Thomas Hansen
CTO,
AINIRO.IO
Soumyajit Basu
Senior Software QA Engineer,
Encora
Vitaly Prus
Head of software testing department,
a1qa