The final step in the SDLC, and arguably the most crucial, is the testing, deployment, and maintenance of development environments and applications. DZone's category for these SDLC stages serves as the pinnacle of application planning, design, and coding. The Zones in this category offer invaluable insights to help developers test, observe, deliver, deploy, and maintain their development and production environments.
In the SDLC, deployment is the final lever that must be pulled to make an application or system ready for use. Whether it's a bug fix or new release, the deployment phase is the culminating event to see how something works in production. This Zone covers resources on all developers’ deployment necessities, including configuration management, pull requests, version control, package managers, and more.
The cultural movement that is DevOps — which, in short, encourages close collaboration among developers, IT operations, and system admins — also encompasses a set of tools, techniques, and practices. As part of DevOps, the CI/CD process incorporates automation into the SDLC, allowing teams to integrate and deliver incremental changes iteratively and at a quicker pace. Together, these human- and technology-oriented elements enable smooth, fast, and quality software releases. This Zone is your go-to source on all things DevOps and CI/CD (end to end!).
A developer's work is never truly finished once a feature or change is deployed. There is always a need for constant maintenance to ensure that a product or application continues to run as it should and is configured to scale. This Zone focuses on all your maintenance must-haves — from ensuring that your infrastructure is set up to manage various loads and improving software and data quality to tackling incident management, quality assurance, and more.
Modern systems span numerous architectures and technologies and are becoming exponentially more modular, dynamic, and distributed in nature. These complexities also pose new challenges for developers and SRE teams that are charged with ensuring the availability, reliability, and successful performance of their systems and infrastructure. Here, you will find resources about the tools, skills, and practices to implement for a strategic, holistic approach to system-wide observability and application monitoring.
The Testing, Tools, and Frameworks Zone encapsulates one of the final stages of the SDLC as it ensures that your application and/or environment is ready for deployment. From walking you through the tools and frameworks tailored to your specific development needs to leveraging testing practices to evaluate and verify that your product or application does what it is required to do, this Zone covers everything you need to set yourself up for success.
Kubernetes in the Enterprise
In 2022, Kubernetes became a central component of containerized applications, and it is nowhere near its peak. In fact, based on our research, 94 percent of survey respondents believe that Kubernetes will be a bigger part of their system design over the next two to three years. With Kubernetes becoming more entrenched in systems, what do adoption and deployment methods look like compared to previous years? DZone's Kubernetes in the Enterprise Trend Report provides insights into how developers are leveraging Kubernetes in their organizations. It focuses on the evolution of Kubernetes beyond container orchestration, advancements in Kubernetes observability, Kubernetes in AI and ML, and more. Our goal for this Trend Report is to help inspire developers to leverage Kubernetes in their own organizations.
Getting Started With OpenTelemetry
It hasn't been smooth sailing for startups this year. As this week's guest Nick Cobb puts it, "You can add bank runs to the list of things founders have to deal with." Of course, it hasn't been easy going for engineering leaders either. That's why Nick, the VP of Engineering and Head of Product at Kyte, sat down with us to discuss how to build an engineering culture with a bias toward action, why he deleted his team's staging environment, and what it takes to outmaneuver his former employer, Uber. An angel investor, Nick also touches on the aftershocks of the SVB crash and its lasting effects on the startup community. Recorded live at LeadDev New York, this episode is a must-listen for engineering leaders who want to place product innovation in the driver's seat of their engineering org.

Episode Highlights

(2:21) Impact of SVB on the startup ecosystem
(7:59) Is now the perfect time to invest in an early-stage startup?
(13:02) Metrics to pay attention to
(19:33) Uber's lack of innovation
(24:44) Dogfooding culture
(27:30) What's happening in self-driving?
(30:23) Leading engineering teams at Kyte
(41:09) Postmortem culture

Episode Excerpt

Nick: I have a lot of friends who worked there, probably some of the best engineering culture in the world, and those folks are gone. I think that the way the company has been managed lately, like the level of innovation, has significantly backtracked, you know, a lot of folks talking about App redesign and stuff like that. I mean, that's been four years in the works to create this kind of Super App. And I'm not sure that when I opened the Uber app, or customers open the Uber app, they think about getting every single thing through Uber. And I think it's a bet. It's, you know, I think it's courageous to take that kind of bet and really look for that. But I'm not sure that that strategy wouldn't have been more effective, implemented faster, to kind of, like, get more people thinking that way.
You go into the pandemic, and you really have to kind of burn a lot of the businesses and the core bets that really made Uber successful in a lot of different ways. Like, if you don't take moonshots, you don't get the microwave, you don't get, you know, tinfoil, you don't get, you know, the tang and all these other things that came out of that, these great innovations that we have as humans, because we're taking moonshots. And if you're not taking those, it does impact the culture overall. And you know, how people feel about what they're building. And I think, you know, the business has, has just kind of like reacted to more of the market need and less of maybe the customer need. And I think, you know, that's an advantage startup has, you know, build for speed and focus, listen to your customers and build what they want. You're not doing that, and I don't know what you're doing right. Conor: If I put you in charge of movers product and engineering teams, and it's like, suddenly, Nick, hey, and you're the new head of all engineering products at Uber, you've got this like carte blanche to reimagine how Uber's taking their approach. What would you do to try to inject more of that innovation DNA or improve what you're seeing as far as outcomes? Nick: Yeah, I mean, I think that the dogfooding culture seems to have disappeared. And I think that's the big thing. We're creating that at Kyte, and we want everybody internally to use the product as often as possible. The first thing that I would want to fix, just like personally, is dispatch latency in San Francisco. Conor: You're a San Francisco native, right? Nick: I'm from Memphis, but I live in San Francisco, and I use the product and, you know, I just dropped a general kind of observation on Twitter and said, it feels like dispatch latency is like, three, four minutes when I request the car, I can kind of leave my phone and just go get a coffee, go do something else. 
And, you know, there was a realization that we had at Uber back in the day, which was people start to demand a, like, a better experience year over year because they get used to the wait times being lower. And they so they anticipate that you know — Conor: It's gone the other direction, it feels like. Nick: Yes. And I think it's gone in the other direction. And my hypothesis is they've, they've integrated taxi into the supply, and they're doing that for, you know, core business supply constrained reasons, I think, and partnership as well. And yeah, I mean, every time I get a taxi, it's a 10-minute dispatch, it takes forever, and I'm upset with the product experience right now. So that's something I would focus on.
Are you looking to get away from proprietary instrumentation? Are you interested in open-source observability but lack the knowledge to just dive right in? This workshop is for you, designed to expand your knowledge and understanding of the open-source observability tooling that is available to you today. Dive right into a free, online, self-paced, hands-on workshop introducing you to Prometheus. Prometheus is an open-source systems monitoring and alerting toolkit that enables you to hit the ground running with discovering, collecting, and querying your observability data today. Over the course of this workshop, you will learn what Prometheus is (and what it is not), install it, start collecting metrics, and pick up everything you need to know to run Prometheus effectively in your observability stack. Previously, I shared an introduction to Prometheus, installing Prometheus, an introduction to the query language, and exploring basic queries as free online workshop labs. In this article, you'll continue your journey with advanced Prometheus queries in PromQL. Note: this article is only a short summary, so please see the complete lab found online here to work through it in its entirety yourself. The following is a short overview of what is in this specific lab of the workshop. Each lab starts with a goal. In this case, it is fairly simple: this lab takes you deeper into PromQL, expanding your query toolbox with more advanced queries for visualizing collected metrics data. The lab begins with a review of how you've built and executed basic PromQL queries so far, using the default Prometheus console expression browser and graphs.
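For readers who want to follow along outside the lab environment, a scrape target is all Prometheus needs to start collecting metrics. The following is a minimal sketch of a prometheus.yml configuration — not taken from the workshop itself — that has Prometheus scrape its own metrics endpoint:

```yaml
# Minimal Prometheus configuration: scrape Prometheus' own /metrics endpoint
global:
  scrape_interval: 15s   # how often to scrape each target

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
```

With this file in place, `./prometheus --config.file=prometheus.yml` starts the server, and the `up` metric for the job appears within one scrape interval.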
For this lab, you'll be diving deeper into PromQL, and, to broaden your knowledge of the tooling available, you'll install, configure, and query using an open-source query tool called PromLens. This is one of the best assistants you can find to help you build and understand what you are querying while seeing the results directly. Installing PromLens Your first task is to install PromLens on your machine. PromLens is a standalone tool for learning PromQL that displays insights into the queries you are running. To test out your new installation, you dive right into the concept of nested queries. A PromQL expression is often not a single query but a set of nested expressions, each one evaluated and used as an argument or operand to the expression above it in the nested structure. You run examples of nested queries and explore their results using the teaching aspects of PromLens in the explainer tab. When you enter this query: rate(demo_api_request_duration_seconds_count{job="services"}[5m]) PromLens explains each part of the query. These explanations are extremely valuable when you are first starting out and trying to master a complex functional language like PromQL. Language Theory Before diving in further, you explore some of the language theory and definitions that are crucial to learning to use PromQL effectively. There are two type concepts when talking about querying Prometheus, and it's crucial that you understand the difference: Metric type - as reported by a scraped target (counter, gauge, histogram, summary, or untyped). Results type - the data type of a PromQL expression (string, scalar, instant vector, or range vector). PromQL has no concept of metric types. It's only concerned with expression result types. Each PromQL expression has a type, and each function, operator, or other type of operation requires its arguments to be of a certain expression type.
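To make the nesting concrete, here is a sketch of how expressions stack, using the same demo metric as the lab. The outer sum by(path) aggregation layer is my own addition for illustration and assumes the metric carries a path label:

```promql
sum by(path) (                                                    # 3. aggregate the rates across series, keeping "path"
  rate(                                                           # 2. per-second rate over each window, an instant vector
    demo_api_request_duration_seconds_count{job="services"}[5m]   # 1. innermost: a range vector selector
  )
)
```

Each numbered layer is evaluated first and fed upward as an operand, which is exactly the structure PromLens visualizes in its explainer tab.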
Not only do expression types exist, but there are also 10 different node types: the kinds of queries or expressions you can write. Here is the list with details about each one:

Number literals: 6.45
String literals: "hello o11y" - occur infrequently, used as parameter values to functions
Instant vector selectors: some_metric{job="services"} - explained previously
Range vector selectors: some_metric{job="services"}[15m] - explained previously
Aggregation: sum by(job) (some_metric) - allows aggregating over multiple series, always yields an instant vector
Unary operators: -some_metric - negates any scalar or instant vector values, returns the same type it was applied on
Binary operators: some_metric_1 + some_metric_2 - returns a scalar if both operands are scalars, otherwise a vector
Function calls: rate(some_metric[15m]) - takes input parameters of varying types, returns varying types
Sub-queries: (expression)[1d:] - takes an instant vector expression as input, returns a range vector
Parentheses expressions: (42) - may return a string, scalar, instant vector, or range vector, depending on usage

Feels like we are entering the realm of mathematics, and you might even remember some of this theory from your computer science courses at university, no? Don't worry: only this short foundational theory is covered before you jump right back into the hands-on application of it all.
Advanced Topics You jump right into the more advanced topics, such as:

histograms and quantiles, and how to calculate latency with them
aggregating away extra metric dimensions (cardinality problems)
applying filters
creating queries with thresholds for alerting rules
filtering with time series data and with booleans
exploring the set operators available to you (AND, OR, UNLESS)
exploring and manipulating metrics with timestamps
setting up detection queries to discover slow batch jobs
setting up a second services demo instance to explore how to query for running instance health in your infrastructure
smoothing out the spiky graphs you generate with complex queries

This last one was pretty fun to see, so let's slow down here and share the spiky graph-generating query: go_goroutines{job="services"} This indeed produces a pretty ugly graph. To make the graph more useful, you smooth it out using averages over time as follows: avg_over_time(go_goroutines{job="services"}[10m]) This sorts the graph out into something you can make sense of. There is so much to learn in this lab that it does not fit into an article, so make sure to take your time running through it, and you'll soon be writing advanced queries to solve all kinds of observability issues! Missed Previous Labs? This is one lab in the more extensive free online workshop. Feel free to start from the very beginning of this workshop here if you missed anything previously. You can always proceed at your own pace and return any time you like as you work your way through this workshop. Just stop and later restart Perses to pick up where you left off. Coming Up Next I'll be taking you through the following lab in this workshop, where you'll start to explore open dashboards and visualization of your metrics data. Stay tuned for more hands-on material to help you with your cloud-native observability journey.
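As one example of the histogram-and-quantile material, the classic latency query combines several of the node types described earlier. This sketch assumes the demo services expose the usual _bucket series alongside the counter metric seen earlier in the lab:

```promql
# Approximate 95th-percentile request latency over the last 5 minutes,
# computed from cumulative histogram buckets aggregated by the "le" label
histogram_quantile(0.95,
  sum by(le) (
    rate(demo_api_request_duration_seconds_bucket{job="services"}[5m])
  )
)
```

The sum by(le) step matters: histogram_quantile needs the bucket boundaries (the le label) preserved while every other dimension is aggregated away.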
System testing, also known as system-level testing, involves evaluating how the various components of an application interact in a fully integrated system. It is carried out on the entire system against either functional or design requirements. It helps you find flaws, gaps, or unmet needs in the software application's overall functionality, and it validates the system's design, behavior, and alignment with customer expectations. The Software Requirements Specification (SRS) defines the restrictions within which the system must operate. Both functional and non-functional System testing can be performed internally by your company; alternatively, you can hire professionals with in-depth knowledge of the complete procedure. It is important to note that System testing is performed by a team separate from the development team. It aims to measure the quality of the system objectively. In this System testing tutorial, learn why System testing is important and all the intricacies of the System testing process. What Is System Testing? System testing helps you understand how end users will utilize the software application and what issues they might encounter. Understanding the application is just as crucial as having a requirements document. System testing is typically carried out by software testing engineers in a context comparable to production, permitting the developers and other relevant parties to analyze user responses. The flowchart below shows where System testing happens in the software development lifecycle. It is important to follow a specific order when testing software. Following is a chronologically organized list of software testing categories. Unit testing: During development, each module or block of code is subjected to unit testing. Most of the time, the programmer who writes the code is responsible for unit testing.
Integration testing: It occurs before, during, and after integrating a new module into the main software package. This includes testing each code module. A single piece of software may comprise various modules, which are frequently built by multiple developers. System testing: A highly skilled software tester performs System testing on the entire software application before it is released to the general public. Acceptance testing: Acceptance testing and beta testing of a software application are performed by real end users. Why Should You Consider System Testing? The flowchart above shows that System testing comes after integration testing but before acceptance testing. The outcome of System testing is the observed behavior of the complete component or system, and it's crucial to examine that behavior thoroughly before making a software application available to users. System testing ensures that the system satisfies its goals and specifications. An entire test cycle must run to completion, and System testing is where you can accomplish that. It determines whether the system performs as designed. System testing is a procedure that confirms accuracy and checks for completeness. Since System testing is carried out in a setting akin to the production environment, stakeholders can accurately predict the user's response. One of its key goals is to ensure that the system complies with its requirements and operates as anticipated. Scope of System Testing Until now, in this System testing tutorial, we have gotten a good gist of what System testing is. Let's look at its scope. Implementing a System testing methodology lays out a procedure for confirming the functionality of a fully fledged, integrated system to prevent defects. It aids in reducing troubleshooting and support calls made after deployment, making sure to find flaws, inadequacies, or unmet requirements.
System testing also helps determine whether the system complies with the specifications listed in the system specification. In System testing, the business requirements and the application architecture pass through tests to identify and fix problems ahead of time, helping decrease risks and ensure smooth operation. As a result, this testing is crucial and helps ensure that the consumer receives a high-quality software application. Basic Requirements of System Testing A system test's objective is to assess the complete system requirements. System testing consists of several tests to exercise the entire computer-based system. To achieve your end goal, you must account for the factors listed below while executing system test operations. Industry type: To comprehend the testing process and ensure you have the resources to do the assignment, be aware of the industry vertical to which your organization belongs. An organization that chooses a more practical methodology can employ more automated analyses, such as functionality evaluation. Rather than using a conventional analysis like exploratory testing, an organization may choose to adopt a user's perspective to discover flaws. The testing time needed: System testing may execute several tasks and take more time to complete in some contexts, particularly when regular monitoring becomes necessary. You must be aware of the time commitment you can make to testing. It will offer you a realistic notion of the progress and assist you in arranging your work. An organization may prioritize system tests that require minimal steps if its product release deadlines are shorter, or it may adjust testing procedures to accommodate its specifications. Resources at your disposal for testing: As previously said, when making plans for System testing, you must consider your test team's size, expertise, and experience.
You may need to train your current personnel or hire extra testers based on the size and complexity of your application. Background of tester: Planning for System testing should consider the testers' experience. The test cycle could take longer to complete if the testers are inexperienced; with previous experience, it will take less time. Entire testing expense: When developing your test strategy, it's crucial to keep the entire cost of testing in mind. System testing may be a costly procedure. The size and complexity of the system, the number of test cases needed, and the time and resources required to carry out the tests are just a few of the numerous variables that might affect the cost of System testing. What Do You Verify in System Testing? The software application code is tested as part of System testing for the following areas: Checking how components interact with one another and the system as a whole. This involves testing the fully integrated programs with external peripherals. This scenario is also known as end-to-end testing. Checking each input to the program thoroughly for the intended results. Evaluating the application's user experience. This gives a very brief overview of what goes into System testing. You must create thorough test cases and test suites that evaluate every program component from the outside, without access to the source code. Positive Aspects of System Testing Each software testing method available today has its own learning curve, and a tester must learn how to operate some of the software involved. Large businesses employ different strategies than medium-sized and small businesses. This testing examines the business needs as well as the application architecture. Testers don't need additional programming expertise to execute the System testing methodology. If this testing is carried out methodically and correctly, it will reduce post-production problems.
It will test the complete piece of hardware or software, allowing us to quickly find any flaws or issues that slipped through integration and unit testing. The test environment is comparable to live, working, or commercial settings. End-to-end testing of the system is part of this testing. System testing is carried out in an environment that closely matches the production environment, which aids in understanding the user's perspective and helps avoid problems arising after the system goes live. It also addresses users' technical and business needs while using various test scripts to verify the system's complete performance. Following System testing, the software application will have practically all potential flaws or faults fixed, allowing the development team to move on to acceptance testing safely. Phases of System Testing System testing can be more successful with a focused and clear requirements document that includes the most recent modifications and an understanding of real-time application usage. Clear requirements make for smooth functional performance and help fulfill security and recoverability expectations. Further, the tester must also be aware of how the program behaves in practice on the OS versions used. Depending on the development lifecycle model, you can perform Component testing in isolation from other system components. Isolation is used to keep external influences away. So, to test a component, you need to simulate the interaction between the software application's components using stubs and drivers. From the flowchart below, you can easily understand the different phases of System testing. Analyzing the requirements to build the test environment: Setting up the test environment comprises identifying the frameworks, programming language(s), and testing tools the tester will use and establishing any necessary dependencies and configurations.
Creating a test case: It covers all the precise information regarding what you must test and how to conduct the test. The test case document should also include what counts as a pass or fail for each test. Development of test data: Test data generation comes after the test case. Test data should cover both favorable and unfavorable inputs and outputs. One must be extremely careful when gathering test data to include all necessary fields and not overlook any significant area. Implementing the test case: After that, you should have a plan for running the test case to produce an output. The output will indicate whether the test case was successful or unsuccessful. Bug reporting: The test case should demonstrate how the system responds to an error or flaw. You must understand the reporting and resolution of issues to plan System testing appropriately. Regression testing: A tester opts for regression testing to avoid the broken-functionality problems that may arise due to new features. Various tools are available to automate this process. Fixing errors: Having a strategy in place to address flaws is also crucial. Although it is difficult to manage all faults, success depends on having a procedure to identify and fix them. Retesting: The development team should fix a fault after it has been discovered and recorded by a tester, and the tester should then confirm that the fix functions as intended. If you don't do this, you risk your software having undetected flaws. When necessary, you must prepare for either complete or selective retesting. System Testing Types System testing is a superset of all testing types, since it includes all the major kinds of testing, although the emphasis on particular testing methods may change depending on the product, organizational procedures, deadlines, and requirements.
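The phases above — test case, test data, execution, and bug reporting — can be sketched in a few lines of code. This is a minimal illustration, not part of the tutorial; the LoginSystem stub and the test cases are hypothetical names invented for the example:

```python
# A minimal sketch of the System testing phases: test cases with explicit
# test data and pass/fail criteria, executed against a stub of the system
# under test, producing a simple report for bug tracking.

class LoginSystem:
    """Stub standing in for the fully integrated system under test."""
    def login(self, user, password):
        return user == "alice" and password == "secret"

# Test data: each case pairs inputs with an expected outcome (pass criterion).
TEST_CASES = [
    {"id": "TC-01", "user": "alice", "password": "secret", "expected": True},
    {"id": "TC-02", "user": "alice", "password": "wrong",  "expected": False},
    {"id": "TC-03", "user": "",      "password": "secret", "expected": False},
]

def run_system_tests(system):
    """Execute every test case and record pass/fail for the bug report."""
    report = []
    for case in TEST_CASES:
        actual = system.login(case["user"], case["password"])
        report.append({
            "id": case["id"],
            "passed": actual == case["expected"],
            "actual": actual,
        })
    return report

if __name__ == "__main__":
    for result in run_system_tests(LoginSystem()):
        print(f'{result["id"]}: {"PASS" if result["passed"] else "FAIL"}')
```

In a real System test, the stub would be replaced by the fully integrated application, but the shape — explicit test data, explicit pass criteria, and a report fed into bug reporting — stays the same.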
Functionality testing: It determines whether the system, particularly its features, complies with the requirements, confirming that the product's functionality meets the specified standards while staying within the bounds of the system. During functional testing, testers can compile a list of potential extra features a product could have to enhance it. Functional testing can be carried out with both manual and automated methods. Recoverability testing: Testing for recoverability helps you see whether the system can bounce back from crashes, hardware failures, and other significant issues, determining how effectively the system recovers from different input errors and other failure scenarios. Additionally, it ensures that no problems from the past recur as more software modules are added over time. It gauges the system's resilience to unforeseen inputs and circumstances. Performance testing: Performance testing verifies that the components of your system meet their specified performance characteristics. It establishes whether a system achieves its performance goals, such as throughput or response time. Usability testing: In usability testing, the tester must examine the system's user-friendliness to guarantee that the system is simple to understand, operate, and use. Participants in the test must carry out some tasks with the system. Usability testing should scrutinize the overall experience and surface any issues that may arise while running the application. Regression testing: It ensures that new system modifications don't compromise existing functionality and is crucial to ensuring the system's stability as it integrates various subsystems and maintenance procedures. This testing compares the system's present behavior with a previous iteration. Hardware/Software testing: You can test both the hardware and software of a system using System testing. IBM refers to combined hardware and software testing as HW/SW testing.
Here, the system's operation is evaluated without considering its structural elements. The functionality of each piece of hardware is examined to ensure that it works as intended. Testing the system's wiring, power supply, and input/output components may be necessary, as may the functionality of each piece of software that makes up the evaluated system. System Testing: Is it White Box or Black Box Testing? System testing comes under the classification of the black-box test method. While the white-box technique requires internal code knowledge, the black-box testing technique does not. Testers cover functional and non-functional System testing, security testing, performance testing, and other relevant test cases. Here, components that passed the integration test are used as input and tested using the black-box methodology. The purpose of integration testing is to find discrepancies between the integrated units and the confirmed expected output. With that, let us look into the entry and exit criteria used to plan System testing.

Entry requirements:
- The system should have met the integration testing exit criteria: all test cases should have run to completion, with no open critical or high-priority bugs.
- The test plan should be approved and signed off.
- Scenarios and test cases should be executed as planned.
- A proper framework of designed test scripts must follow the implementation strategy.
- Test cases should exist for every non-functional requirement and be available.
- The test environment should be ready for execution.

Exit criteria: Shown below are a few of the exit criteria for the system.
- Execute every test case.
- There should be no open critical, high-priority, or security-related problems.
- If any medium- or low-priority issues remain outstanding, the developer should fix them with the customer's consent.
- The tester must submit the exit report to keep a record of the results.
Tools for System Testing Following is a list of tools that work well for System testing:

Robot: The Robot Framework comes with several pre-built tools and libraries, but you can also make your own. It is application and operating system independent. Although Java and .NET have interfaces in the standard library, the core framework is implemented in Python.
Galen: You can automate System testing using the Galen framework. It is simple to include in your development process, as the framework is open source.
Selenium: Selenium is an open-source test automation framework used to automate web application testing across different browsers and platforms. Testers can speed up testing cycles by automating repeated test cases with the Selenium framework. As part of a CI/CD pipeline, Selenium can also help ensure a stable and bug-free release deployment process.
JMeter: JMeter is a performance testing tool that helps you conduct performance testing of websites and web applications. It can simulate heavy demand on a network or other object to assess a server's durability or examine its overall performance under various load types.
LambdaTest: LambdaTest is a cloud-based testing platform that helps you perform System testing over its secure and scalable online Selenium Grid. With LambdaTest, you can automate cross-browser testing across 3000+ browser and OS combinations, resulting in increased test coverage and much faster build times. You can assess how effectively your web application renders across various browser and OS combinations. Using LambdaTest Tunnel, you can test locally hosted web pages. A single test can even be executed in parallel on several browser and OS configurations.

How System Testing Fits With Other QA Methods Software testing encompasses a variety of techniques with several involved processes. Applications have many components that need examination before being released.
Sometimes, IT professionals have trouble understanding the jargon of these QA testing approaches. System testing takes over the application after developers have integrated and tested all of its components. An application is practically ready for production at this point, and this process ensures that the code satisfies all specifications before going live. This test level involves a variety of non-functional tests, such as performance and load testing, to examine the application's customer-facing features.

An Example of a System Test Case

This section of the System testing tutorial will help you understand how System testing works through an example of an e-commerce website. First, a tester must check that the website loads smoothly, includes all the necessary functions, pages, and logos, and that a user can access the site and log in. If the user can see the available products, they must be able to add items to their cart, complete the purchase, and receive a confirmation via email, SMS, or phone call. Next, the tester must check that the site can accommodate the specified number of simultaneous users (as stated in the requirement specification). Further, the primary features, such as searching, filtering, sorting, adding, editing, and wishlists, must function as intended. A tester should also check that the website works well across all platforms, including Windows, Linux, and mobile; that it loads in all popular browsers and their recent versions; and that the text on the pages is correctly spaced, organized, and free of typos. Further, the tester verifies that the session timeout works as expected. Lastly, the tester must verify that the user manual or guide, return policy, privacy policy, and terms of service are all provided as separate documents, and that new or first-time users can download them, which helps assure users that transactions made on the website are safe. The end user must be pleased with the website after using it and should face no hiccups.
How to Perform System Testing Using LambdaTest?

There are many ways to perform System testing: manually or through automation. To streamline your testing process and avoid the hassle of maintaining in-house device labs, you can use a cloud-based platform like LambdaTest. It provides simple access to a cloud environment, making software testing a manageable and scalable procedure. LambdaTest gives you access to an online browser farm and device farm for your mobile and web testing needs. Below are the steps to perform live-interactive System testing on the LambdaTest platform:

1. Sign in to your LambdaTest account. Register for free if you don't already have an account.
2. Navigate to Real-Time Testing > Browser Testing from the left sidebar.
3. Enter the URL, and select the browser VERSION, OS, and RESOLUTION. Then, click START.

A cloud-based virtual machine running a real operating system will launch, where you can perform live-interactive System testing of your web applications.

Limitations of System Testing

Despite its many benefits, System testing has limitations too. Following are some of them:

- It requires debugging tools and highly competent testers.
- Results are more reliable, but the process can become costly depending on the available resources.
- Even if every path in the source code is examined, there is still a chance that certain bugs go undetected.
- Compared to other types of software testing, testing the entire system takes more time.

System Testing vs. Acceptance Testing

In this section of the System testing tutorial, let's look at how System testing differs from acceptance testing:

- System testing verifies whether the software application meets the specified requirements; acceptance testing verifies whether it meets the end users' requirements.
- System testing is used by developers and testers; acceptance testing is used by testers, users, and stakeholders.
- System testing involves both functional and non-functional testing; acceptance testing involves functional testing only.
- System testing takes place before acceptance testing; acceptance testing takes place after System testing.
- System testing checks dummy inputs; acceptance testing checks random, real-world inputs.
- System testing encompasses both system and integration testing; acceptance testing encompasses both alpha and beta testing.
- Defects found during System testing can still be fixed; an acceptance test that finds defects is considered a failure.
- System testing involves both positive and negative test cases; acceptance testing involves positive test cases only.

Best Practices of System Testing

This section of the System testing tutorial lists some best practices for running a system test:

- Simulate real-world circumstances instead of ideal conditions, because end users, not skilled testers, will be using the system.
- Verify the system's responses repeatedly and in various ways; users do not like waiting or seeing inaccurate information.
- Install and configure the system by following the documentation, just as the end user will.
- Include individuals from many fields, such as business analysts, developers, testers, and customers, to deliver a better system.
- Test regularly: routine testing is the only way to ensure that even the smallest code change made to fix one bug hasn't introduced another critical one.
- Execute each test case under the same conditions to minimize outside interference and deliver reliable assessments to the organization.
- Simulate a prospective customer's environment in your user-based tests to improve them.
To guarantee that all testers are aware of the requirements, it can be essential to produce a comprehensive document. Analyzing such documents helps you determine the appropriate measures and stay competitive. Stay on top of any errors you discover, highlight them, and point the developer to where the code needs to change. Consider implementing a management structure that meets your demands and expectations so you can produce an efficient report during the process.

Summing Up

Software testing provides a standardized procedure. Its tasks or steps include identifying the test environment, establishing test scenarios, writing scripts, assessing test results, and submitting defect reports. A comprehensive testing strategy covers the application programming interface (API), user interface, and system levels. Moreover, the earlier and more complete the automated tests, the better. Some teams even collaborate to develop customized internal test automation frameworks. System testing is crucial to quality control, and it carries its importance throughout the software development cycle. It is essential; serious problems could arise in real-world environments if it is done incorrectly. If the procedure appears too daunting and you are unsure where to begin, you can ask experts to perform System testing services for your organization. However you approach it, you must use System testing to test every aspect of the website.
Deployment is the day the software is finally released to the world. Yet, as Stackify CEO Matt Watson has said, organizations lack confidence in deployment. One of the greatest strengths of agile is the ability to deploy rapidly, but moving too fast without following the right processes can lead to problems like downtime, errors, and poor user experience. Software deployment includes various activities, such as installing, configuring, testing, and monitoring the performance of newly deployed environments, and there are several established practices for deploying software, like A/B testing, shadow deployment, grey-box testing, black-box testing, and white-box testing. Here are the best practices that can help you deploy software effectively.

Separate Clusters for Non-Production and Production

A single enormous cluster often creates problems for security and resource consumption. It is therefore crucial to have at least two clusters: one for production and another for non-production. Here is how you can separate them:

- When using Kubernetes, use a separate cluster for each environment.
- Keep each environment's workloads in their own namespaces.
- Give production cluster access to as few people as possible.
- Iterate faster in non-production to prevent production failures.

Carefully Collect Deployment Metrics

Kubernetes clusters run distributed services that support dynamic software, so it is crucial to have appropriate metrics that allow applications to adapt to traffic. Metrics also help measure the success of a deployment and enable continuous monitoring of an application's performance. Follow these ways to collect Kubernetes cluster metrics with ease:

- Use kubectl get to run commands against the Kubernetes cluster and query the metrics API.
- Retrieve compact metric snapshots.
- Query resource allocations.
- Employ the Kubernetes dashboard to browse cluster objects.
Implement a Continuous Integration Server

A continuous integration server is software that centralizes all of your integration processes and provides a dependable build environment. It is highly adaptable and allows you to create different projects for various platforms. The most critical consideration when utilizing a CI server is having a clean machine ready for installation. An environment free of excessive tools, environment variables, and other customizations is essential for running the CI server and the overall process properly. Here are some practices for running continuous integration smoothly:

- Commit code frequently.
- Fix broken builds as soon as possible.
- Write unit tests.
- Ensure all tests pass.
- Avoid breaking code.

Use a Deployment Checklist

Every task consisting of multiple steps seems complex to accomplish unless you have a process, and deploying new software is no different. Preparing an app deployment checklist ensures that all the critical tasks are completed with precision. You must also be aware of the application's KPIs. Based on these two parameters, you can make customized software deployment checklists catered to your team's needs.

Consider Applying Resource Limits

If you deploy your application to Kubernetes, there is no resource limit by default. Without a limit, your application can consume the entire cluster's resources and disrupt the production cluster's performance. That's why it is crucial to set resource limits. When setting limits, consider potential traffic and load bursts. Although Kubernetes is known to provide resource elasticity, maintaining a balance is important: setting the limit too low can lead to application crashes, while setting it too high can make the cluster inefficient.

Automate Your Deployment Process

Manually deploying can work, but it is not the right way to do it.
When a complex process like deployment is done by hand, it leaves too much room for human error. Automating deployment processes reduces errors and speeds up deployments, making the process convenient for your team. Automation can be as simple as using scripts that perform specific actions in a specific environment, and many advanced CI/CD tools on the market support automated deployment.
2023 is gearing up to be the year platform engineering goes mainstream. As budgets tighten, businesses want more insight into what engineering is doing, and platform teams look to simplify the developer experience — helping to do more with less. The discipline of platform engineering aims to remove bottlenecks and help accelerate the speed at which application teams deliver value to end users. To abstract out the complexity of engineering in the cloud-native world. To take care of what Syntasso’s Abigail Bangser calls “non-differential but not unimportant work” — security, Kubernetes, cloud deployments, observability, compliance, and more. For Port CEO Zohar Einy, platform engineering covers all the Ops that distract from the Dev — DevOps, SecOps, FinOps, RevOps… you get the idea. All those portmanteaus admirably aim to tear down silos among departments but also risk increasing the cognitive load of already burnt-out developers. The term “full stack developer” is an impossibility, but with seven-layer technical stacks, it’s hard to even remain a full stack team. And a lot of these requirements are repetitive and redundant across organizations, regressing these creative workers back to code monkeys. Platform engineering — and the internal developer portals or IDPs it demands — offers the opportunity to finally break down that important silo between business and DevOps, enabling FinOps and GreenOps at a time of tighter finance and carbon budgets. All while enabling developers to focus on work that matters to the business and to your users. Platform Engineering Connects BizDevOps A few years ago, when Einy and his co-founder Yonatan Boguslavski were tasked with simplifying the DevOps experience for an Israel Defense Forces’ intelligence team of thousands of developers, “We realized how big the problem is.” Complexity had grown so much that application teams could no longer be autonomous, lacking the knowledge of or access to the infrastructure they needed. 
“The entire ecosystem of Ops became very complicated,” Einy reflected. “You can’t have one repo, one machine. You have a lot of things going on: Infrastructure as a Service, cloud resources, microservices, Kubernetes, ArgoCD — and the list goes on, and it never stops. Innovation brought into DevOps work created a lot of complexity, with different architectures and moving parts.” There were 10 to 20 necessary steps if a developer wanted to create a new microservice: create a repository, configure the CI/CD pipeline, run infrastructure as code, and stay within guardrails (if they exist). “And to do that, you need to be an expert in the entire ecosystem of DevOps,” Einy said, and understand who owns the microservice and whether your Kubernetes cluster is on a version that complies with internal policy. Again, non-differential — yet organizationally important — work that distracts from delivering end-value to users. It is also repetitive work that takes away the creative side of engineering. “With DevOps, we keep innovating, but we can’t expect developers to keep up the pace,” he said. They eventually realized that an in-house internal developer portal (IDP) was the right solution: a way to provide engineers with everything they need to go “zero to hero” tucked behind a single interface, so they can reclaim application team-level autonomy. If platform engineering is the what, then the internal developer portal becomes the how.

An Internal Developer Portal Prints FinOps Receipts

Now that platform engineering is a more widely understood and adopted discipline, its use cases are going well beyond delivering on the promise of DevOps. That’s because an internal developer portal or platform provides transparency to the business side while helping development teams understand how they are delivering business value.
One platform engineering use case that’s gaining traction is FinOps, which Passion.io’s Sara Miteva pegs as a way to optimize cloud spending “to empower teams to balance between speed, expense, and quality.” The FinOps Foundation calls it a cloud financial management discipline and cultural practice that “enables organizations to get maximum business value by helping engineering, finance, technology, and business teams to collaborate on data-driven spending decisions.” Indeed, cloud computing costs continue to rise while organizations look to cut all spending, a tension unfortunately emphasized by ongoing tech layoffs. Last month, David Heinemeier Hansson, aka DHH, the co-founder of Basecamp and the Hey productivity tooling, recommended looking at your other expenses, including cloud cost, before you cut staff or payroll. In fact, his team left the cloud completely last year and is already showing massive savings. It’s unlikely that many companies will follow suit and leave the cloud entirely, but optimizing your cloud usage can significantly cut waste, along with your carbon footprint. “Many people are dependent on the software that is being developed in a company. It’s become necessary to better understand your expenses and connect them to the business,” Einy said, offering another definition of FinOps. The challenge is that cloud resource provider reports tend to be rather meaningless to developers, teams, and business owners, as they reflect collective cloud spend without drilling down to the team or microservice level. FinOps looks to understand cloud expenses and how they affect the business: How much does this customer cost? How many resources is this microservice using? How many dollars does each development team expense? How could resources be better shared? The data needed to answer these questions exists across cloud reporting tools, but it needs to be mapped in a way all stakeholders can understand.
An internal developer portal that integrates with tooling like Kubecost can offer a common visual language between business and DevOps teams. It maps cloud spend data to developers, teams, microservices, systems, and domains, giving insight at the team level into who is spending what, within different environments. This transparency nurtures more cost-conscious engineering teams, which in turn enables right-sizing cloud spend. The internal developer portal becomes a way to represent the logical aspects of your environment, tagging and tracking different elements. This horizontal view empowers engineers and product managers with better decision-making at the business level, not just the operational level. Making decisions based on the business value gained per resource, teams can optimize what to leave in the cloud, what to remove from it, and where to share resources. "We see DevOps teams as a major beneficiary of IDPs, and FinOps specifically, since, instead of organizing FinOps reports or trying to manage all of their DevOps asset information, they can use the software catalog in the internal developer portal," Einy said, pointing to a common theme: platform engineering and IDPs are not replacing DevOps but finally delivering on its promise. Yes, FinOps is another move to shift things left, which could just put more work on already overburdened application teams. But the idea of IDP-enabled FinOps is to present the information and options simply and transparently, without teams having to waste time searching for them.

FinOps Offers Back Door to GreenOps

Agile has taught us that you cannot improve what you cannot measure. Therein lies the challenge with most initiatives to decrease carbon footprint: carbon measurement across the tech industry is still nascent.
Cloud cost remains the most reliable way to measure the carbon output of engineering teams, especially in less green cloud regions like the Eastern U.S. Within a no-code internal developer portal like Port, you can represent your regions and cloud providers and tie them to data and expenses. After all, your cloud region is a known quantity. If an organization tracks its cloud usage well, you can see within the IDP which resources are underused or not used at all. You could then flag that a resource isn’t being used and ping the developer connected with it, asking whether that resource can be auto-terminated. It also allows departments to identify resources that could be shared among multiple teams. If your cloud usage is tracked at both the team and resource levels, product managers can query the data, create reports on cloud cost per team, service, environment, system, and domain, and start to automate resource allotment based on how much is typically used monthly. This skips the usually manual step of developers asking permission to procure cloud resources; if a team needs to go over its allotment, a manual approval process is still warranted. After all, some bottlenecks, especially in these tight times, are necessary. The end effect of GreenOps is, of course, not just cutting carbon footprint but cutting cloud cost. When platform engineering drives a common language between business and tech, everybody wins.
Smart contract development, more so than most web2 development, requires thorough testing and careful deployment. Because smart contracts are immutable and often involve large sums of money, it’s very important to do all you can to be sure they are secure, reliable, and as free from bugs as possible. Two important tools for achieving these goals are the same tools most web3 devs use for their day-to-day work—Truffle and Infura. These tools give you what you need to write, test, debug, and deploy your smart contracts. In this article, we’ll walk you through a step-by-step guide on how Truffle and Infura can be utilized to debug and deploy smart contracts on Ethereum. We’ll begin by establishing a development environment and creating a basic smart contract using Solidity. Next, we’ll conduct debugging of the contract, followed by deployment using Infura, and ultimately debugging on the blockchain. Finally, we will provide some recommendations for writing resilient and secure smart contracts. Let's get started! Setting up the Development Environment Our first step is to establish a basic smart contract development setup using Truffle and Infura. However, it is important to note that this is not a comprehensive guide for setting up the environment, as there are already numerous tutorials available for this purpose. If you need a little background on smart contracts, blockchain, Ethereum, etc., then check out my previous article: Learn To Become a Web3 Developer by Exploring the Web3 Stack. Prerequisites Node.js: Make sure you have Node.js installed on your system. You can download and install the latest version from the official website. Truffle: Truffle is a suite of development tools for smart contract development. It gives you the tools you need to manage your workflow, test, deploy, run local blockchains, and more. You can install it globally using npm, the Node.js package manager. 
Run the following command in your terminal:

```shell
npm install -g truffle
```

Infura: Infura is a set of blockchain APIs that provides a simple and reliable way to connect to the Ethereum network (and others) without running a full node. It's the industry-standard way to access blockchains. You'll need to create an account on the Infura website and obtain an API key to use their services.

Configuring Truffle

Once you've installed Node.js and Truffle, you're ready to create a new Truffle project. Simply run the following command in your terminal to create a new project:

```shell
truffle init
```

This command creates a basic Truffle project with everything you need to get started. The contracts/ folder is where we'll write our smart contract, while the migrations/ folder is where we'll write migration scripts to deploy the contract to the blockchain.

Next, we need to configure Truffle to use our local blockchain simulator. Open the truffle-config.js file and modify the development network to point to Ganache. Ganache lets you fire up a personal Ethereum instance for local development and testing. Here's an example configuration:

```javascript
module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",
      port: 8545,
      network_id: "*"
    }
  }
};
```

Debugging Smart Contracts Locally

Now that we have set up the development environment, let's start by writing a simple Ethereum smart contract using Solidity. Smart contract development is a big topic! For an intro, check out this 10-minute orientation. For the purpose of this tutorial, we'll keep it super simple and create a contract that allows users to store and retrieve a string value.

```solidity
// SimpleStorage.sol
pragma solidity ^0.8.0;

contract SimpleStorage {
    string private value;

    function setValue(string memory _value) public {
        value = _value;
    }

    function getValue() public view returns (string memory) {
        return value;
    }
}
```

Save the source code above into a file named SimpleStorage.sol.
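The migrations/ folder created by truffle init holds deployment scripts. As a minimal sketch following Truffle's standard migration pattern (the numbered file name is a common convention, not a requirement), a migration for this contract could look like the following; note that it runs inside Truffle, which injects artifacts and deployer:

```javascript
// migrations/2_deploy_simple_storage.js
const SimpleStorage = artifacts.require("SimpleStorage");

module.exports = function (deployer) {
  // Registers SimpleStorage for deployment when `truffle migrate` runs.
  deployer.deploy(SimpleStorage);
};
```

With this file in place, the same migration works unchanged against the local Ganache network and, later, against a remote network such as Sepolia.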
The Truffle Debugger offers two ways to work with this code in your local environment: in-test debugging and read-only debugging calls.

In-test debugging works within tests and is quite simple. You just wrap the line of interest in a debug statement like this:

```javascript
it("should get latest result", async function () {
  const result = await debug(SimpleStorage.getValue());
});
```

When you run your tests, add the --debug flag and see the magic happen. This pauses the tests at the designated debug line and launches the debugger's CLI interface, which allows you to step through code, inspect variables, and set breakpoints.

Another way to access the Truffle Debugger is through read-only debugging calls. Many developers prefer this method because it uses a transaction hash to access the debugger. With read-only debugging, you can debug a transaction that has already been executed on the blockchain, making it a more practical method for debugging:

```shell
truffle debug <transaction hash>
```

This opens the Truffle Debugger in the terminal and allows you to step through the transaction. For an exhaustive list of options available in debug mode, check out the documentation.

Deploying Smart Contracts With Infura

Once you've written and tested your smart contract locally, it's time to deploy it. While it's possible to deploy a contract using your own node, this can be time-consuming and resource-intensive. An alternative approach is to use a remote node provider like Infura, which gives you a simple and reliable way to deploy smart contracts to various layer-1 blockchain networks.

To use Infura for contract deployment, you'll need to sign up for an account and create a new project (which you probably did above). Once you've done that, you'll be able to access your project's API endpoint, which you'll use to interact with the blockchain network. This tutorial offers a step-by-step guide if you need help with this.
To deploy our smart contract to the Ethereum network, we'll need to connect to Infura using our API key. Open the truffle-config.js file again and add the following configuration for the Sepolia network (replace <PROJECT_ID> with your Infura project ID):

```javascript
const HDWalletProvider = require("@truffle/hdwallet-provider");

const infuraProjectId = "<PROJECT_ID>";
const mnemonic = "your mnemonic goes here";

module.exports = {
  networks: {
    development: {
      // ...
    },
    sepolia: {
      provider: () =>
        new HDWalletProvider(mnemonic, `https://sepolia.infura.io/v3/${infuraProjectId}`),
      network_id: 11155111,
      gas: 4000000
    }
  }
};
```

Note that we're using the @truffle/hdwallet-provider package to connect to Infura with our mnemonic. You can replace Sepolia with the name of the Ethereum network you want to deploy to. To deploy, run:

```shell
truffle migrate --network sepolia
```

We're using a test Ethereum network here because there is a cost (gas fee) associated with deploying smart contracts. To pay this fee, you need ETH. On the Ethereum mainnet, you have to buy ETH, and the expenses can add up quickly. But on the test networks, there are faucets where you can get test ETH for free.

Debugging Online Smart Contracts

While debugging smart contracts locally will be the most common task, there may be occasions when you need to debug a contract that has already been deployed. Fortunately, the Truffle Debugger can accommodate this scenario as well. The added benefit is that it doesn't necessarily have to be your own contract, as long as it has been verified on the blockchain.

To debug a smart contract on a deployed chain, you can use the Truffle Debugger and Infura just as you did for local debugging. The only difference is that you'll need to specify the network to connect to. Here's an example of how to do it:

```shell
truffle debug <transaction hash> --network sepolia
```

In this example, we're connecting to the Sepolia testnet using the --network flag and specifying the transaction hash of the contract we want to debug.
Once connected, you can use the same debugging techniques as for local debugging, such as stepping through code, inspecting variables, and setting breakpoints. Just keep in mind that network conditions may differ from those on your local machine, so be sure to thoroughly test your smart contract before deploying to an online chain. If you are still not feeling confident, you can take a look at this video tutorial.

Best Practices for Testing and Deployment

As you continue to develop and deploy smart contracts, here are some basic best practices to ensure your code is reliable and secure. Most of these are best practices for any type of coding, so be sure to keep them in mind:

- Test your smart contracts extensively before deploying them: Before deploying, test thoroughly to catch any bugs or errors. Use the Truffle Debugger to debug your code locally (and on a deployed chain if necessary) to make sure it behaves as intended. Smart contracts on Ethereum are immutable, so be careful with production deployments.
- Use meaningful variable names: Give your variables meaningful names so that your code is easier to debug. Avoid using abbreviations and acronyms unless they are widely understood.
- Comment your code: As with all code, add comments to explain what it does and why it's necessary. This will help you and other developers understand the code and make changes to it later.
- Keep your contracts simple: Complex smart contracts can be difficult to test and debug. Whenever possible, break your contracts into smaller, more manageable pieces.
- Consider gas costs: Optimize your code to keep gas costs as low as possible; gas prices can drive up your expenses quickly.
- Keep your dependencies up to date: Keep your dependencies current to avoid security vulnerabilities, and use tools like Truffle's truffle-verify plugin to verify your smart contracts and ensure they are secure.
To dive in further, check out these best practices more specific to smart contracts. Conclusion We explored the benefits of using the Truffle Debugger and Infura in tandem to enhance the security and reliability of smart contracts. We have also seen how to deploy smart contracts on an EVM-based network using Infura and then how to debug them on a deployed chain. This is just the beginning! Once you are familiar with tools such as Truffle and Infura, you’re ready to start exploring smart contracts and blockchain development. Have fun!
With cy.intercept(), you can intercept HTTP requests and responses in your tests and perform actions like modifying the response, delaying it, or returning a custom response. When a request is intercepted by cy.intercept(), it is prevented from being sent to the server; instead, Cypress responds with the mock data you provide. This allows you to test different scenarios and responses from a server without actually having to make requests to it.

Before request interception was available, one of the main challenges was that network-related issues were difficult to debug and diagnose. Developers needed more visibility into what was happening with network traffic between a client and a server. Intercepting network requests provides insight into the network traffic generated by the application; without this capability, troubleshooting becomes more complex and time-consuming, and the team may not have the information needed to identify the cause of a problem, resulting in delays in the testing process. Moreover, QA teams had little access to the requests and responses transmitted between the client and server because they could not intercept and examine network data, which made it challenging to understand the application's behavior.

Many tools can be used for intercepting network requests, and Cypress is one of the most popular automation testing frameworks that supports it. Cypress intercept — cy.intercept() is a method provided by Cypress that allows you to intercept and modify network requests made by your application. It enables you to simulate different server responses or network conditions to test how your application handles them, which can be very useful when writing end-to-end tests.
To use the cy.intercept() method, you can call it within a Cypress test like this:

```javascript
cy.intercept(<url>, <options>)
```

The url parameter specifies the URL of the network request you want to intercept, and options is an object for additional settings, such as the response to return, the status code to use, and so on. Before diving into Cypress intercept for handling network requests, let's first understand the nuances of network testing in Cypress.

What Are Network Requests?

Network requests refer to the exchange of data between a client and a server over a network. When testing web applications, it can be important to verify metadata such as headers, cookies, authentication tokens, and other information sent with the request. In testing, network request metadata can be used to verify that the correct headers and cookies are sent with the request, that data is sent in the correct format, and that authentication tokens are set correctly. Tools such as Cypress provide APIs for intercepting and inspecting network request metadata, making it easy to test the behavior of network requests and ensure they are handled correctly.

Network requests are made using the HTTP (Hypertext Transfer Protocol) or HTTPS (Hypertext Transfer Protocol Secure) protocols. HTTP is the traditional protocol for sending and receiving data over the Internet: a simple, text-based protocol for exchanging information between clients and servers. HTTPS is a secure version of HTTP that uses SSL/TLS encryption to protect the data transmitted between the client and the server, safeguarding sensitive data such as passwords, credit card information, and other personal information. When you make an HTTPS request, your browser establishes an encrypted connection with the server.
All data exchanged between the two parties is encrypted so that third parties cannot intercept and read it. This exchange is crucial to how the Internet works, allowing data transfer between devices. Here's an example:

1. A user opens a web browser and types in the URL for a website, such as "www.example.com."
2. The browser sends a network request to the server hosting the website, asking for the HTML, CSS, and JavaScript files that make up the website.
3. The server receives the request and sends back the requested files.
4. The browser receives the files and uses them to render the website on the user's screen.
5. The user interacts with the website by clicking a button or filling out a form.
6. The browser sends another network request to the server with additional data, such as the information the user entered in the form.
7. The server processes the request and returns the appropriate response, such as confirming the form submission or sending back data for the next page of the website.

In the below diagram, you can see there is a network request between a client (the web browser) and a server (the website host). The client requests resources, and the server sends back the requested information. The process repeats as the user interacts with the website.

What Is Intercepting Network Requests?

Intercepting network requests means capturing and inspecting the traffic between a client and a server as they communicate over a network. In web development, this usually involves intercepting HTTP requests and responses. Intercepting network requests can be helpful in various situations:

- Debugging and troubleshooting network issues
- Inspecting the request and response payloads
- Modifying requests in real time

By intercepting and modifying network requests, it is possible to inspect and manipulate various aspects of the communication, such as headers, parameters, cookies, and response data. When intercepting network requests, there is usually a proxy between the client and the server.
The proxy intercepts all network traffic between the two endpoints and allows the user or tool to inspect and modify the traffic before forwarding it to its intended destination. In the diagram above, the client sends a request to the server, but the request is intercepted by a proxy before it reaches the server. The proxy can analyze and modify the request before forwarding it to the server, and it can also analyze and modify the response before sending it back to the client.

One commonly used tool for analyzing network traffic is Wireshark, a free and open-source packet analyzer that lets you capture and analyze network traffic in real time. Another is tcpdump, which can capture and filter network traffic for various protocols, including TCP, UDP, ICMP, and more.

Why Intercept Network Requests?

Intercepting network requests can provide many benefits, depending on the context and the reason for the interception. Here are a few examples:

- Debugging: By intercepting network requests, developers can easily see what data is being sent and received during different application stages. This can help to quickly identify and resolve issues related to the application's network communication.
- Testing: Intercepting network requests is beneficial for automated testing of web applications. Test scripts can simulate user actions and inspect the resulting network traffic, allowing developers to automate the testing of complex workflows and scenarios.
- Data modification: Interception can modify network requests, allowing developers to manipulate the data sent over the network.
- Accelerating test execution: By intercepting network requests and returning mock responses, you can reduce the number of requests your application makes to the server.
- Security: Intercepting network requests can also serve as a security measure.
For example, organizations can monitor network traffic to detect malicious activity, such as a cyber attack.
- Modifying network requests: By intercepting network requests, developers can modify the requests made by an application to test different scenarios or to implement new features. By modifying requests and analyzing responses, you can quickly identify and fix issues in your code.

Below are some other benefits of intercepting network requests that are helpful for the network team:

- Traffic analysis: By capturing and analyzing network requests, network administrators can gain insights into network usage patterns, identify potential performance bottlenecks, and optimize the network for better performance.
- Content filtering: By intercepting network requests, organizations can filter content based on various criteria, such as security policies, bandwidth limitations, or content type.
- Monitoring: Interception can also be used for monitoring purposes, for example, to gather statistics about the usage of an application or to gain insights into user behavior.

QA Tools for Intercepting Network Requests

Here are some QA automation testing tools that can be used for intercepting network requests:

- Cypress: Cypress can intercept network requests and manipulate their responses using the cy.intercept() method, which allows you to intercept and modify network requests and responses.
- Playwright: With Playwright, you can intercept network requests using the page.route() method, which lets you intercept requests and provide a custom response.
- Postman: Postman is a popular API development and testing tool that allows you to inspect and modify network requests. It has a user-friendly interface and a wide range of features, making it a great option for QA automation testing.
- Charles: Charles is a web debugging proxy tool that can inspect, debug, and modify network requests.
It is widely used by developers to troubleshoot issues with web applications and APIs.
- Fiddler: Fiddler is a web debugging proxy tool that allows you to inspect and modify network requests, and it is widely used for QA automation testing.
- JMeter: JMeter is a popular open-source load testing tool with features for intercepting and modifying network requests. It can be used to test the performance of web applications and APIs.
- Burp Suite: Burp Suite is a powerful and widely used tool for testing web applications. It allows you to intercept and manipulate HTTP requests and responses and provides detailed information about them.
- SoapUI: SoapUI is an open-source tool for testing web services that includes features for intercepting network requests. It is widely used for testing SOAP and REST APIs.
- Selenium: Selenium is a popular open-source framework for automating web application testing, although Selenium itself does not have built-in capabilities for intercepting network requests.

Intercepting Network Requests With Cypress

Here are some common use cases for using Cypress intercept to handle network requests.

Mocking APIs

Cypress allows developers to create mock APIs to simulate different server responses. This is useful when testing application behavior under different conditions, such as when the server is down or the response is delayed.

Testing HTTP Requests and Responses

By intercepting network requests, Cypress allows developers to test how the application handles HTTP requests and responses, including error response handling, response time, and response codes.

Testing Authorization and Authentication

Using Cypress, developers can test how their application handles authentication and authorization by intercepting network requests and passing authorization tokens.
Here are some circumstances under which authorization and authentication are used in intercepting network requests:

- Access control: Authorization controls access to resources such as files, web pages, APIs, and databases. By requiring authorization, only authorized users or systems are granted access to the resource.
- Secure data transmission: Authentication ensures data is securely transmitted between systems. By authenticating the sender and receiver, data can be encrypted and decrypted only by the intended parties.

Testing for Performance-Related Issues

Cypress can be used to measure the performance of web applications by intercepting network requests and measuring the response time of each request. This can help developers identify performance bottlenecks in their applications and optimize performance.

Using Cypress Intercepts for Handling Network Requests

Cypress is a JavaScript-based end-to-end testing framework that makes writing, running, and debugging tests for web applications easy. It has built-in support for intercepting and stubbing network requests, allowing you to control the data returned from the server and make assertions about the network requests your application makes during end-to-end testing. You can use the cy.intercept() command to intercept network requests in Cypress. The command takes a URL pattern (and optionally a callback function) as arguments and will intercept all requests that match the pattern. In the callback function, you can modify the request, return a response, or let the request continue to the network. Below are the signatures you can use in Cypress to spy on and stub network requests and responses.
```javascript
cy.intercept(url)
cy.intercept(method, url)
cy.intercept(routeMatcher)
```

Here's a simple example of how you could use cy.intercept() to return a fake response for a certain request:

```javascript
cy.intercept('GET', '/api/data', {
  statusCode: 200,
  body: { data: 'Test Data' },
}).as('getData');

// … Perform your test logic …

cy.wait('@getData')
  .its('response.body')
  .should('deep.equal', { data: 'Test Data' });
```

In this example, cy.intercept() intercepts all GET requests to the /api/data endpoint and stubs them with a response that has a status code of 200 and a JSON body of { data: 'Test Data' }. The cy.wait() command then waits for the request to be intercepted and completed, and the response body is asserted to match the expected value. Before explaining all the methods in detail, let's first see how the cy.intercept() method works.

How Does the cy.intercept() Method Work?

cy.intercept() is a method used to intercept and modify HTTP requests and responses made by the application during testing. This allows you to simulate different network scenarios and test the behavior of your application under different conditions. In the diagram below, cy.intercept() intercepts the requests and responses made by the Application Under Test (AUT). It can intercept requests to specific URLs or requests made with specific methods (e.g., GET, POST, etc.). The steps of the diagram are explained below:

- The client (browser) initiates a request to the server.
- The server receives the request and sends back a response.
- The client receives the response and handles it.
- Cypress test code intercepts the request before it reaches the server.
- The test code can modify the request or the response in any way it wants, such as adding headers, delaying the response, or changing the status code.
- The test code returns a stubbed response that replaces the actual response from the server.
- The client receives the stubbed response and handles it as if it came from the server.

Once you get the response, you can verify the stubbed response with assertions, as the examples below show.

Different Ways of Intercepting Network Requests in Cypress

There are various ways to use Cypress intercept for handling network requests:

1. Matching URL

The first way of intercepting a request is by matching the URL. There are three ways of matching the URL.

Interception by matching the exact URL:

```javascript
it('Intercept by Url', () => {
  cy.visit('https://reqres.in/');
  cy.intercept('https://reqres.in/api/users/').as('posts');
  cy.get('[data-id=users]').click();
  cy.wait('@posts').its('response.body.data').should('have.length', 6);
});
```

In this example, we intercept the complete reqres.in URL. The intercepted request is assigned an alias, 'posts', using the .as() method. The test then waits for the 'posts' request to complete and verifies the length of the response body.

Interception of multiple URLs using pattern matching:

```javascript
it('Intercept by use pattern-matching to match URLs', () => {
  cy.visit('https://reqres.in/');
  cy.intercept('/api/users/').as('posts');
  cy.get('[data-id=users]').click();
  cy.wait('@posts').its('response.body.data').should('have.length', 6);
});
```

In this example, we intercept any request whose URL matches the /api/users/ pattern. The intercepted request is assigned an alias, 'posts', using the .as() method. The test then waits for the 'posts' request to complete and verifies the length of the response body.

Interception of the URL using a regex pattern:
```javascript
it('Intercept by regular expression', () => {
  cy.visit('https://reqres.in/');
  cy.intercept(/\/api\/users\?page=2/).as('posts');
  cy.get('[data-id=users]').click();
  cy.wait('@posts').its('response.body.data').should('have.length', 6);
});
```

In this example, cy.intercept() intercepts any URL that matches the regular expression /\/api\/users\?page=2/. The .as() method gives a name to the intercepted request, which can later be used with cy.wait() to wait for the response. In this case, cy.wait('@posts') waits for the intercepted request to complete before proceeding with the test.

2. Matching Method

Another way of intercepting a request is by matching the HTTP method. By default, if you don't pass a method argument, all HTTP methods (GET, POST, PUT, PATCH, DELETE, etc.) will match. By passing a method to cy.intercept(), you intercept only requests that use that method in a network call. Suppose you provide an intercept command like cy.intercept('/api/users/'); in that case, requests with any method (GET, POST, PUT, PATCH, DELETE, etc.) are matched. But if you pass the method name, as in cy.intercept('GET', '/users?page=2'), only GET requests are intercepted.

```javascript
it('Intercept by matching GET method', () => {
  cy.visit('https://reqres.in/');
  cy.intercept('GET', 'api/users?page=2').as('posts');
  cy.get('[data-id=users]').click();
  cy.wait('@posts').its('response.body.data').should('have.length', 6);
});
```

Another example is a POST request where we manipulate the response by providing data in the body. In the example below, we provide the data in the body and thus mock the response with that data.
```javascript
it('Intercept by matching POST method', () => {
  cy.visit('https://reqres.in/');
  cy.intercept('POST', 'api/users', (req) => {
    req.reply({
      statusCode: 200,
      body: {
        name: 'John',
        job: 'QA Manager',
      },
    });
  }).as('updateuser');
  cy.get('[data-id=post]').click();
  cy.wait('@updateuser');
});
```

Below is the output of the above test case. You can see we have mocked the data by intercepting the POST call.

3. Matching With RouteMatcher

RouteMatcher is part of the Cypress API that allows you to match specific network requests based on their URL, method, headers, and other attributes. Using a RouteMatcher, you can match requests based on their URL patterns, which provides a flexible way to intercept API requests and test your application's behavior under different conditions.

```javascript
it('Intercept by RouteMatcher', () => {
  cy.visit('https://reqres.in/');
  cy.intercept(
    {
      method: 'GET',
      url: 'https://reqres.in/api/users/**',
    },
    (req) => {
      req.reply({
        statusCode: 200,
        body: {
          data: [
            { id: 7, email: 'tim.Bluth@reqres.in', first_name: 'tim', last_name: 'Bluth', avatar: 'https://reqres.in/img/faces/1-image.jpg' },
            { id: 8, email: 'janet.weaver@reqres.in', first_name: 'Janet', last_name: 'Weaver', avatar: 'https://reqres.in/img/faces/2-image.jpg' },
          ],
        },
      });
    }
  ).as('postdata');
  cy.wait('@postdata').its('response.body.data').should('have.length', 2);
});
```

In this example, we use a RouteMatcher to match any GET request to the https://reqres.in/api/users/** endpoint. The ** notation matches any path after /api/users/. We then use req.reply() to return a custom response for matching requests. Finally, we load our application and verify that the response data has a length of 2.

Output: The output of the above test case is attached below.

4. Pattern Matching

With pattern matching, you can provide a matching pattern string. In the example below, any GET or PATCH request that matches the pattern **/users/** will be intercepted.
```javascript
it('Intercept by Pattern Matching using glob matching', () => {
  cy.visit('https://reqres.in/');
  cy.intercept(
    {
      method: '+(GET|PATCH)',
      url: '**/users/**',
    },
    (req) => {
      req.reply({
        statusCode: 200,
        body: {
          data: [
            { id: 7, email: 'kim.smith@reqres.in', first_name: 'Kim', last_name: 'Smith', avatar: 'https://reqres.in/img/faces/1-image.jpg' },
            { id: 8, email: 'janet.weaver@reqres.in', first_name: 'Janet', last_name: 'Weaver', avatar: 'https://reqres.in/img/faces/2-image.jpg' },
          ],
        },
      });
    }
  ).as('postdata');
  cy.wait('@postdata').its('response.body.data').should('have.length', 2);
});
```

Output: In the output, you can see the mocked data displayed under the response body.

5. Stubbing a Response

In Cypress, stubbing a response refers to intercepting a network request made by the application under test and returning a predefined response instead of the actual response from the server. There are two ways to stub a response for a network request.

With a string

Here's an example of how you can use cy.intercept() to stub a response for a network request by passing a string in the body:

```javascript
it('Stubbing a response With a string', () => {
  cy.visit('https://reqres.in/');
  cy.intercept('GET', '**/users/**', {
    statusCode: 200,
    body: 'Hello, world!',
  }).as('getUsers');
  cy.wait('@getUsers');
  cy.get('@getUsers').then((interception) => {
    expect(interception.response.body).to.equal('Hello, world!');
  });
});
```

Output:

With fixture files

Another way of stubbing a response is using a fixture file. You can mock the data from the fixture file instead of providing it in the body.
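A users.json fixture for this scenario might look like the following. These contents are hypothetical; the field names simply mirror the shape of the reqres.in page-2 payload, and the six placeholder records match the length assertion used in the test.

```json
{
  "page": 2,
  "per_page": 6,
  "total": 12,
  "total_pages": 2,
  "data": [
    { "id": 7, "email": "user7@example.com", "first_name": "User", "last_name": "Seven" },
    { "id": 8, "email": "user8@example.com", "first_name": "User", "last_name": "Eight" },
    { "id": 9, "email": "user9@example.com", "first_name": "User", "last_name": "Nine" },
    { "id": 10, "email": "user10@example.com", "first_name": "User", "last_name": "Ten" },
    { "id": 11, "email": "user11@example.com", "first_name": "User", "last_name": "Eleven" },
    { "id": 12, "email": "user12@example.com", "first_name": "User", "last_name": "Twelve" }
  ]
}
```

Place the file in the cypress/fixtures directory so Cypress can resolve it by name.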
```javascript
it('Stubbing a response With Fixture file', () => {
  cy.visit('https://reqres.in/');
  cy.intercept('GET', 'https://reqres.in/api/users?page=2', {
    fixture: 'users.json',
  }).as('getUsers');
  cy.wait('@getUsers');
  cy.get('.data').should('have.length', 6);
});
```

In this example, we intercept a GET request to the reqres.in endpoint and respond with a fixture file called users.json. We also use the .as() method to assign the intercepted request to an alias so we can wait for the response before performing further actions. Assuming you have a fixture file named users.json in your cypress/fixtures directory, this test verifies that the .data element on the page has a length of six, which matches the number of records in the users.json fixture file.

Output: In the output of the above code, you can see the data we mocked using the fixture file.

6. Changing Headers

You can also use cy.intercept() to modify header data.

```javascript
it('Intercept a request and modify headers', () => {
  cy.visit('https://reqres.in');
  cy.intercept('GET', 'https://reqres.in/api/users', (req) => {
    req.headers['Authorization'] = 'Bearer my-token';
  }).as('getUserList');
  cy.wait('@getUserList');
  cy.get('@getUserList').then((interception) => {
    const requestHeaders = interception.request.headers;
    expect(requestHeaders).to.have.property('Authorization', 'Bearer my-token');
  });
});
```

In this example, we intercept a GET request to the reqres.in users endpoint and modify the Authorization header by adding a token value. We give this interception an alias using the .as() command so that we can wait for it to complete using cy.wait(). After waiting for the interception to complete, we use cy.get('@getUserList') to get the interception object and assert that the Authorization header was modified correctly.

How to Override an Existing Cypress Intercept?
Overriding an existing Cypress intercept allows you to modify or cancel a network request intercept that has already been defined in your test code. This is useful when you want to change the behavior of an existing intercept, for example, to simulate a different response from a server or to modify the request data in a different way. The main difference between intercepting network requests and overriding an existing intercept is that intercepting lets you define new intercepts in your test code, while overriding lets you modify existing ones. Cypress provides a rich API for working with both intercepts and overrides to help you thoroughly test and debug your web applications. The example below shows how you can use cy.intercept() to override an existing intercept and modify the behavior of your application during testing.

```javascript
describe.only('Override an existing intercept example', () => {
  beforeEach(() => {
    cy.intercept('GET', 'https://reqres.in/api/users').as('getUsers');
  });

  it('overrides the response of the /api/users request', () => {
    cy.visit('https://reqres.in/');
    cy.intercept('GET', 'https://reqres.in/api/users', (req) => {
      req.reply((res) => {
        res.send({
          data: [{ id: 1, email: 'test@test.com' }],
          page: 1,
          per_page: 1,
          total: 1,
          total_pages: 1,
        });
      });
    }).as('getUsers');

    cy.wait('@getUsers').then((interception) => {
      expect(interception.response.body.data).to.have.length(1);
      expect(interception.response.body.data[0].email).to.eq('test@test.com');
    });
  });
});
```

In this example, we first define an intercept for the GET /api/users request and give it an alias of getUsers. Then, in the test itself, we override that request by defining a new intercept with the same getUsers alias. In the new intercept, we use req.reply() to override the original response and return a new response that includes a single user with an email of test@test.com.
Finally, we use the cy.wait() command to wait for the getUsers alias to complete, and then we test the response to ensure it contains the expected data.

Wrapping Up

With stubbing, requests to a network are intercepted and replaced with predefined responses rather than sent over the network to wait for a real response. However, stubbing can also lead to false positives, as the behavior of the stubbed requests may not accurately reflect the behavior of the actual requests. This can create a false sense of security in your tests and may miss bugs or errors that only occur in real network requests. On the other hand, not stubbing with cy.intercept() can provide a more accurate picture of how your application behaves in the real world. By allowing actual network requests to be made and handling them accordingly, you can be more confident that your tests accurately reflect the behavior of your application in the wild.
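The same tradeoff applies outside Cypress. One common compromise, sketched below in plain JavaScript, is to inject the transport so the same code runs against a stub in fast, deterministic unit tests and against the real network in integration tests. The getUsers() helper and stubFetch here are hypothetical, made up for this illustration.

```javascript
// Hypothetical helper: the fetch implementation is injected, so tests can
// swap in a stub while production code passes in the real fetch.
async function getUsers(fetchImpl) {
  const res = await fetchImpl('https://reqres.in/api/users?page=2');
  const body = await res.json();
  return body.data;
}

// Stubbed transport: no network traffic, instant and repeatable,
// but only as faithful as the canned payload.
const stubFetch = async () => ({
  json: async () => ({ data: [{ id: 7, first_name: 'Kim' }] }),
});

getUsers(stubFetch).then((users) => console.log(users.length)); // prints 1

// Integration variant (real network, slower but realistic):
// getUsers(fetch).then((users) => console.log(users.length));
```

Running the stubbed variant in every commit and the real-network variant in a nightly job is one way to get both speed and realism.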
Platform engineering is the discipline of building and maintaining a self-service platform for developers. The platform provides a set of cloud-native tools and services to help developers deliver applications quickly and efficiently. The goal of platform engineering is to improve the developer experience (DX) by standardizing and automating most of the tasks in the software delivery lifecycle (SDLC). Instead of context switching among chores like provisioning infrastructure, managing security, and climbing tooling learning curves, developers can focus on coding and delivering business logic using automated platforms.

Platform engineering has an inward-looking perspective, as it focuses on optimizing developers in the organization for better productivity. Organizations benefit greatly from developers working at the optimum level because it leads to faster release cycles. The platform makes this happen by providing everything developers need to get their code into production, so they do not have to wait on other IT teams for infrastructure and tooling. The self-service platform that makes developers' day-to-day activities more effortless and autonomous is called an internal developer platform (IDP).

What Is an Internal Developer Platform (IDP)?

An IDP is a platform comprising self-service cloud-native tools and technologies that developers can use to build, test, deploy, and monitor applications, and to do almost anything related to application development and delivery, with as little overhead as possible. Platform engineers or platform teams build it after consulting the developers and understanding their unique challenges and workflows.
After discussing and implementing Kubernetes CI/CD pipelines and GitOps solutions for many large hi-tech enterprises, we realized a typical IDP consists of the following five pillars:

- CI/CD platforms for automated deployments (Jenkins, Docker Hub, Argo CD, Devtron, Spinnaker)
- Container orchestration platforms for managing containers (Kubernetes, Nomad, Docker Swarm)
- Security management tools for authentication, authorization, and secret management (HashiCorp Vault, AWS Secrets Manager, Okta Identity Cloud)
- Infrastructure as code (IaC) tools for automated infrastructure provisioning (Terraform, Ansible, Chef, AWS CloudFormation)
- Observability stacks for visualizing workloads and applications across all clusters (Devtron Kubernetes dashboard, Prometheus, Grafana, ELK stack)

The platform team designs the IDP to be easy for developers to use, with a minimal learning curve. IDPs can reduce developers' cognitive load and improve DX by automating repetitive tasks, reducing maintenance overhead, and eliminating the need for endless scripting. An IDP enables development teams to independently manage resources, infrastructure needs, deployments, and rollbacks through a self-service platform. This increases developer autonomy and accountability, reduces dependencies, and streamlines the development cycle.

Why Is Platform Engineering Important?

Platform engineering can help organizations reap several internal (developer-facing) and external (end-user-facing) benefits:
- Improved developer experience (DX): The plethora of cloud-native tools increases developers' cognitive load, as it takes considerable time to decide which tool fits a specific use case and to master it. Platform engineering solves this and improves DX by providing a simplified, standardized set of tools and services suited to developers' workflows.
- Increased productivity: The IDP provides everything developers need to get their code tested and deployed in a self-service manner. This reduces delays across the stages of the SDLC, such as waiting for someone to provision infrastructure before a deployment. Platform engineering supports developer productivity by letting developers focus mainly on core development work.
- Standardization by design: IT teams in a typical software organization use a variety of tooling, varying from team to team, and maintaining and keeping track of it all becomes complex. Platform engineering solves this by standardizing tools and services, and it becomes easier to resolve bottlenecks because the platform is identical for every developer.
- Faster releases: The platform team ensures developers work on delivering business logic by providing toolchains that are easily consumable, reusable, and configurable. Developers become highly productive as a result, accelerating time-to-market for features and innovations reliably and securely.

Implementing a successful platform team in an organization and realizing the above benefits requires following some common principles. Treating the platform as a product is one of them.

Platform as a Product

One of the core principles of platform engineering is productizing the platform. The platform team needs to employ a product management mindset to design and maintain a platform that is not only user-friendly but also meets the expectations and needs of its customers: the application developers.
It starts with collecting data points about the problems developers have and identifying which areas to improve. This could be improving deployment frequency, reducing the change failure rate, improving reliability and security, improving DX, and so on. It is important to note that building a platform is about building a core product that solves the common challenges most teams share; it is not about solving the problems of a single team, but about providing the product across multiple teams to solve the same set of problems. For example, if multiple teams require the same piece of infrastructure, it makes sense for the platform team to build that shared piece and distribute it. This idea of reuse and repeatability is crucial, as it allows for standardization, consistency, and scalability in application delivery. As in product management, the platform team owns the product, chooses specific metrics, and continuously takes customer feedback to improve the user experience. The platform's product roadmap evolves with that feedback, accommodating the changing needs and desires of its customers.

Roles and Responsibilities of Platform Engineers

The primary role of a platform engineer is to design and maintain a self-service platform (IDP) and provide platform services for developers. It starts with engaging with the developers and understanding their pain points:

Listen to the customers: Interview developers and different IT teams to understand their engineering landscape and challenges and to learn what they are optimizing for. They may be trying to build an effective CI/CD pipeline or implement better access control, among many other software delivery challenges.

Prioritize: Identify common challenges most teams share and prioritize solving them over problems individual teams face. For example, if most teams find it hard to store and retrieve secrets securely, it is ideal to prioritize solving that for everyone.
Platform Designing Design the IDP with the tools required to solve those problems for users, along with documentation to enable developers to self-serve resources and infrastructure. In the above case, adopting a secret management tool would solve the challenges around securely managing secrets. Part of platform design also includes writing scripts to automate routine development tasks, such as spinning up new environments and provisioning infrastructure, to reduce errors and friction points in the development flow. Metrics Choose specific metrics around the goals to measure the platform's effectiveness. For example, if the goal is to improve DX, the metrics include engagement scores, team feedback, etc. Similarly, the metrics will change if the goal is to reduce the change failure rate or to increase deployment frequency. Gather Feedback and Maintain the Platform Continue listening to the customers and watch the metrics. Gather user feedback to add new tools to the platform and optimize for a better user experience. This also includes staying up-to-date with emerging tools and technologies in the DevOps and cloud infrastructure space and adopting them if necessary. It is easy to confuse the role of a DevOps engineer or SRE with that of a platform engineer, since they all manage the underlying infrastructure and support software development teams. Although there are certain overlapping responsibilities between these roles, each differs from the others in its unique focus. Platform Engineering vs. DevOps DevOps is a philosophy that brought a cultural shift to the SDLC to improve software delivery speed and quality. DevOps facilitated collaboration and communication between development and ops teams and accelerated automation to streamline deployments. Platform engineering — a practice rather than a philosophy — can be considered the next iteration of DevOps, as it shares some core principles of DevOps: collaboration (with Ops), continuous improvement, and automation. 
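Returning to the metrics step above, two of the goals mentioned (deployment frequency and change failure rate) can be computed directly from a deployment log. The log below is a hypothetical sketch of what a platform team might collect:

```python
from datetime import date

# Hypothetical deployment log for one service: (day, succeeded) records.
deployments = [
    (date(2024, 3, 1), True),
    (date(2024, 3, 2), True),
    (date(2024, 3, 2), False),  # failed change, rolled back
    (date(2024, 3, 5), True),
    (date(2024, 3, 8), True),
]

# Deployment frequency: deploys per day over the observed window.
days_observed = (deployments[-1][0] - deployments[0][0]).days + 1
deployment_frequency = len(deployments) / days_observed

# Change failure rate: fraction of deploys that failed.
failures = sum(1 for _, ok in deployments if not ok)
change_failure_rate = failures / len(deployments)

print(f"{deployment_frequency:.2f} deploys/day, CFR={change_failure_rate:.0%}")
```

Tracking these per team before and after platform adoption gives the platform team a concrete signal that the product is (or is not) moving the needle.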
The daily tasks of a platform team and a DevOps team differ in some respects. DevOps engineers use certain tools and automation to streamline getting code to production, managing it, and observing it with logging and monitoring tools. They mostly work on building an effective CI/CD pipeline. Platform engineers take all the tools used by DevOps and integrate them into a shared platform that different IT teams can use at an enterprise level. This eliminates the need for teams to configure and manage infrastructure and tooling on their own and saves significant time, effort, and resources. Platform engineers also create the documentation and optimize the platform so developers can self-serve the tools and infrastructure in their workflow. Platform teams are typically required only in mature companies with many different IT teams using complex tools and infrastructure; in such an engineering landscape, a dedicated platform team to manage the complexity naturally becomes necessary. The platform team builds and manages the infrastructure, helping DevOps speed up continuous delivery. In startups, however, it is common for the DevOps team to perform platform engineering tasks (configuring Terraform, for example). Platform Engineering vs. SRE Site reliability engineers (SREs) focus on ensuring the application is reliable, secure, and always available. They work with developers and Ops teams to create systems and infrastructure that support delivering highly reliable applications. SREs also perform capacity planning and infrastructure scaling and manage and respond to incidents so that the platform meets its required service level objectives (SLOs). Platform engineering, on the other hand, manages complex infrastructure and builds an efficient platform for developers to optimize the SDLC. While both work on platforms and their roles sound similar, their goals differ. The major difference between platform engineering and SRE lies in whom they face and cater their services to. 
SREs face end users and ensure the application is reliable and available for them. Platform engineers face internal developers and focus on improving their developer experience. The daily tasks of both teams differ with respect to these goals. Platform engineering provides the underlying infrastructure for rapid application delivery, while SREs do the same to deliver highly reliable and available applications. SREs work more on troubleshooting and incident response, while platform engineers focus on complex infrastructure and enabling developer self-service. To achieve their respective goals, SREs and platform teams use different tools in their workflows. SREs mostly use monitoring and logging tools like Prometheus or Grafana to detect anomalies in real time and to set automated alerts. Platform teams work with different sets of tools spanning various stages of the software delivery process, such as container orchestration tools, CI/CD pipeline tools, and IaC tools. All in all, SREs and platform teams both work on building reliable and scalable infrastructure, with different goals but some overlap in the tools they use. How To Implement Platform Engineering in an Organization A platform team will not be an immediate requirement in a startup with a few engineers. Once the organization grows to multiple IT teams and starts dealing with complex tooling and infrastructure, it is ideal to have platform engineers manage the complexity. Create the Role (Head/VP of Engineering) Top-level engineers like the VP or Head of Engineering usually create the role of a platform engineer when developers spend more time configuring tools and infrastructure than delivering the business logic. They typically find that most IT teams are solving the same problems, like spinning up a new environment, which slows the delivery process. 
So the Head of Engineering would define the scope of platform engineering, identify the areas of responsibility, and create the role of a platform engineer/team. Create an Internal Developer Platform (Platform Engineers/Team) The platform engineer starts by taking stock of the infrastructure and tools already used in the organization. They would then interview developers to understand their challenges and build the internal developer platform with tools and services that solve problems at an enterprise level. They will build the platform in a way that is flexible and facilitates different architectures and deployment styles. Platform engineers also create documentation and conduct training sessions to help developers self-serve the platform. It is ideal for platform engineers to have a developer background, so they know what it is like to be a developer and understand the challenges better. Onboard Users (Application Developers) Once the platform is ready, platform engineers onboard application developers. This requires internal marketing: letting teams know about the platform and what it can solve. The best way to onboard users is to pull them to the platform rather than throw the platform at them. This can be done by starting with a small team and helping them overcome a challenge. For example, help a small team optimize their CI/CD pipeline and provide the best experience possible in the process. Word of mouth from early adopters will have a positive ripple effect throughout the organization, which will help onboard more users to the platform. Platform engineering does not stop at onboarding users. It is a continuous process in which the platform accommodates emerging tools and technologies and the changing needs and requirements of its users. 
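A self-service workflow like the one described above often starts as nothing more than a small CLI that turns a developer's request into a standardized provisioning spec. Everything below (command names, environment types, quotas) is a hypothetical sketch, not a real platform's API:

```python
import argparse

# Hypothetical self-service CLI: a developer requests a namespaced sandbox
# environment without filing a ticket; the platform team owns what happens
# behind this interface (Terraform, Kubernetes, etc.).
def build_env_request(team: str, env_type: str, ttl_hours: int) -> dict:
    """Validate a request and turn it into a provisioning spec."""
    if env_type not in {"dev", "staging", "perf"}:
        raise ValueError(f"unsupported environment type: {env_type}")
    return {
        "namespace": f"{team}-{env_type}",
        "ttl_hours": ttl_hours,                   # auto-teardown bounds cost
        "quota": {"cpu": "4", "memory": "8Gi"},   # standardized defaults
    }

def main(argv=None):
    parser = argparse.ArgumentParser(description="Request a sandbox environment")
    parser.add_argument("team")
    parser.add_argument("--type", default="dev", dest="env_type")
    parser.add_argument("--ttl", type=int, default=24)
    args = parser.parse_args(argv)
    spec = build_env_request(args.team, args.env_type, args.ttl)
    print(f"provisioning {spec['namespace']} (ttl {spec['ttl_hours']}h)")
    return spec

if __name__ == "__main__":
    main()
```

The value is not in the code but in the contract: every team gets the same namespace naming, quotas, and teardown policy by default, which is the standardization-by-design benefit discussed earlier.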
Conclusion: Platform Engineering With Open-Source Tools It is important to select an open-source platform built to equip platform engineers with a standardized toolchain that helps developers accelerate software delivery. Devtron is one such platform; it helps developers by automating CI/CD, security, and observability across the end-to-end SDLC.
Test tools are software or hardware designed to test a system or application. Various test tools are available for different types of testing, including unit testing, integration testing, and more. Some test tools are intended for developers during the development process, while others are designed for quality assurance teams or end users. In addition to automating testing tasks, test tools can produce test data, monitor system performance, and report on test results. Nowadays, organizations require not just high-quality software applications but also timely releases to ensure better results and attract potential customers. The key to this is premium-quality products developed at high speed. To achieve it, they can use the various test tools currently available to build products that are unmatched in the market. What Is a Test Tool? A test tool is a product that supports one or more test activities, including test planning, requirements gathering, building, running tests, logging defects, and test analysis. Using a test management or Computer-Aided Software Engineering (CASE) tool, you can identify the input fields, including the range of valid values. The test tool will identify all the fields and help you get started with the test design, but it won't complete the task for you, as further verification may be required. A test coverage tool can also pair with a test design tool. For instance, if an existing test suite has already determined which components are covered, a coverage tool can identify the paths needed to cover the untested branches. Why Should You Consider Using a Test Tool? Software testers, developers, and QA engineers utilize software test tools to ensure that software applications work as intended and appear as expected. Throughout the software development lifecycle (SDLC), test tools check everything from individual code units to entire software systems. 
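At the smallest end of that spectrum sits the unit test. As a minimal illustration, here is a self-contained example using Python's standard unittest module; the pricing helper is a made-up example, not from any particular product:

```python
import unittest

# Code under test: a tiny, hypothetical pricing helper.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Running `python -m unittest` in the directory containing this file discovers and executes the three test cases. Note that each test checks exactly one behavior, in line with the "keep your goals focused" guidance later in this article.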
Organizations can ensure that the software they produce complies with specifications and requirements, offers a superior user experience, and is largely free of flaws and errors by using efficient software testing techniques. Keep your goals focused: Each test should have a specific objective and focus on evaluating a single software feature or component, such as the user interface or security. Start with unit and integration tests: Unit tests examine an application's individual modules or components. Integration tests evaluate units assembled into subsystems. Run regression tests: Regularly perform regression testing to ensure that past functionality does not break; run a regression test for each new feature or piece of code. Track bugs carefully: Establish a systematic approach to reporting and tracking bugs, including the details each defect report needs. Test various scenarios and environments: Test different combinations of browsers, browser versions, operating systems, and resolutions. Rely on analytics: Track the outcome of each test, compile the data, and, after determining where flaws are most likely to be found, create new tests focusing on those problem areas. Try out the UI: You can capture the user experience using scenarios, functional testing, and exploratory testing. Benefits of Test Tools Software testing tools are advantageous for QA teams, testers, and developers. They make tests more accurate, increase the amount of tested code, speed up testing cycles, and provide early feedback to developers, all of which contribute to the delivery of higher-quality applications. Increased safety: Software vulnerabilities are a significant target for unscrupulous actors, and cybercrime poses a danger to both large and small enterprises. By ensuring that software applications are free of bugs and vulnerabilities that hackers could exploit, test tools shield businesses, their users, business partners, and customers from cybercriminals. 
Greater cost efficiency: Test tools make it easier and more affordable for developers to address bugs by spotting flaws and design problems earlier in the software development lifecycle. Check for compatibility: A test tool can ensure software applications function correctly on various browsers, operating systems, and devices. Client satisfaction: In the end, test tools help development companies produce goods that meet customer expectations and criteria. Limitations of Test Tools While test tools are advantageous, they also have some limitations: Automation is more expensive up front; the initial investment in a test tool is higher than in manual testing. You cannot automate everything; some tests must be done manually. You cannot always rely on test tools alone. Where Does the Test Tool Fit in the Software Development Life Cycle? QA engineers adhere to well-established software testing life cycle phases to ensure the application functions as planned. The flowchart below will help you understand where the test tool fits in the software development life cycle. Requirements analysis: In this stage, testers identify the application's intended users and map the environments in which the software will operate. The actual planning begins when this stage concludes. Any potential result should be considered, both now and in the future. QA experts consider what is required to finish the test and achieve the objectives at this point. They should consider the following: What is required to evaluate the application? How many users can the application accommodate before scaling? How many resources, such as CPU and memory, does the application have before scaling out? Test case development. 
QA experts must decide the technical specifications for each test case after planning their tests and the things they will test. They should gather the following information throughout the test case development phase: any required automation code, where to keep the automation code and who will need access to it, who is executing the tests, and written-out test cases. Creating a test environment: QA experts should decide where the tests will execute at this step of the software testing lifecycle. The objective is to develop a test environment efficiently while ensuring that the environment accurately simulates how users will interact with the application. QA should put one of the several deployment alternatives into practice. Testing the test cases: All QA engineers should have shared access to the testing environment and related code. Once you have the shared access, you can start the testing process. This is where a test tool comes into the picture; you must select the right test tool to suit your business requirements. Reporting: Testing results aren't beneficial without reports. Vice presidents and directors, among other stakeholders, might wish to view the results of the initial tests to determine whether an application performs as intended. Management will seek responses to inquiries like the ones below: Will the app be released to the public? Should the code be modified? Do you see any bugs? In light of this, the report must contain as many specifics as possible regarding the outcomes of the tests. Types of Test Tools Tools for managing various functional and non-functional tests are available in the market. Functional tests check whether an application has all the features and requirements outlined in the project requirements. Non-functional testing examines the software's performance, usability, dependability, security, and other qualities to identify how well the software runs and what kind of user experience it delivers. 
Today's development teams use several popular software testing technologies. Test management systems control several facets of the testing protocol by monitoring activities, analyzing data, managing test cases, running automated tests, and organizing and monitoring manual testing. Take a glance below to understand which type of test tool covers which module during the testing process. Unit testing tools ensure individual code modules or units function as intended; unit testing is the most fundamental component of software testing. Integration testing tools discover faults when combining various modules. Regression testing tools check for degradation or breakage of functionality caused by new code or additions to the software. Performance testing tools, commonly referred to as load testing tools, assess how a piece of software performs as it scales to handle more users and data. Bug tracking tools assist in finding bugs during testing and keeping track of bug remedies. Automation testing tools manage the process of creating and running automated tests and tracking and reporting results. Cross-browser testing tools check an application's performance across many platforms, devices, and browsers. Security testing tools seek software vulnerabilities that bad actors could exploit. UI testing tools analyze the user interface to ensure the program provides a superior user experience. Roadmap to Successfully Implement a Test Tool If you've followed along this far, we can now look at how to implement a test tool successfully. It makes more sense to use it first for automating advanced test cases before you finish the regression ones. However, two questions now surface: What is the ideal time to start developing progression tests? How should the effort on regression testing be prioritized? When to Start Working on Progression? When a new test automation project launches, the investment is more than the return. 
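Picking up the regression tooling mentioned above: a regression suite often pins currently observed ("golden") outputs so that a later refactor cannot silently change behavior. The slug generator below is a hypothetical example used only to illustrate the pattern:

```python
import unittest

# Hypothetical slug generator whose output existing links depend on.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Golden outputs observed today. If a future refactor (say, a regex-based
# rewrite) changes any of these, the regression suite fails loudly instead
# of the change slipping into production unnoticed.
GOLDEN = {
    "Hello World": "hello-world",
    "  Spaced   Out  ": "spaced-out",
    "Already-Slugged": "already-slugged",
}

class SlugifyRegressionTest(unittest.TestCase):
    def test_golden_outputs_unchanged(self):
        for title, expected in GOLDEN.items():
            with self.subTest(title=title):
                self.assertEqual(slugify(title), expected)
```

New golden entries are added whenever a new input class appears in production, which is exactly the "run a regression test for each new feature or piece of code" guidance from earlier in the article.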
It would probably take longer to implement the functionality of the user story than it would to build all the infrastructure required to make the initial tests function. Therefore, experts advise starting with test suites that cover the system's fundamental functionality and key features. This suite helps you construct and stabilize the test infrastructure around the application's key features. Moreover, this is the moment for you to incorporate the automated tests into the CI tools and development procedures. Make sure that all tests pass as you create the suite. Adjust your tests and infrastructure as necessary to accommodate changes to the product. If a test fails due to an application bug, try to fix the bug as soon as you can rather than just opening a bug report. Additionally, ensure that the tests execute well on all devices; you'll want to give developers the option to perform the tests on their own devices shortly. It would be best if you worked to keep your tests simple to manage and ensure that failures are simple to identify. Prioritizing the Work to Fill the Regression Gap Your primary focus should be on the things that add the most value to the company, and the product's ROI is directly affected by locating and eliminating defects in these features. It is not worthwhile to build tests for a feature or component if it is anticipated that it will be replaced shortly and have its entire behavior altered. You need to wait until the new behavior establishes itself before creating the new tests to go along with it. It is also not particularly cost-effective to automate a feature if it is highly stable and there are no plans to modify it anytime soon. There is an extremely slim chance that something will negatively impact it. This is especially true for legacy components, whose code is never touched even though they may be essential to the system. Searching for anything with more risk will result in a higher value more quickly. 
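The risk-based prioritization described above can be sketched as a small scoring exercise: rank features by likelihood of change times impact of breakage, skipping both frozen legacy code and features about to be rewritten. The feature names and scores below are hypothetical:

```python
# Hypothetical feature inventory:
# (name, change_likelihood 0-1, breakage_impact 0-1, planned_rewrite)
features = [
    ("checkout",      0.8, 0.9, False),
    ("legacy-report", 0.1, 0.6, False),  # stable, code rarely touched
    ("search-v2",     0.9, 0.7, True),   # behavior about to change entirely
    ("login",         0.5, 0.9, False),
]

def automation_priority(features):
    """Rank automation candidates by risk score, excluding planned rewrites."""
    candidates = [
        (name, round(likelihood * impact, 2))
        for name, likelihood, impact, rewrite in features
        if not rewrite  # wait until the rewritten behavior settles
    ]
    return sorted(candidates, key=lambda item: item[1], reverse=True)

print(automation_priority(features))
```

Here "checkout" scores highest (likely to change, costly to break), "login" follows, and the stable legacy report lands last, mirroring the article's advice to chase risk rather than coverage for its own sake.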
A feature that has been completed and is functioning correctly but will soon have its underlying technology replaced or undergo a significant internal reorganization is an excellent candidate for test automation. Changing the internal structure of a feature while keeping the external functionality improves efficiency. In this instance, the tests should continue to pass after the changes as they did before. Think about covering a feature that frequently fails. However, remember that a component that frequently fails signifies a flawed design. Because of this, the feature has a strong chance of being a candidate for restructuring or a complete overhaul. Before a bug is repaired, you must build an automated test that replicates it; a bug found manually shouldn't be fixed until a test successfully reproduces it. How to Use a Test Tool? Are you confused about which test tool you need to become familiar with? Given the market's abundance of automation testing solutions, the one you pick should be capable of streamlining your responsibilities and providing you with the necessary break. Generally, we use test tools to test websites or mobile applications. However, on-premise testing has several significant challenges: infrastructure, costs, maintenance, scalability, etc. To eliminate the pain points of on-premise testing, a viable solution is to leverage cloud-based test tools like LambdaTest. Improving the Efficiency of the Test Tool After going through the test tool in detail, let’s explore how to increase its efficiency. There are several methods of doing so, and a few of them are discussed below. Continuous Integration "Continuous integration" is a term that most developers are familiar with. Continuous integration automatically builds and tests the code before each check-in; the change is checked in only if everything passes. 
The process that builds the code and executes the tests is usually carried out on one or more dedicated build servers rather than on the developer's computer. This gives the process centralized control and frees up the developer's computer to perform other tasks while it runs. Acceptance Test-Driven Development Continuous integration answers the issues of who, when, and how to run tests. It does not address the issues of who, when, and how to write tests. Acceptance test-driven development does: the tests are written before the product code is implemented. The execution of the tests may reveal new issues and loopholes in the user story's specification. Additionally, it motivates the team to begin structuring the product code in a stable way. However, the tests cannot pass at this point. To make the tests pass, the engineers put the code into practice. They must refrain from creating any functionality that goes beyond what is covered by the tests. They must also run the existing tests to ensure they didn't break anything. Once the tests pass, the user story can be presented to the product owner and the client, or even deployed to production. This test-first method ensures that testers and automation developers are involved as early as possible, allowing them to significantly impact the product's quality. Furthermore, using the procedure from the start of the project means that the tests cover all of the defined functionality. Upgrade Tests Upgrade tests are significantly more challenging than installation tests. Compared to installation tests, the test matrix for upgrade tests has two additional dimensions: The source version from which the upgrade starts; the most recent version is not always the starting point, as users can skip one, two, or more versions when upgrading. Additionally, there may be a few minor versions and possibly a few beta versions for each major version. User data, which the user most likely created while using the previous version. 
Given that the setup parameters are still applicable in the upgraded version, the user would presumably expect to be able to use his data with it. Approaches for the Upgrade Test Examining the upgrade's direct effects: After installing the old version and possibly adding or altering some data, the test should run the upgraded application to check whether it contains the newly copied files. You must also confirm whether any new configuration items need to be added. Upgrading the application before running all the tests: We may directly upgrade and run the tests while maintaining the environment from the prior build or version. Functional tests frequently produce the data they need (for isolation purposes), but they never check whether the data produced in the prior version is still usable. Explicit upgrade tests: Most tests require creating or modifying some data before the upgrade and executing a more focused test following the upgrade to ensure that the data is still usable. To put the upgrade testing topic in a nutshell, the test matrix and the difficulty of effectively implementing these tests are enormous. You must make trade-offs based on risk and cost analysis to decide what to test. Wrapping Up When asked what customers hope to gain from test automation, the most typical response is a reduced time to test the software before it is made available. While this is a significant objective, test automation can help you with a lot more than just this. It typically takes quite a while to reduce the number of manual test cycles, but you might observe the other advantages earlier. We hope that by now you have a good grasp of test tools for software testing and can choose the one that best suits your business needs.
A blue-green deployment model is a software delivery release strategy based on maintaining two separate application environments. The existing production environment running the current production release of the software is called the blue environment, whereas the new version of the software is deployed to the green environment. As part of testing and validation of the new version of the software, application traffic is gradually re-routed to the green environment. If no issues are found, then the green environment becomes the new blue environment. The former blue environment can be taken down, and a new green environment can be established for the next release. Why Is Blue-Green Deployment Useful? The primary benefits of implementing a blue-green strategy are 1) minimal or zero application downtime and 2) no negative impact on end-users when switching users to a new software release or when rolling back a release in the event of unforeseen issues with the new release or deployment. The concepts and components required to implement blue-green deployments include but are not limited to, load balancers, routing rules, and container orchestration platforms like Kubernetes. How Blue-Green Deployment Works As shown in the image, let’s assume that version 1 is the current version of the application, and we want to move to the new update, version 1.1. Version 1 will be called the blue environment, and version 1.1 will be called the green environment. The Process of Switching Traffic Between the Two Environments Now that we have two instances of the application, named blue and green, we want users to access the new green (v 1.1) instance rather than the older blue instance. For this to happen, we normally use a load balancer instead of a DNS record exchange because DNS propagation is not instantaneous. 
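The load-balancer cut-over described above can be modeled as weighted routing. The sketch below is a toy simulation, not a real load balancer's API, but real balancers expose an analogous per-upstream weight setting:

```python
import random

# Toy model of a load balancer splitting traffic between blue and green.
def route(weights: dict, rng=random.random) -> str:
    """Pick an upstream environment according to its traffic weight."""
    r, cumulative = rng(), 0.0
    for env, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return env
    return env  # fall through to the last environment on rounding error

# Start conservatively: 90% of requests stay on blue, 10% probe green.
weights = {"blue": 0.9, "green": 0.1}

random.seed(42)  # deterministic for the demo
sample = [route(weights) for _ in range(1000)]
print(sample.count("green"))  # roughly 100 of 1000 requests hit green
```

Raising the green weight toward 1.0 (and then swapping the labels) completes the cut-over; dropping it back to 0.0 is the instant rollback the article highlights.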
By using load balancers and routers, there is no need to change DNS records because the load balancer references the same DNS record but routes new traffic to the green environment. This gives administrators full control of user access, which is important because it enables quickly switching users back to version 1 (the blue instance) in the event of a failure in the green instance. Because of the speed of the switchover, most users won’t notice that they are now accessing a newer version of the service or application — or that they have been rolled back to a previous version. Monitoring Traffic can be switched from the blue to the green environment gradually or all at once. As the traffic flows to the green instance, the DevOps engineers get a small window of time to run smoke tests on the green instance. This is crucial, as they need to ensure that all aspects of the new version are running as they should before users are impacted on a wide scale. The Benefits of Implementing Blue-Green Deployments Improved user experience — As noted above, users don’t experience any downtime, and the new environment can be rolled back instantly to the previous best state if necessary. Disaster recovery — The Blue-Green strategy is also a best practice for simulating and running disaster recovery scenarios because of the inherent equivalence of the blue and green instances and the ability to instantly failover to the (back-up) green instance in case of an issue with the (production) blue instance. Simulating actual production scenarios — With a Canary deployment, the testing environment is often not identical to the final production environment. Instead, we use a small portion of the production environment and move a small amount of traffic to the new system. (Read more about Canary Analysis here.) By contrast, in a Blue-Green deployment, the new green instance can simulate the entire production environment running in the blue instance. 
Increasing developer productivity — Gone are the days when DevOps engineers had to wait for low-traffic windows to deploy updates. The Blue-Green strategy eliminates the need for maintaining downtime schedules, and developers can quickly move their updates into production as soon as they are ready with their code. Best Practices and Challenges To Keep In Mind When Implementing a Blue-Green Deployment Choose Load Balancing Over DNS Switching Do not use multiple domains to switch between servers. This is a very old way of diverting traffic. DNS propagation takes from hours to days, and it can take browsers a long time to get the new IP address. Some of your users may still be served by the old environment. Instead, use load balancing. Load balancers enable you to set your new servers immediately without depending on the DNS. This way, you can ensure that all traffic is served to the new production environment. Keep Databases in Sync One of the biggest challenges of blue-green deployments is keeping databases in sync. Depending on your design, you may be able to feed transactions to both instances to keep the blue instance as a backup when the green instance goes live. Or you may be able to put the application in read-only mode before cut-over, run it for a while in read-only mode, and then switch it to read-write mode. That may be enough to flush out many outstanding issues. Backward compatibility is business critical. Any new users added to the new version must still have access in the event of a rollback. Otherwise, the business could, for instance, lose new customers. In addition, any new data added to the new version must also be passed to the old database in the event of a rollback. Execute a Rolling Update The container architecture has enabled the use of a rolling, or seamless, blue-green update. Containers enable DevOps engineers to perform a blue-green update only on the required pod. 
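One way to realize the "feed transactions to both instances" idea from the database-sync discussion above is a dual-write wrapper. The sketch below stands in for real databases with plain dicts and is an illustration of the concept, not a recommendation of dual writes over database-level replication:

```python
# Sketch of dual-writing during a blue-green cut-over: every write goes to
# both stores so the blue instance remains a valid rollback target even for
# data created after the switch.
class DualWriter:
    def __init__(self, blue_db: dict, green_db: dict):
        self.blue_db = blue_db
        self.green_db = green_db

    def write(self, key, value):
        # Write green first (the live target), then mirror to blue so a
        # rollback does not lose records created after the switch.
        self.green_db[key] = value
        self.blue_db[key] = value

blue, green = {}, {}
writer = DualWriter(blue, green)
writer.write("user:42", {"plan": "pro"})
assert blue == green == {"user:42": {"plan": "pro"}}
```

This is what makes the backward-compatibility requirement concrete: a customer signed up on green still exists on blue if traffic is switched back.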
This decentralized architecture ensures that other parts of the application are not affected. Challenges To Consider While Implementing Blue-Green Deployments Errors When Changing User Routing Blue-green is the best choice of deployment strategy in many cases, but it comes with some challenges. One issue is that during the initial switch to the new (green) environment, some sessions may fail, or users may be forced to log back into the application. Similarly, when rolling back to the blue environment in case of an error, users logged in to the green instance may face service issues. With more advanced load balancers, these issues can be overcome by slowing the movement of new traffic from one instance to another. The load balancer can either be programmed to wait for a fixed duration until users become inactive or force-close sessions for users still connected to the blue instance after the specified time limit. This might slow down the deployment process and result in some failed or stuck transactions for a small fraction of users, but it will provide a far more seamless and uninterrupted service quality than having routers force the exit of all users and divert traffic. [Diagrams: seamless blue-green deployment vs. instantaneous blue-green deployment] High Infrastructure Costs The elephant in the room with blue-green deployments is infrastructure cost. Organizations that adopt a blue-green strategy need to maintain an infrastructure that is double the size required by their application. If you utilize elastic infrastructure, the cost can be absorbed more easily. Similarly, blue-green deployments can be a good choice for applications that are less hardware intensive. Code Compatibility Lastly, the blue and green instances live in the production environment, so developers need to ensure that each new update is compatible with the previous environment. 
For example, if a software update requires changes to a database (e.g., adding a new field or column), the blue-green strategy is difficult to implement because, at times, traffic is switched back and forth between the blue and green instances. It should be a mandate to use a database that is compatible across all software updates, as some NoSQL databases are. Conclusion The blue-green software deployment strategy can involve significant costs, but it is one of the most widely used advanced deployment strategies. Blue-green is particularly helpful when you expect environments to remain consistent between releases and you require reliability in user sessions across new releases.