Agile

The Agile methodology is a project management approach that breaks larger projects into several phases. It is a process of planning, executing, and evaluating with stakeholders. Our resources provide information on processes and tools, documentation, customer collaboration, and adjustments to make when planning meetings.

Latest Refcards and Trend Reports

Trend Report: DevOps
Refcard #266: Agile Patterns
Refcard #093: Lean Software Development

DZone's Featured Agile Resources

The Art of Being Ready: Reliability in Extreme Conditions

By Eugene Retunsky
When it comes to online services, uptime is crucial, but it’s not the only thing to consider. Imagine running an online store — having your site available 99.9% of the time might sound good, but what if that 0.1% of downtime happens during the holiday shopping season? That could mean losing out on big sales. And what if most of your customers are only interested in a few popular items? If those pages aren’t available, it doesn’t matter that the rest of your site is working fine. Sometimes, being available during peak moments can make or break your business.

It’s not just e-commerce — a small fraction of airports handle most of the air traffic, just a tiny minority of celebrities are household names, and only a handful of blockbuster movies dominate the box office each year. It’s the same distribution pattern everywhere.

To be successful, it’s important to not only maintain uptime but also be ready for significant events. Some teams implement change freezes before key times, such as Prime Day, Black Friday, or Cyber Monday. This approach is reasonable, but it can be limiting, as it doesn’t allow teams to quickly respond to unexpected opportunities or critical situations. Additionally, not all demand can be predicted, and it’s not always clear when those high-impact events will happen.

This is where “Reliability when it matters” comes in. We need to be able to adapt and respond quickly to changes in customer demand without being held back by code freeze periods, and we need to be prepared for unforeseen situations. By considering time as a valuable resource and understanding the relative significance of different moments, organizations can better translate customer value and adjust risk and availability budgets accordingly. This approach allows organizations to be flexible and responsive to changes in demand without missing out on crucial features or opportunities. In the end, it’s about being ready when luck comes your way.

It’s important to note that a system is not static and is constantly changing. The system itself, the infrastructure it’s hosted on, and the engineering organization all change over time. This means that knowledge about the system also changes, which can impact reliability. Besides that, incidents and outages are inevitable, no matter how much we try to prevent them. Bugs will be shipped, bad configurations will be deployed, and human error will occur. There can also be interdependencies that amplify outages. An incident rarely has a single cause and is often a combination of factors coming together. The same goes for solutions, which are most effective when they involve a combination of principles and practices working together to mitigate the impact of outages.

Operating a system often means dealing with real-world pressures, such as time, market, and management demands to deliver faster. This can lead to shortcuts being taken and potentially compromise the reliability of the system. Growth and expansion of the user base and organization can also bring additional complexity and result in unintended or unforeseen behaviors and failure modes. However, by adopting a holistic approach and utilizing the principles and practices of engineering I’m going to cover below, we can have the best of both worlds: speed and reliability. It’s not an either-or scenario but rather a delicate balance between the two.

What Is Reliability?

Reliability is a vital component of any system, as it guarantees not only availability but also proper functioning.
A system may be accessible, yet if it fails to operate accurately, it lacks reliability. The objective is to achieve both availability and precision within the system, which entails containing failures and minimizing their impact. However, not all failures carry equal weight. For instance, an issue preventing checkout and payment is far more crucial than a minor glitch in image loading. It’s important to focus on ensuring important functions work correctly during critical moments. In other words, we want to focus on being available and functioning correctly during peak times, serving the most important functionality, whether it be popular pages or critical parts of the process.

Making sure systems work well during busy times is tough, but it’s important to approach it in a thoughtful and thorough way. This includes thinking about the technical, operational, and organizational aspects of the system. Key parts of this approach include:

• Designing systems that are resilient, fault-tolerant, and self-healing.
• Proactively testing systems under extreme conditions to identify potential weak spots and prevent regressions.
• Effective operational practices: defining hosting topology, auto-scaling, automating deployment/rollbacks, implementing change management, monitoring, and incident response protocols.
• Navigating the competing pressures of growth, market demands, and engineering quality.
• Cultivating a culture that values collaboration, knowledge sharing, open-mindedness, simplicity, and craftsmanship.

It also requires a focus on outcomes in order to avoid indecision and provide the best possible experience for customers. In what follows, we’re going to expand on the concept of “Reliability when it matters” and provide practical steps for organizations to ensure availability and functionality during critical moments. We’ll discuss key elements such as designing systems for reliability and proactively testing and monitoring, and also delve into practical steps like automating deployment and incident response protocols.

Reliability Metrics: A Vital Tool for Optimization

When optimizing a service or system, it’s essential to first define your objectives and establish a method for monitoring progress. The metrics you choose should give you a comprehensive view of the system’s reliability, be easy to understand and share, and highlight areas for improvement. Here are some common reliability metrics:

• Incident frequency: the number of incidents per unit of time.
• Incident duration: the total amount of time incidents last.

While these metrics are a good starting point, they don’t show the impact of incidents on customers. Let’s consider the following graph:

[Graph: blue — requests per five minutes; red — errors; green — reliability on a 0..1 scale]

Suppose we have two incidents, one at 1 am and one at 2 pm, each causing about 10% of requests to fail for an equal duration of 30 minutes. Treating these incidents as equally impactful on reliability wouldn’t reflect their true effects on customers. By considering traffic volume, the reliability metric can better show that an incident during peak traffic has a bigger impact and deserves higher priority. Our goal is to have a clear signal that an incident during peak traffic is a major problem that should be fixed. This distinction helps prioritize tasks and make sure resources are used effectively. For example, it can prevent the marketing team’s efforts to bring more visitors from being wasted.
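To make the difference concrete, here is a minimal sketch in Python. The request volumes and failure counts are invented for illustration, not taken from the graph above: a per-incident error rate treats both outages as equal, while weighting by traffic makes the peak-hour incident stand out.

```python
# (hour, total_requests, failed_requests) for two 30-minute incidents:
# one off-peak, one at peak traffic, each failing ~10% of requests.
windows = [
    ("01:00", 2_000, 200),     # off-peak incident
    ("14:00", 50_000, 5_000),  # peak-traffic incident
]

# Naive view: both incidents show the same 10% error rate.
for label, total, failed in windows:
    print(f"{label}: error rate = {failed / total:.1%}")

# Traffic-weighted view: reliability over the whole day reflects how many
# real customer requests actually failed, so the peak incident dominates.
daily_total = 1_000_000  # assumed total requests across the day
daily_failed = sum(failed for _, _, failed in windows)
print(f"traffic-weighted daily reliability: {1 - daily_failed / daily_total:.4f}")
```

The same idea extends naturally to weighting by revenue or by the criticality tier of the affected functionality.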
Additionally, tracking the incident frequency per release can help improve the deployment and testing processes and reduce unexpected issues. In the end, this should lead to faster delivery with lower risks.

Digging Deeper Into Metrics

To get a deeper understanding of these metrics and find areas for improvement, try tracking the following:

• Time to detection: how long it takes to notice an incident.
• Time to notification: how long it takes to notify relevant parties.
• Time to repair: how long it takes to fix an incident.
• Time between incidents: this can reveal patterns or trends in system failures.
• Action item completion rate: the percentage of follow-up tasks completed.
• Action item resolution time: the time it takes to implement solutions.
• Percentage of high-severity incidents: this measures the overall reliability of the system.

Finally, regularly reviewing these metrics during weekly operations can help focus on progress, recognize successes, and prioritize. By making this a regular part of your culture, you can use the data from these metrics to drive better decisions and gradually optimize the system. Remember, the usefulness of metrics lies in the actions taken from them and their ability to drive progress. It’s a continuous feedback loop of refining both the data and the action items to keep the system improving.

Designing for Resilience

A system that isn’t designed to be resilient probably won’t handle peak times as smoothly. Here are some considerations that can help ensure a system’s reliability under a variety of conditions.

Do’s:

• Prepare for component failure: By partitioning the service or using isolation, you can limit the blast radius and reduce the impact of failures.
• Implement fault tolerance: Mechanisms like retries, request hedging, and backpressure will improve the system’s availability and performance (a small retry sketch follows after this list).
• Use rate-limiting and traffic quotas: Don’t rely solely on upstream dependencies to protect themselves. Use rate-limiting and traffic quotas to ensure that your system remains reliable.
• Categorize functionality: Prioritize functions by categorizing them into “critical,” “normal,” and “best-effort” categories. This will help keep essential functions available at all costs during high demand.
• Implement error-pacing and load-shedding: These mechanisms help prevent or mitigate traffic misuse or abuse.
• Continuously challenge the system: Continuously challenge the system and consider potential failures to identify areas for improvement.
• Plan for recovery: Implement fail-over mechanisms and plan for recovery in the event of a failure. This will help reduce downtime and ensure that essential services are available during challenging conditions.
• Make strategic trade-offs: Make strategic trade-offs and prioritize essential services during challenging external conditions.

Don’ts:

• Don’t assume callers will use your service as intended.
• Don’t neglect rare but possible failures; plan and design prevention measures.
• Don’t overlook the possibility of hardware failures.

I explored some of these ideas in the following blog posts:

• Ensuring Predictable Performance in Distributed Systems
• Navigating the Benefits and Risks of Request Hedging for Network Services
• FIFO vs. LIFO: Which Queueing Strategy Is Better for Availability and Latency?
• Isolating Noisy Neighbors in Distributed Systems: The Power of Shuffle-Sharding
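As a small illustration of one item from the list above, here is a sketch of retries with capped, jittered exponential backoff in Python. It is one possible shape for such a mechanism, not a prescription; in a real system you would combine it with timeouts, rate limiting, and backpressure so that the retries themselves don’t amplify an outage.

```python
import random
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.1, budget=2.0):
    """Retry a flaky zero-argument callable with jittered exponential backoff."""
    start = time.monotonic()
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts or time.monotonic() - start > budget:
                raise  # out of attempts or time budget: surface the failure
            # Full jitter keeps a fleet of retrying clients from synchronizing
            # into waves that hammer a recovering dependency.
            time.sleep(random.uniform(0, base_delay * 2 ** (attempt - 1)))

def flaky():
    """Stand-in for an unreliable network call (hypothetical)."""
    if random.random() < 0.5:
        raise RuntimeError("simulated transient failure")
    return "ok"

print(call_with_retries(flaky))
```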
Reliability Testing

Reliability testing is essential for maintaining the availability and functionality of a system during high demand. To ensure a reliable system, it is important to:

• Design for testability so each component can be tested individually.
• Have good enough test coverage as a prerequisite for being agile.
• Calibrate testing by importance, focusing on essential functions and giving a bit of slack to secondary or experimental features.
• Perform extensive non-functional testing, such as load testing, stress testing, failure-injection testing, soak testing, and fuzzing/combinatorial testing.

It’s crucial to avoid:

• Blindly pursuing high coverage numbers.
• Assuming that a single data point provides a comprehensive understanding.
• Failing to ensure that results are robustly reproducible.
• Underinvesting in testing environments and tooling.

Proper testing not only ensures correctness, serves as living documentation, and prevents non-functional regressions, but it also helps engineers understand the system more deeply, flex their creative muscles while trying to challenge it, and ultimately create more resilient, reliable systems for the benefit of all stakeholders.

Remember, if you don’t deliberately stress test your system, your users will do it for you. And you won’t be able to choose when that moment comes.
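In that spirit, deliberate load testing doesn’t have to start big. The sketch below uses only the Python standard library and a hypothetical endpoint; it fires concurrent requests and reports the error rate and latency percentiles. Dedicated tools such as k6 or Locust go much further, but even a crude probe like this can reveal how a service degrades under pressure.

```python
import concurrent.futures
import statistics
import time
import urllib.error
import urllib.request

TARGET = "http://localhost:8080/checkout"  # hypothetical endpoint under test
REQUESTS = 500
CONCURRENCY = 50

def hit(url):
    """Issue one request; return (success, latency_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        ok = False
    return ok, time.monotonic() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, [TARGET] * REQUESTS))

latencies = sorted(latency for _, latency in results)
errors = sum(1 for ok, _ in results if not ok)
print(f"error rate: {errors / REQUESTS:.1%}")
print(f"p50: {statistics.median(latencies) * 1000:.0f} ms")
print(f"p99: {latencies[int(0.99 * len(latencies))] * 1000:.0f} ms")
```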
Reliability-Oriented Operations

Operating a distributed system is like conducting an orchestra: a delicate art that requires a high level of skill and attention to detail. Many engineers tend to underestimate the importance of operations or view it as secondary to software development. In reality, however, operations can have a significant impact on the reliability of a system, just as a conductor’s skill and understanding of the orchestra are vital to a harmonious performance.

For example, cloud computing providers often offer services built on open-source products. It’s not just about using the software but how you use it; this is a big part of the cloud providers’ business.

To ensure reliability, there are three key aspects of operations to consider:

• Running the service: This involves hosting configuration, deployment procedures, and regular maintenance tasks like security patches, backups, and more.
• Incident prevention: Monitoring systems in real time to quickly detect and resolve issues, regularly testing the system for performance and reliability, capacity planning, etc.
• Incident response: Having clear incident response protocols that define the roles and responsibilities of team members during an incident, as well as effective review, communication, and follow-up mechanisms to address issues and prevent similar incidents, or minimize their impact, in the future.

The incident response aspect is particularly crucial, as it serves as a reality check: an incident means that all the measures taken were insufficient. It’s a moment of being humble and realizing that the world is much more complex than we thought, and we need to be as honest as possible to identify all the engineering and procedural weaknesses that enabled the incident and see what we could do better in the future.

To make incident retrospectives effective, consider incorporating the following practices:

• Assume the reader doesn’t have prior knowledge of your service. You write a retrospective first of all to share knowledge, so write clearly so that others can understand.
• Define the impact of the incident. This helps to calibrate the amount of effort to invest in the follow-up measures. Only relatively severe incidents require a deep process; do not normalize retrospectives by having them for every minor issue that doesn’t have the potential for lasting impact.
• Don’t stop at comfortable answers. Dig deeper without worrying about personal egos. The goal is to improve processes, not to blame individuals or feel guilt.
• Prioritize action items that would have prevented or greatly reduced the severity of the incident. Aim to have as few action items as possible, each with critical priority.

In terms of not stopping at the “comfortable answers,” it’s important to identify and address underlying root causes for long-term reliability. Here are a few examples of surface-level issues that can cause service disruptions:

• Human error while pushing configuration.
• An unreliable upstream dependency causing unresponsiveness.
• A traffic spike leading to temporary unavailability of our service.

It can be difficult to come up with action items to improve reliability in the long term based on these diagnoses. On the other hand, deeper underlying root causes may sound like:

• Our system allowed the deployment of an invalid configuration to the whole fleet without safety checks.
• Our service didn’t handle upstream unavailability and amplified the outage.
• Our service didn’t protect itself from excessive traffic.

Addressing underlying root causes can be more challenging, but it is essential for achieving long-term reliability. This is just a brief overview of what we should strive for in terms of operations; there is much more to explore and consider, from incident response protocols to capacity planning, with many nuances and best practices to be aware of.

The Human Factor in System Reliability

While procedures and mechanisms play a vital role in ensuring system reliability, it is ultimately the humans who bring them to life. So, it’s not just about having the right tools but also about cultivating the right mindset to breathe life into those mechanisms and make them work effectively. Here are some of the key qualities and habits that contribute to maintaining reliability (and not only that):

• Collaboration with other teams and organizations in order to share knowledge and work towards a common goal.
• A degree of humility and an open-minded approach to new information in order to adapt and evolve the system.
• A focus on simplicity and craftsmanship in order to create evolvable and maintainable systems.
• An action-driven and outcome-focused mindset, avoiding stagnation and indecision.
• A curious and experimental approach akin to that of a child, constantly seeking to understand how the system works and finding ways to improve it.

Conclusion

Ensuring reliability in a system is a comprehensive effort that involves figuring out the right metrics, designing with resilience in mind, and implementing reliability testing and operations. With a focus on availability, functionality, and serving the most important needs, organizations can better translate customer value and adjust risks and priorities accordingly. Building and maintaining a system that can handle even the toughest conditions not only helps drive business success and please customers but also brings a sense of accomplishment to those who work on it.

Reliability is a continuous journey that requires attention, skill, and discipline. By following best practices, continuously challenging the system, and fostering a resilient mindset, teams and organizations can create robust and reliable systems that can withstand whatever challenges come their way.
Self-Management

By Stefan Wolpers
Is self-management an essential building block on an organization’s path to business agility or a nice-to-have cultural twist to, for example, keep teams happy and attract new talent? While many people, particularly at the management level, are skeptical about the concept, I am convinced that organizations in a complex environment need to descale and regroup around aligned, autonomous, self-managing teams. Ultimately, only the people closest to the customers’ problems can solve those problems within the given constraints while contributing to an organization’s sustainability. Please continue reading and delve into the reasons that support self-management.

The Top Ten Business Reasons To Embrace Self-Management

Here are my top ten reasons why self-management is essential for developing new products in complex environments and addressing customer needs:

1. Increased innovation: Self-management fosters a culture of creativity and experimentation. Team members are empowered to take risks, try new ideas, and learn from failures, leading to more innovative solutions for customers.
2. Greater adaptability: In a complex environment, change is inevitable. Self-managed teams are more agile and can adapt to new situations, pivot their approach, and respond to customer needs more effectively than traditional hierarchical teams.
3. Improved communication: Self-management promotes open and transparent communication within the team. Transparency ensures that information is shared effectively, leading to better collaboration and problem-solving.
4. Empowerment and autonomy: Self-management empowers individuals and teams to make decisions and take responsibility for their work. This autonomy leads to higher job satisfaction and increased commitment to the organization’s goals. Moreover, it attracts talent from other organizations.
5. Faster decision-making: Self-managed teams can make decisions quickly without waiting for approval from multiple levels of management, accelerating the development process and enabling more immediate responses to customer needs.
6. Better problem-solving: Self-managed teams work close to customers and have a deeper understanding of their needs. This proximity enables them to identify and address problems more effectively than a management-driven approach.
7. Resilience and risk mitigation: Self-managed teams are better equipped to identify and address potential risks early in development. This proactive approach to risk management helps build resilience and ensures more predictable outcomes.
8. Continuous improvement: Self-managed teams focus on continuous learning, improving, refining processes, and iterating on products within the given constraints of the organization. This commitment to constant improvement ensures that products evolve to meet customer needs.
9. Higher engagement: When team members own their work, they are more engaged and motivated. This ownership leads to increased productivity, better quality work, and a more substantial commitment to meeting customer needs.
10. More efficient use of resources: By allowing team members to allocate their own time and prioritize tasks, self-managed teams can use resources better, improving productivity and reducing waste.

Now that we have established the usefulness of self-management from a business perspective, the question is: how do we get there? (Spoiler alert: Your teams won’t become self-managing by contracting McBoston to roll out a new initiative.)
Why the Change to Self-Management Cannot Be Outsourced

While external consultancies may support your organization’s effort to become an agile organization due to their broad experience with other clients, real change can only come from within an organization. Any change effort needs to include people, give them a voice, and convince them that change is in their best interest: “Agile” cannot be pushed; it needs to be pulled. Consequently, avoid relying on external consultants. Instead, to foster self-management within the organization, consider the following suggestions:

• Redefine leadership roles: Shift the focus of management from controlling and directing to supporting and enabling teams. Managers should help remove obstacles and provide resources for self-managed teams to thrive; they need to move on from being problem-solvers on behalf of their teams to becoming servant-leaders who strive to make their teams successful.
• Internal agile champions: Identify and empower individuals with experience or interest in agile practices. These internal champions can advocate for and drive the adoption of self-management practices across teams.
• Agile training and education: Invest in training and education for employees at all levels, including workshops, online courses, or even certifications, to help them better understand and apply agile principles and self-management practices.
• Coaching and mentoring: Encourage experienced agile practitioners to coach and mentor others, helping to create a culture of learning and sharing and fostering the growth of self-managed teams.
• Foster a culture of trust and transparency: Encourage open communication and collaboration across all levels of the organization. Transparency will build trust among team members and empower them to take more ownership of their work.
• Regularly inspect and adapt: Conduct periodic Retrospectives and assessments to gauge the progress of self-management adoption. Use the insights gathered to inspect and adapt the approach, ensuring it aligns with the organization’s unique needs and culture.
• Incremental adoption: Start small by implementing self-management practices in a few pilot teams. Learn from their experiences and gradually expand self-management adoption to other teams as they become comfortable with the new approach.
• Encourage cross-functional teams: Form cross-functional teams that bring together individuals with diverse skills and backgrounds. This encourages collaboration and knowledge-sharing and fosters self-management.
• Provide the necessary tools: Equip teams with the tools and resources to collaborate, plan, and track their work effectively. This could include agile project management tools, communication platforms, and continuous integration and deployment systems.
• Celebrate successes and learn from failures: Recognize and celebrate the accomplishments of self-managed teams. At the same time, encourage a culture of learning from mistakes and iterating on processes to improve continually.

By focusing on these strategies, an organization can foster self-management among its teams and embark on its journey to become agile.

Conclusion

Self-management is essential for developing new products in complex environments and addressing customer needs. By embracing self-management, organizations can foster innovation, adaptability, and a stronger customer focus, ultimately leading to better products and satisfied customers.
Moreover, adopting self-management also offers tangible benefits to shareholders by increasing efficiency, promoting innovation, and enhancing adaptability, ultimately driving growth and success for the organization.

What is your experience with self-managing teams? Please share with us in the comments.
Agile Self-Organizing Teams
By Krishna P
Agile vs. Scrum
By Deepali Chadokar
Workshop Design With ChatGPT
By Stefan Wolpers
The Holy Grail of Agile-DevOps Value Stream Hunting: Actualizing DevOps Transition Purpose

Value streams have been a central tenet of Lean thinking for decades, starting with Toyota and the Lean Manufacturing movement, and are now widely adopted across industries. Despite this, many businesses have yet to harness the full potential of value streams to drive organizational change and achieve greater efficiency and effectiveness. Instead, they may focus narrowly on metrics like team velocity or production pipeline speed, missing the broader picture of the end-to-end system.

In modern product development, understanding value streams is crucial to optimizing our ways of working and delivering value to customers. By mapping the path to value, we can gain visibility into our processes and identify improvement areas, such as code deployment bottlenecks or mismatches between personnel and roles.

In this blog, we will explore the concept of value stream mapping and its role in actualizing the purpose of DevOps transitions. We’ll debunk common myths and misunderstandings around value stream mapping and introduce principles to help you succeed in this activity and beyond. Whether you’re a seasoned DevOps practitioner or just starting on your journey, you will want to take advantage of this opportunity to unlock the holy grail of Agile-DevOps value stream hunting.

What Is Value Streaming, and Why Is the Path to Value Streaming Quintessential for Your Agile-DevOps Journey?

Value stream mapping is the process of analyzing and improving the flow of value to customers by mapping out the end-to-end process, from idea to delivery. Understanding value streams and mapping the path to value streaming is essential for any Agile-DevOps journey.

Consider a software development team struggling to deliver value to customers efficiently. In real-life scenarios, they may focus on completing tasks and meeting deadlines without taking a holistic view of the entire process. Through value stream mapping, they can identify bottlenecks in the development process, such as long wait times for testing or approval processes, and adjust and streamline the flow of value to customers.

Value stream mapping is quintessential to an Agile-DevOps journey because it helps teams understand how their work fits into the larger picture of delivering value to customers. By mapping out the entire process, teams can see where delays occur, where handoffs are inefficient, and where there is room for improvement.

Consider a DevOps team struggling to smoothly integrate code changes into the production environment. Through value stream mapping, they may discover that their testing process is more time-consuming than it needs to be or that there are too many manual steps in the deployment process. By identifying these inefficiencies, they can automate testing and deployment, leading to faster value delivery to customers.

By taking a holistic view of the entire process, teams can identify inefficiencies, reduce waste, and deliver customer value more efficiently and effectively. Value stream mapping helps organizations identify and eliminate inefficiencies in their processes, leading to the faster, more efficient delivery of value to customers. Following are some more examples:

A financial services company wants to improve the time it takes to process customer loan applications. Through value stream mapping, they discover that there are long wait times between different departments and multiple handoffs that slow down the process.
By identifying these inefficiencies, they can redesign the operation to eliminate unnecessary steps and reduce wait times, resulting in faster loan processing and improved customer satisfaction.

A healthcare organization wants to improve patient care by reducing the time it takes for lab results to be processed and returned to the doctor. Through value stream mapping, they discover that there are too many manual steps in the lab testing process and bottlenecks in the information flow between departments. By redesigning the process to automate testing and improve communication, they can reduce the time it takes to process lab results, leading to faster patient diagnosis and treatment.

A software development company wants to improve the quality of its code releases. Through value stream mapping, they discover that multiple handoffs between development, testing, and operations teams lead to errors and delays. By redesigning the process to automate testing and improve communication between teams, they can reduce the time it takes to identify and fix bugs, resulting in higher-quality code releases and happier customers.

Embarking on a Lightweight Quest to Value Stream Mapping for Agile-DevOps Teams

A lightweight approach to value stream mapping can help Agile-DevOps teams streamline their processes, improve efficiency, and deliver value to their customers more quickly. By avoiding unnecessary complexity and focusing on the most critical areas of the process, teams can achieve success and stay competitive in today’s fast-paced business environment.

A lightweight approach means using simple tools and methods to map out your processes instead of getting bogged down in complex and time-consuming activities. This approach can be particularly beneficial for Agile-DevOps teams, which are often focused on delivering value quickly and efficiently. By taking a lightweight approach, teams can focus on identifying the most critical areas of the process that need improvement and act quickly to address them. A lightweight approach also allows for greater flexibility and agility, which is essential in the fast-paced world of Agile-DevOps. Teams can quickly adapt and adjust their value stream mapping activities as needed to stay aligned with their goals and objectives.
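In that lightweight spirit, a first value stream map can be nothing more than a list of stages with hands-on time and wait time. The short sketch below (Python, with invented stage names and hours, not benchmarks) computes lead time and flow efficiency and points at the biggest queue:

```python
# Hypothetical stages of a delivery value stream:
# (stage name, touch_hours of value-adding work, wait_hours in queue)
stages = [
    ("Refine requirements", 4, 40),
    ("Develop", 16, 8),
    ("Code review", 2, 24),
    ("Test", 6, 48),
    ("Deploy", 1, 16),
]

touch = sum(t for _, t, _ in stages)
wait = sum(w for _, _, w in stages)
lead_time = touch + wait

print(f"lead time: {lead_time} h, value-adding work: {touch} h")
print(f"flow efficiency: {touch / lead_time:.0%}")

# The stage with the longest queue is usually the first improvement target.
worst = max(stages, key=lambda stage: stage[2])
print(f"biggest queue: '{worst[0]}' waits {worst[2]} h")
```

Low flow efficiency with one dominant wait is exactly the kind of signal that tells a team where to look first.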
Busting the Myths and Misconceptions: The Truth About Value Streams and Value Stream Mapping

Some common myths and misconceptions about value streams and value stream mapping include the idea that they are only relevant to manufacturing or physical products, that they are too complex and time-consuming to implement, or that they are only helpful for large organizations. The truth is that value streams and value stream mapping can be applied to any industry or process, regardless of its size or complexity. They provide a holistic view of the end-to-end process, allowing teams to identify and address bottlenecks, reduce waste, and improve efficiency.

Another misconception is that value stream mapping is a one-time activity; in reality, it should be an ongoing process that evolves with the organization’s needs and goals. It’s also not necessary to completely understand all the processes upfront. It’s perfectly acceptable to start with a smaller scope and build on that as needed.

By busting these myths and misconceptions, teams can better understand the actual value of value stream mapping and how it can be a valuable tool in their Agile-DevOps journey. They can avoid unnecessary complexity and focus on the critical areas of the process that need improvement. Ultimately, this will lead to a more efficient and effective operation and better customer value delivery.

Unlocking Business Excellence: Maximize the Benefits of Agile-DevOps Value Stream Mapping Using 8 Lean Principles

If you want to take your Agile-DevOps team to the next level, then unlocking business excellence with Agile-DevOps value stream mapping and eight Lean principles is the way to go. Value stream mapping (VSM) is a Lean tool that visually represents the process steps required to deliver value to customers. The VSM process identifies bottlenecks, waste, and opportunities for improvement in the value stream. In addition, it helps Agile-DevOps teams focus on value-added activities and eliminate non-value-added activities, resulting in reduced lead time, improved quality, and increased customer satisfaction.

To maximize the benefits of VSM, Agile-DevOps teams should follow eight Lean principles:

1. Define value from the customer’s perspective: Identify what your customers consider valuable and focus your efforts on delivering that value.
2. Map the value stream: Create a visual representation of the entire value stream, from idea to delivery, to identify inefficiencies and opportunities for improvement.
3. Create flow: Eliminate waste and create a smooth workflow through the value stream to improve delivery time.
4. Implement pull: Use customer demand to drive work and avoid overproduction.
5. Seek perfection: Continuously improve the value stream to eliminate waste and improve efficiency.
6. Empower the team: Provide your Agile-DevOps team with the tools, resources, and authority they need to succeed.
7. Practice Lean leadership: Create a culture of continuous improvement and empower your team to drive change.
8. Respect people: Treat your team members respectfully and create a positive work environment that encourages collaboration and innovation.

By implementing these eight Lean principles, Agile-DevOps teams can unlock business excellence and deliver superior customer value.

Deploying the Power of Principles: Succeeding in Value Stream Mapping in a Lightweight Way and the Horizons Beyond

By embracing a lightweight approach and deploying the power of Lean principles, organizations can succeed in value stream mapping and achieve business excellence. The lightweight approach enables organizations to identify areas that need improvement, break down silos, and facilitate collaboration across teams, thus unlocking the true potential of value stream mapping. It also helps organizations sustain their efforts and continue to make improvements in the long run. By embracing the eight Lean principles, organizations can achieve business excellence by continuously improving their value stream and delivering value to their customers. These principles include creating customer value, mapping the value stream, establishing flow, implementing pull, seeking perfection, embracing scientific thinking, empowering teams, and respecting people.

So, if you’re looking to unlock the true potential of your Agile-DevOps transition, take advantage of value stream mapping. Don’t wait; take the first step towards success, start your value stream mapping (VSM) journey today, and take your Agile-DevOps team to the next level!

By Priya Kumari
What Will Come After Agile?

I think that probably most development teams describe themselves as being “agile,” and probably most development teams have standups and meetings called retrospectives. There is also a lot of discussion about “agile,” much written about “agile,” and there are many presentations about “agile.” A question that is often asked is: what comes after “agile”? Many testers work in “agile” teams, so this question matters to us.

Before we can consider what comes after agile, we need to consider what agile is — an iterative, incremental development methodology. Agile teams develop software in iterations, and each iteration makes an increment toward the team’s goal. An agile team can decide, after an iteration or two, that the goal they are working towards should be changed and start to work on a new goal. Working iteratively makes the team agile, as it can change direction quickly and easily. There are several agile methodologies, and one of the most widely used is scrum.

What Is Agile?

When thinking about how to define what agile is, we tend to be drawn to the Agile Manifesto, which was created in 2001, but there were agile ways of working before the “Agile Manifesto.” The earliest iterative and incremental development I found was at Bell Telephone Labs in the 1930s. Walter Shewhart was an engineer at Bell Telephone Labs. In his lectures in the 1930s, he introduced the concept of a straight-line, three-step scientific process of specification, production, and inspection. He went on to revise this idea into a cycle. The creation of this cycle has been described as part of the evolution of the scientific method, and it became known as the Shewhart Cycle.

The cycle is sometimes known as the Plan-Do-Study-Act Cycle. A team using the Shewhart Cycle will Plan a change or test. The team will then Do, which means to carry out the change or test. Then the team will Study the results of the change or test to consider what they have learned before Acting on what they have learned. The team will then repeat the cycle and move onward.

W. Edwards Deming said that the cycle is a helpful procedure to follow for the improvement of anything in the production stage. He also said that at the end of a cycle the team might tear up the work that they had done previously and start again with fresh ideas, and that doing this was “a sign of advancement.” Deming said the reason to study was to “try to learn how to improve tomorrow’s product.” Sometimes the Deming Cycle is referred to as Plan-Do-Check-Act. Deming did not like replacing the word study with the word check, as studying is an important part of the cycle. He felt that the word check was inaccurate because it meant to “hold back.”

The Shewhart Cycle was included by Deming in his lectures to senior management in Japan in 1950, and the cycle went into use in Japan as the Deming Cycle.

What Is the Deming Cycle?

The Deming Cycle has also been described as the Deming Wheel, as it just rolls on without a beginning or an end. All four parts of the Deming Cycle can be drawn inside a circle, which shows that the four parts are related to one another and that there is no hierarchy between them.

Scrum is one of the most widely used agile methodologies, and Jeff Sutherland, one of the co-creators of scrum, has written that the Deming Cycle is how scrum development is done.
He also says that retrospectives are the “check” part of the “Plan-Do-Check-Act cycle” and that it is important to get the team to change and improve their process by moving on to the act part of the cycle. It is useful for software testers that retrospectives were designed to be used in this way, as we want to help the teams we work with to improve quality. Testers can use retrospectives to raise issues that help to improve quality. Sutherland says that he trains people to use Scrum by asking them to use the Deming Cycle to build paper airplanes, and that by the third iteration they are making much better paper airplanes.

The Deming Cycle is the heart of agile, as it is a cycle that enables teams to change and improve quickly. The cycle enables change to be made at each iteration. However, is this how agile is understood? Do we sometimes work in teams that describe themselves as “agile” but do not use the Deming Cycle? Is “agile” sometimes described through its ceremonies rather than through using the cycle? Are teams using “agile” for continual improvement as Deming and Sutherland recommended?

New ideas, such as Jobs to be Done, continue to be influenced by the Deming Cycle. Alan Klement describes the system of progress in Jobs to be Done as a cycle and says that his cycle is not an original idea, as it comes from the Deming Cycle. Lean has also been influenced by the Deming Cycle. Lean is an American description of Japanese production systems and comes from a study by MIT in which the Toyota Production System was of special interest. Deming worked in Japan after World War Two, where he helped to rebuild the Japanese economy. Jeffrey K. Liker says that “the Deming cycle embodies the learning cycle in the Toyota Production System.”

Teams and testers can develop their understanding of the cycle by reading the books in the references below, by using the resources of the Deming Institute, and by using the Deming Cycle itself. Teams can learn to use the cycle by planning an initiative, then carrying out the planned work or test, then studying the result of their work, and then acting on what they learned before repeating the cycle. Testers can help their teams gain an understanding of the Deming Cycle by using plan-do-study-act for testing: we plan the test, for example by writing a testing charter, then perform the software testing, then study the result of the test, and then act on the result.

Teams should not be put off by the Deming Cycle creating a new structure for their team. The Deming Cycle creates a new structure because a team using it must plan first, then do the work or test that they have planned, then study the effect of the work or test, and then act on what the team has learned. Using the Deming Cycle can sound demanding, as it places a new structure on the team. However, all teams have structures that place constraints on them. If a team always has its planning meeting on a certain day of the week, this practice places a constraint on the team. How often a team releases its work also puts a constraint on the team. If a team releases once a month, then that monthly release will force the team to work towards that release. If a team releases many times a day with continuous delivery, then that will create a different constraint for the team.
All teams want to improve how they work and improve their product, and they will find that using the Deming Cycle will help them to improve their processes and product.

Undoubtedly, there will be something after “agile.” It will have a new name, and I guess it will have to have new “ceremonies.” However, will the Deming Cycle be replaced by what replaces agile? The Deming Cycle is a profound philosophical insight that has been used by engineering teams to improve quality for nearly one hundred years and is continuing to influence new ideas. It seems unlikely that the Deming Cycle will be replaced by what comes after agile because it was so innovative, so useful, and is still being used after so many years.

It would be great if the new way of working that comes after agile created a deeper understanding of the Deming Cycle, as this would help teams to learn, improve how they work, and improve the products they make.

By Mike Harris
How Agile Architecture Spikes Are Used in Shift-Left BDD

An architecture spike in agile usually implies a software development method that originates in the extreme programming offshoot of agile. It boils down to determining how much effort is needed to solve a particular problem or discover a workaround for an existing software issue. So, let us explore the benefits and see how these spikes can help improve quality and make testing easier — by shifting our attention to the left, challenging the specification at a very early phase, asking questions, and getting the ground ready for sensible software architecture, which will, in turn, improve the testability of our application under test.

More Details About Spikes

There are many benefits to spiking — getting to know the unknown unknowns, discovering risks, reducing complexity, and providing proof for proving or disproving a theory. Taking a deep dive into the idea behind the solution can help us better understand the potential architectural solutions and the likelihood that they will work.

A spike is not there to provide a finished working product or even an MVP. Its purpose is mainly to test a theory, so even though this concept is used (in the long run) to produce working software, the code written for spikes is often discarded after it has served its purpose. Spiking is usually done by ignoring architecture styles (which might seem odd at first, as spiking can help discover the right architectural approaches for the system we are building), coding styles, design patterns, and general clean coding practices in favor of speed. Even though the spike may not directly produce software that will be delivered to the customer, it still helps us ship better code in the end.

Spiking is a good tool for handling risks by discovering unknown risks, and it provides a great way to learn and reduce complexity. A very common approach is to come up with spikes around a theory and follow the code with a small number of tests. Even though spikes are seen as discardable code, we don’t just throw them aside. While they don’t end up in the actual code that gets delivered, they provide insights and can serve as documentation to show how a solution was reached.

A Simple Example

Let us assume that we have a new feature to develop: we need to allow users to save a photo in their profile. To do that, a developer can make a spike where the following could be done:

• Have the JavaScript on the frontend communicate with the database:
  • Set up a database server locally.
  • Set up Node.js (or another server).
  • Use the ODBC (Open Database Connectivity) API to connect to the DB.
• Test the spike:
  • Run a few sample queries.
  • Test the CRUD functionality.

What is mentioned in this simple example is all we need for a spike; it does not require any detailed documentation. The developer working on a spike will need to do some Googling, run a few commands from the terminal, and write a few lines of code for each theory. The spike provides a possible direction for solving the challenge at hand; it can also include links to the resources used, install scripts, and the produced code to be used as a blueprint. Trying things out is way more beneficial than simply theorizing about them. The team was able to reduce the risk related to this feature — in this example, especially on the technical integration side — and even discovered new risks, such as accessing the DB using local JS!
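To give a feel for what spike code looks like, here is a hedged sketch of the photo-saving theory above, compressed into a few lines of Python. sqlite3 stands in for the real database-plus-ODBC setup described in the example; everything is hard-coded, and none of it is meant to survive. It exists only to prove or disprove the round-trip.

```python
# Throwaway spike: can we round-trip a profile photo through the database?
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE profiles (user_id INTEGER PRIMARY KEY, photo BLOB)")

fake_photo = b"\x89PNG...not-really-a-png"  # any bytes will do for the theory

# Create / Read / Update / Delete: just enough to prove the path works.
db.execute("INSERT INTO profiles VALUES (?, ?)", (1, fake_photo))
row = db.execute("SELECT photo FROM profiles WHERE user_id = 1").fetchone()
assert row[0] == fake_photo
db.execute("UPDATE profiles SET photo = ? WHERE user_id = 1", (b"new bytes",))
db.execute("DELETE FROM profiles WHERE user_id = 1")
assert db.execute("SELECT COUNT(*) FROM profiles").fetchone()[0] == 0
print("theory holds: photo CRUD round-trips")
```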
How Does This Impact Testing?

Exploring spikes helps us identify the unknown unknowns, so, in a sense, spikes are a tool for early testing (used often when shifting testing to the left). By getting answers about what works and what will not work, we avoid many potential issues and delays by probing the requirements to distill them further. In turn, there are fewer bugs to report, fix, verify, and keep track of. Also, the earlier the testing is done, the faster and more economical it will be.

Can QA Use Spikes?

There is no real reason not to. I have seen testers use spikes to try out and experiment with different approaches to automating some parts of the system under test to determine the best approach. An architecture spike can help us try out different testing tools, such as new frameworks and libraries, and give us first-hand experience of how a tool would behave with our system (when we try to automate some business rule, for example). Spikes are generally regarded as technical tasks (different from user stories), usually under an epic that is in the early development stages.

Conclusion

Spikes in agile are one of the tools that allow us to do what agile is intended to do in the first place: short, quick feedback cycles that give us answers early in the development process. We focus on doing and trying instead of long, overly detailed planning. That is not to say that code architecture does not matter in agile (as we know, in waterfall, architecture is very important and usually fixed in the design phase); in agile, we simply use a different approach. Agile practices, such as spikes, allow us to get an idea about architectural solutions that may work and information about the ones that may not work. Software produced in the above-mentioned manner helps us reduce risk in our user stories and enables the team to discover the right solutions using collaboration, constructive discussion, frequent experimentation, and compromise.

In an informal sense, a lot of people happen to be using spikes without even realizing it! As long as you are trying to identify the unknown unknowns, have short feedback cycles, and try to determine technical and functional risks, you are doing agile. Spikes will help us in situations where we are not certain about the requirements and where there are a lot of unknowns and questions that need answers.

By Mirza Sisic
How To Use Epics in Jira

Effective project management is essential in software development. Jira’s Epics provide a powerful solution for managing complex software projects, and understanding how to use them can make all the difference. In this article, we’ll dive deep into the technical aspects of utilizing Epics in the Jira workflow, exploring the advanced features and Jira best practices that can streamline your software development process. So, let’s delve into the intricacies of agile project management in software development and take your workflow to the next level.

Example of an Epic

In essence, an Epic is a collection of user stories that groups them together into one valuable item. This helps the team, Project Managers, and Stakeholders see the bigger picture and the overall strategy of the project without needing to dive into the details.

Let’s say you run a general goods store. There are dozens of daily activities you need to keep tabs on: keeping the inventory in check, making sure new products are labeled properly on the shelves, cleaning the aisles, etc. These are your “stories.” Putting all of them into one gargantuan to-do list doesn’t make a lot of sense, as you’ll end up with a lot of checklist items of various value and complexity. This will make prioritization, and even simple analysis of the workload, way more challenging than it needs to be. However, if you group them together using a common denominator, you’ll have a much more manageable list. In this case, related tasks like an inventory check, ordering a new delivery, and paying the vendor will become an Epic for your procurement process.

You can then use the newly created Epics to prioritize your work based on the following factors:

• The impact an Epic has on your business.
• The effort, or the amount of work you’ll need to complete the tasks.
• The risks associated with the Epic: for example, the risk of not having full information, the risk of being a bad market fit, or the risk of not meeting regulations.
• Any dependencies on other tasks or Epics.

An example that’s closer to software development could involve adding Augmented Reality functionality to the mobile app of a beauty product vendor. You’ll need the involvement of multiple people working on various tasks to accomplish your goal, so it would make sense to group those tasks together into an Epic.

What’s the Right Size for an Epic?

The appropriate size for an Epic is somewhat of a hot topic in the development community. You won’t find the “correct” answer in the Scrum Guide or in any other reference material. I’ve heard arguments that one Epic should be doable in one Sprint, and I’ve also heard that Epics can take as long as several years. In both cases, the justification was something along the lines of, “Hey, it works for our team; therefore, it is a good practice.”

In my experience, several years is a very long time to complete an Epic. This timeframe is closer to themes and initiatives in terms of volume and complexity. A single-Sprint time frame doesn’t make too much sense to me, either. This is enough time to make a feature, and a feature is indeed composed of multiple stories, but I don’t believe Epics should be feature-sized. I’m more inclined to agree with Mr. Chris Belknap and his estimation of how big an Epic should be:

• A theme is a multi-year endeavor.
• Epics take from six months to a year.
• Features take around three months.
• Stories take one Sprint or less.
That said, at the end of the day, the “correct” size of an Epic is whatever works for you and your organization. The key is making sure that everyone, including the stakeholders, managers, and team members, understands what an Epic is in your case.

Difference Between Epics, User Stories, Themes, and Initiatives

By default, Jira has three elements to its hierarchy: Epics, Issues (Stories), and Subtasks. You can add additional layers to this hierarchy with Themes and Initiatives; the latter is only available in Jira Premium or via third-party add-ons.

Initiatives

Initiatives are like epics for other epics, providing a strategic bird’s-eye view for overseeing multiple company projects. For instance, an initiative could aim to boost a certain project’s market share in Spanish-speaking countries, such as an e-commerce store. This initiative would then be broken down into smaller projects like establishing connections with new vendors, investing more in advertising and marketing, or developing a mobile app for the store.

Themes

Themes are labels used to group work by categories in Jira. For example, a theme might involve adding a Spanish localization to reach a new market.

Epics

Epics serve as a project management tool to break down complex work, prioritize tasks, plan increments, and track progress among one or more teams working on the project. Epics are commonly used by developers to define project features. For instance, when creating an e-commerce website, account management, shopping cart functionality, and PayPal integration could be considered Epics. Each task required to launch the website becomes an issue or story associated with these Epics. It’s important to note that an Epic does not necessarily represent the entire project, as large projects are often divided into multiple Epics. For instance, filling website pages with content or improving SEO could be separate Epics in the e-commerce website example.

Issues

Issues refer to different types of tasks, like Bugs or Stories, which form smaller features within an Epic. In this regard, a feature is the smallest element of your product that the customer can use or interact with to gain value. For instance, a customer logging into your product is a feature that can have multiple stories: log in with email, log in with social media, etc., whereas the login, password reset, and deleting an account would together form an “Account Management” Epic.

Pro tip: You can use a Jira Checklist to refine your requirements through a more detailed decomposition of tasks. Checklists can also be helpful when you have some research items, as you will have a list that specifies the things you want to check.

Subtasks

Subtasks are a way to segment work and further break down issues in Jira. How do you use Jira subtasks? If there’s a story for deleting an account, subtasks could include creating a confirmation screen, sending a verification email, performing background clean-up according to GDPR rules, and ensuring adequate test coverage.

How Are Epics Used in Agile?

The Agile approach to project management encourages teams to break down their work into manageable segments or Sprints. This approach helps with prioritization and makes sure developers are lean enough to incorporate feedback and respond to change. Epics are the element of the hierarchy that’s designed to organize the smallest pieces of work and track progress at a somewhat larger scale. Their role in Agile can be broken down into three parts:

• Break down the project to make the work more manageable.
Align the stakeholders with the product and your vision. Highlight dependencies. These goals are achieved when Epics are visualized on a roadmap. This roadmap is similar to a Gantt chart but simpler. It offers a high-level overview of the scope by showing you the Epics (1), associated stories (2), and the time frame. You can also clearly see dependencies (3) on the roadmap. Check out this guide that’s dedicated to the Jira roadmap to learn more. Jira supports management and planning with a wide selection of accessible reports. The three that will help you manage work and Epics are: 1. Burnup Report This report shows the actual number of story points that were burned during a Sprint versus the scope of work, i.e., the total number of story points that must be completed. 2. Velocity Report The velocity report compares your commitment, the amount of work that was present when the Sprint began, versus the work that has been completed. 3. Cumulative Flow Diagram The cumulative flow diagram shows you all of the tickets broken down by workflow status. You can use it to spot potential bottlenecks in your processes; for example, if too many tickets are piling up in the ready-for-QA status, perhaps you need more quality assurance engineers. How To Create a New Epic Creating a new Epic in Jira is actually quite easy. You need to open the Roadmaps interface in your Jira and click the Create Epic button. You can then add new stories to your epic by clicking on the “+” button and using the drag-and-drop interface to adjust the time frame. (If you prefer to script this step, see the API sketch at the end of this article.) How To Structure an Epic The tricky part is that most of the project planning work needs to be done before you even think about opening Jira. Our team uses the BRIDGeS framework for multi-context analysis. This planning stage can take some time, and it will require involvement from the whole team, but the entire process can be broken down into four major steps: 1. Problem Description Conduct a brainstorming session to identify key subjects and their descriptors. Ensure that all team members have a thorough understanding of the context before moving forward. 2. Prioritization Exercise Prioritize benefits, risks, and issues to focus on the most important tasks during the Solution Stage. This prevents wasting time on less critical tasks. 3. Solution Variations Generate potential high-level solutions that address the benefits, risks, and issues identified in the Problem Space. 4. Solution Breakdown Break down the high-level solution into smaller, manageable bodies of work. These become your list of Epics, already arranged into a roadmap-like implementation plan. Translate this plan into Jira. You can learn about the framework in more detail in this guide to project management in Jira. Tips and Best Practices for Jira Epics Lastly, here are some handy tips for working with Epics and Roadmaps in Jira. Build Epics around OKRs or Objectives and Key Results. This way, the team will know what needs to be done and why. Define the completion of an Epic as well as the Definition of Done and Acceptance Criteria beforehand. You can also use checklist templates to help with automating this process. Don’t make your Epics too big. There is always the temptation of adding new tasks to an Epic simply because it’s already there. Close your Epics when they are actually done rather than when all of the tasks are closed.
It is OK to leave open issues in an otherwise completed Epic if they’ve lost their relevance or have been deprioritized. Revise the roadmap regularly. My team revisits the roadmap weekly, which helps us communicate and visualize blockers in a timely and efficient manner. Be realistic with your scope. Having more tasks than your team can realistically deliver will cause rapid burnout and make the working process much harder than it needs to be, simply because of the constant task juggling. Clearly outline your dependencies. This will make prioritization much simpler, and the bigger picture will be more visible to the stakeholders. Take a close look at issues without Epics during your backlog grooming sessions.
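If your team bootstraps many similar projects, the Create Epic step mentioned above can also be scripted against the Jira REST API. The following is a minimal sketch, assuming a Jira Cloud site, an API token, and the Python requests library; the URL, credentials, and project key are placeholders, and company-managed projects may additionally require a custom “Epic Name” field that is not shown here.

```python
# Hypothetical sketch: create an Epic through the Jira Cloud REST API (v2).
# Assumes a Jira Cloud site, an API token, and `pip install requests`.
# The URL, credentials, and project key below are placeholders.
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder
AUTH = ("you@example.com", "your-api-token")    # email + API token

def create_epic(project_key: str, summary: str) -> str:
    """Create an Epic and return its issue key (e.g., 'SHOP-42')."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": "Epic"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]

if __name__ == "__main__":
    print(create_epic("SHOP", "Shopping cart functionality"))
```

Paired with project-template tooling, a script like this leaves only the roadmap work of dragging stories into place.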

By Oleksandr Siryi
Dependency Poker for Scrum Teams

Dependency Poker is an Agile game — similar to planning poker — that enables teams to identify and manage dependencies in the development process. It can be utilized in Backlog Refinement or SAFe's PI Planning to enhance collaboration and reduce project risks. The Cards To play Dependency Poker, your team must create or obtain cards representing different types of dependencies. These are used in the first step to identify dependencies between software components or features. The second set of cards, strategy cards, is used to find effective solutions for managing dependencies and mitigating risks. 1. Dependency Cards The paper "A Taxonomy of Dependencies in Agile Software Development" identifies a set of dependency categories describing the different types of dependencies that can affect the completion of work in a software development project. These include Requirements, Expertise, Task Allocation, Historical, Activity, Business Process, Entity, and Technical Dependencies. In unFix, Planning and Environment dependencies are added. Self-printed cards from unFix. 2. Dependency Breakers unFix provides a list of 20 Dependency Breakers: Block, Throttle, Prioritize, Visualize, Flag, Measure, Automate, Standardize, Redesign, Decouple, Toggle, Contain, Coordinate, Attend, Reteam, Rotate, Swarm, Volunteer, Formalize, and Self-Service. Dependency Breaker cards. 3. Best Practice Card Sets The cards from unFix are a popular and widely used set covering both the types of dependencies that can exist between software components or features and the strategies for breaking them. The Game To start the game, the team needs to understand the different types of dependencies by explaining the Dependency Poker dependency type cards to each other. The perfect time to play Dependency Poker is during SAFe's PI Planning, Scrum's Backlog Refinement, or during workshops. Placing all cards on a to-be-refined backlog item: The team places all(!) of the Dependency Cards on the highest-ranked item in the backlog. The team can then work together to remove the cards that are not relevant or that have already been resolved, using consensus-based decision-making to ensure that all team members have a shared understanding of the dependencies involved in the development of the backlog item. Clustering the dependencies: In the second step of Dependency Poker, the team clusters dependencies together to understand their interconnections and develop strategies to manage them more effectively. Different team members may have different views on dependencies, and identifying clusters helps surface potential risks and dependencies that may have been overlooked. Finding a solution: After clustering dependencies in Dependency Poker, the team finds and commits to strategies to handle them. Each participant takes their set of strategy cards and throws them on the cluster of dependencies to discuss which strategies are most effective. The goal is to agree on a shared strategy and commit to implementing it. Dot voting can be used to identify the best options, and the team should document these for further analysis. This step ensures that the team has a shared understanding of dependencies and a plan to address them, improving collaboration and reducing project risks. Analyzing the identified dependencies and strategies can provide valuable insights into how to optimize your business agility.
By reviewing the strategies that the team has committed to, you can determine if they are effective and if there are any additional steps that could be taken to improve the management of dependencies. The growing Dependency Poker website provides a wealth of resources, including experience reports and artifacts, to help teams improve their dependency management process. By utilizing these resources and continuously evaluating and improving their approach to handling dependencies, teams can increase their agility, reduce project risks, and deliver successful outcomes.

By Nils Hyoma
ChatGPT on GPT-4

TL;DR: ChatGPT 4: A Bargain for Scrum Practitioners? When OpenAI released its new LLM, GPT-4, last week, I could not resist and signed up for $20 monthly. I wanted to determine whether ChatGPT 4 is superior to its predecessor, which left a good impression in recent months; see my previous articles on Scrum, Agile, and ChatGPT. I decided to run three comparisons, using the identical prompt to trigger answers from the new GPT-4 and previous GPT-3.5 models. Read on and learn what happened. It was not a foregone conclusion. Setting up the Session I ran the session on ChatGPT on Wednesday, March 15, from 5:40 am CET to 6:30 am CET on the models GPT-3.5 and GPT-4 after upgrading to ChatGPT Plus. I did not apply Grammarly to ChatGPT’s answers. The Initial ChatGPT Prompt to Explore the New Model GPT-4 I used the following initial prompt to prime ChatGPT: “Are you using the same training data as GPT-3.0 or 3.5?” ChatGPT-4.0: As a GPT-4 model, I am an updated version of the previous GPT models. While I share some similarities with GPT-3 and GPT-3.5, there are differences in the training data used for each model. The primary difference is the time period of the data. GPT-3 and GPT-3.5 were trained on data available up until around 2020 and 2021, respectively. In my case, as GPT-4, my training data extends until September 2021. This allows me to have a more up-to-date understanding of the world, including newer events and developments. Additionally, improvements in data curation, preprocessing, and filtering techniques have likely been made between versions, leading to differences in the quality and breadth of the training data. While the core data sources might be similar, such as books, articles, and websites, the data sets themselves would not be identical across these models. The Experiments Now that we have a better understanding of the model’s background, let’s run three experiments to figure out which model is superior. I will run three comparisons or A/B tests, using the identical prompt to trigger answers from the new GPT-4 and previous GPT-3.5 models: The first experiment compares the Scrum Guide 2017 to the Scrum Guide 2020. The second experiment designs a Retrospective exercise. The third experiment helps a new Scrum team design a Definition of Done. (A minimal API-based sketch of this A/B setup appears after the conclusion.) Learn more about the new GPT-4 model from OpenAI’s GPT-4 Developer Livestream of March 14, 2023, or the OpenAI GPT-4 Technical Report. Experiment 1: Comparing the Scrum Guide 2017 to the Scrum Guide 2020 Prompt: I want you to act as a Scrum Master. Tell me: what is the difference between the Scrum Guide 2017 and the Scrum Guide 2020? ChatGPT-3.5: As a Scrum Master, I’d be happy to answer your question about the differences between the Scrum Guide 2017 and the Scrum Guide 2020. The Scrum Guide is the official guide to Scrum, and it provides a framework for implementing and improving Scrum practices. The latest version of the Scrum Guide was released in November 2020, and it contains several updates from the previous version released in 2017. Here are some of the key differences between the two versions: Focus on the Scrum team: The Scrum Guide 2020 places more emphasis on the Scrum team, rather than just the Development Team. The Product Owner and Scrum Master are now considered equal members of the Scrum Team. Empirical process control: The Scrum Guide 2020 highlights the importance of empirical process control and the three pillars of transparency, inspection, and adaptation.
Updated language: The language in the Scrum Guide 2020 has been updated to be more inclusive and gender-neutral. Simplified framework: The Scrum Guide 2020 has removed some of the prescriptive language from the previous version to make the framework more flexible and adaptable to different contexts. Refinement instead of grooming: The term “Backlog Refinement” is now used instead of “Backlog Grooming” to emphasize the continuous improvement aspect of the practice. Overall, the changes in the Scrum Guide 2020 are intended to make the framework more accessible and adaptable to a wider range of contexts while still maintaining the core principles and values of Scrum. Comment: These answers are irrelevant, or hallucinations: Scrum Master and Product Owner are now “equal members of the Scrum?” No mention of commitments, namely the Product Goal? Highlighting the importance of empirical process control? That has been at the core of Scrum from its beginnings. The answer is embarrassing. ChatGPT-4.0: As a language model, I can provide you with the key differences between the Scrum Guide 2017 and the Scrum Guide 2020. Here are some of the most notable changes: Less prescriptive and more flexible: The Scrum Guide 2020 is shorter and more focused on the Scrum framework’s core concepts. It has reduced the prescriptive nature of some practices, allowing teams more flexibility in adopting Scrum according to their specific context. Introduction of the Product Goal: The 2020 version introduces the concept of a Product Goal, which is a long-term objective for the product. It provides guidance and direction for the team, and all the Sprint Goals should align with and contribute to the Product Goal. Simplified team structure: The Scrum Guide 2020 eliminates the distinction between development team members and other roles. It now refers to the whole team as the “Scrum Team,” consisting of the Scrum Master, Product Owner, and Developers, all of whom share accountability for delivering a potentially releasable increment each sprint. A greater focus on commitment: The Scrum Guide 2020 emphasizes the importance of commitment for each artifact. The Product Backlog has the Product Goal, the Sprint Backlog has the Sprint Goal, and the Increment has the Definition of Done. These commitments help provide clarity and focus for the Scrum Team. Self-managing instead of self-organizing: The 2020 version refers to Scrum Teams as self-managing, emphasizing their autonomy in making decisions regarding how to achieve the Sprint Goal, and how to create the product Increment. This change highlights the team’s responsibility for organizing and managing their own work, rather than relying on external direction. Removal of the Sprint Retrospective’s three-question format: The 2020 Scrum Guide no longer suggests the specific three-question format (“What went well? What can be improved? What will we commit to improve in the next Sprint?”) for the Sprint Retrospective. Instead, it emphasizes the need to plan improvements and foster a continuous improvement mindset. Updated the Definition of Done: The Scrum Guide 2020 clarifies that the Definition of Done is a formal description of the state of the Increment when it meets the quality measures required for the product. It also emphasizes that the Definition of Done is used to assess when work is complete on the product Increment. These are some of the main differences between the Scrum Guide 2017 and the Scrum Guide 2020. 
The 2020 version aims to provide a more straightforward, flexible, and focused framework for Scrum practitioners, emphasizing the importance of team collaboration, commitment, and continuous improvement. Comment: We have a clear winner! GPT-4’s answer is better and more substantial than the previous model’s. Score: ChatGPT-4 1-0 ChatGPT-3.5 Experiment 2: Designing a Retrospective Exercise Prompt: I want you to act as an experienced Scrum Master. Your Scrum team had a problematic Sprint and missed accomplishing the Sprint Goal. Some team members had unexpected food poisoning. The procurement department failed to renew credit card information for an essential service your team uses, and that service was no longer available. Additionally, the Scrum team ran into unexpected technical debt hidden in the legacy part of the code base. The tension in the team is high; arguments have become more frequent. As a Scrum Master, I want you to use your knowledge of Scrum, iterative and incremental product development, non-violent communication, collaboration, team building, Scrum values, psychology, and Liberating Structures to design a Retrospective that helps the Scrum team to overcome recent setbacks and improve the relationships among each other. ChatGPT-3.5: I’m sorry to hear that your Scrum team had a challenging Sprint. As an experienced Scrum Master, I’m here to help you navigate this difficult situation and design a Retrospective that will help the team overcome recent setbacks and improve relationships among team members. First, it’s essential to acknowledge that setbacks and challenges are a natural part of the Scrum process. Our goal is not to eliminate problems but to learn from them and use them as opportunities for growth and improvement. To start the Retrospective, we need to create a safe and respectful space where team members can express themselves honestly and openly. One Liberating Structure that could help us achieve this is the “1-2-4-All” structure, which involves individuals reflecting silently, sharing with a partner, joining a group of four to discuss their thoughts, and then sharing the group’s ideas with the entire team. We’ll start by asking the team to reflect on the past Sprint and identify what went well and what didn’t go as planned. We’ll then focus on the challenges that arose during the Sprint, including the unexpected technical debt, credit card issues, and team members getting sick. Once we have identified the challenges, we’ll use the Liberating Structure “TRIZ” to explore potential solutions. TRIZ stands for “Theory of Inventive Problem Solving,” and it’s a structured approach to generating creative solutions to complex problems. We’ll ask the team to identify the root cause of each problem and then use TRIZ to generate new ideas for addressing those issues. Next, we’ll use the Liberating Structure “Celebrity Interview” to encourage empathy and understanding among team members. Each team member will take on the role of a “celebrity” who embodies certain values or characteristics, and the team will ask them questions about how they would handle the recent setbacks. Throughout the Retrospective, we’ll emphasize the importance of non-violent communication and collaboration, and we’ll remind the team of the Scrum values of commitment, courage, focus, openness, and respect. 
By the end of the Retrospective, our goal is to have identified concrete steps that the team can take to address the challenges that arose during the previous Sprint and improve their relationships with each other. We’ll create an action plan that outlines these steps and assign owners to each task to ensure accountability and follow-through. Comment: Sounds good, initially, doesn’t it? Too bad the suggestion does not pass the test. Of course, you could use 1-2-4-All to start the Retrospective. However, if every team member shall have their airtime, a structure like “Conversation Café” would be better suited. Instead of TRIZ, I would use “Nine Whys” or “What. So What. Now What?” Finally, the “Celebrity Interview” is useless for the intended purpose. If empathy is your main concern, “Heard, Seen, Respected” would be a better choice. ChatGPT-4.0: As an experienced Scrum Master, I would design a Retrospective that not only addresses the setbacks but also focuses on rebuilding trust, improving communication, and fostering a collaborative environment within the team. The Retrospective will follow these five steps: Set the Stage: Start the session by acknowledging the challenges the team faced during the Sprint and emphasize the importance of learning from setbacks. Encourage everyone to participate in a safe, open, and respectful environment. Use a short icebreaker activity to help the team relax and prepare for the session. Gather Data: Begin by asking the team members to write down on sticky notes what went well, what didn’t go well, and any surprising events during the Sprint. Use a timeline activity to help visualize the events and identify patterns. Ask the team to share their experiences openly, being mindful of non-violent communication principles. Generate Insights: Use the ‘5 Whys’ technique or the ‘Fishbone’ diagram to analyze the root causes of the problems faced. Encourage the team to focus on the process and not on individuals. Emphasize Scrum values, such as courage and openness, while discussing the issues. Facilitate a discussion on the impact of the unexpected technical debt and how to address it moving forward. Also, discuss how to improve communication with the procurement department to avoid future service disruptions. Decide What to Do: Guide the team to collaboratively identify action items to address the root causes of the problems. Prioritize the action items by having the team vote on the most important ones. Encourage the team to come up with both short-term and long-term actions. For example, a short-term action could be to create a checklist for procurement processes, while a long-term action could involve implementing automated tests to uncover hidden technical debt. Close the Retrospective: End the session with a team-building activity that reinforces the Scrum values and fosters collaboration. Ask the team members to share one positive thing about their colleagues or to express gratitude for something a teammate did during the Sprint. This will help to improve relationships and reduce tensions among the team members. Conclude the Retrospective by summarizing the key takeaways and the agreed-upon action items, and reiterating the importance of continuous improvement and collaboration. Follow up with the team in the upcoming Sprints to ensure that the action items are being implemented and to gauge the effectiveness of the improvements. Encourage the team to maintain open communication and support each other as they work towards achieving their goals. 
Follow-up Prompt: Is it necessary to run the Retrospective in this strict, prescriptive manner? ChatGPT-4.0: While the five-step structure I provided is a popular and effective way to conduct Retrospectives, it is not mandatory to follow it in a strict, prescriptive manner. As a Scrum Master, you should be flexible and adapt the format based on your team’s needs, preferences, and specific situation. It is essential to maintain the core principles of a Retrospective, which include creating a safe and open environment for discussion, reflecting on the past Sprint, generating insights, identifying improvements, and fostering a sense of collaboration and continuous improvement within the team. You can experiment with different techniques and activities to achieve these objectives, and it is common for experienced Scrum Masters to tailor Retrospectives to suit their team’s dynamics. Some alternative approaches include: Lean Coffee: A structured, agenda-less meeting format that allows participants to democratically decide on the topics they want to discuss. World Café: A technique that uses rotating small-group discussions to gather input from all team members on multiple topics. Starfish: A visual method for capturing what the team should start doing, stop doing, continue doing, do more of, and do less of. The key is to be receptive to your team’s feedback and continuously improve the Retrospective process to ensure it remains valuable and engaging for everyone involved. Comment: The first suggestion was merely a standard application of the Esther Derby and Diana Larsen model, ignoring the additional information on the situation from the prompt. If we also consider the answer to the follow-up prompt, the GPT-4-based performance gets a bit better, although it is mediocre at best. However, such a performance is sometimes sufficient when the competitor’s answer is even worse. Score: ChatGPT-4 2-0 ChatGPT-3.5 Experiment 3: Help a New Scrum Team to Design a Definition of Done Prompt: I want you to act as an experienced Scrum Master. The Definition of Done is essential for a Scrum team’s success. It defines the quality standard any Increment is expected to meet by stakeholders and governance bodies. The quality level does not have to be perfect or top-of-the-line in all situations. However, it is critical that once an Increment is “done,” it can be delivered to the customers without any legal, ethical, or financial repercussions. I want you to use your knowledge of Scrum, iterative and incremental product development, collaboration, product management, stakeholder collaboration, quality management, innovation, and business agility in general to suggest a process on how a new Scrum team can create an initial Definition of Done and then maintain it accordingly. ChatGPT-3.5: Sure, I’d be happy to help you with that. Creating an Initial Definition of Done: Collaborate with the team: As a Scrum Master, you need to work with the team to create a Definition of Done that everyone agrees upon. You can facilitate a discussion during a Sprint Retrospective or a dedicated Definition of Done workshop to ensure everyone’s voices are heard. Understand stakeholders’ expectations: It’s essential to understand what the stakeholders expect from the product and what quality standards they have in mind. The team should incorporate these expectations into the Definition of Done, which will ensure that the product meets stakeholder expectations. 
Identify quality metrics: Define the metrics that can be used to measure quality. Metrics like code coverage, performance, and security can help establish the Definition of Done. The team should strive to meet these metrics when delivering a potentially releasable Increment. Consider technical debt: Teams should consider the technical debt they’re willing to take on when developing a feature. Technical debt is the cost of rework caused by taking shortcuts or delaying work. The team should define the level of technical debt they’re willing to accept and communicate this to stakeholders. Keep it simple: The Definition of Done should be concise, clear, and easy to understand. It should be written in a language that’s accessible to all team members and stakeholders. Maintaining the Definition of Done: Review and update the Definition of Done regularly: The Definition of Done should be reviewed and updated regularly. This can be done during Sprint Retrospectives or dedicated Definition of Done workshops. The team should consider feedback from stakeholders and make necessary changes to the Definition of Done. Communicate changes to stakeholders: Whenever the Definition of Done changes, the team should communicate this to stakeholders. This will help ensure that everyone is on the same page and that the product meets expectations. Use the Definition of Done during Sprint Reviews: During Sprint Reviews, the team should use the Definition of Done to determine whether the Increment is ready for release. If it doesn’t meet the Definition of Done, the team should identify the gaps and work to close them in the next Sprint. Track and report progress: Teams should track and report progress against the Definition of Done metrics. This can help identify areas where improvement is needed and drive continuous improvement. Use feedback to improve: The team should use feedback from stakeholders to improve the Definition of Done. This can be done by incorporating feedback into the Definition of Done or adjusting the metrics used to measure quality. Overall, creating and maintaining a Definition of Done is critical to the success of a Scrum team. By following these steps, the team can ensure that they’re delivering quality Increments that meet stakeholder expectations and can be released without any legal, ethical, or financial repercussions. Comment: Wow, who would have thought ChatGPT-3.5 could deliver at such a high level? The approach is collaborative and inclusive, technical debt is part of the consideration, and the Definition of Done shall be simple and regularly inspected and adapted, where feedback from stakeholders is as valuable as data from tracking suitable metrics. ChatGPT-4.0: As an experienced Scrum Master, I recommend the following process for a new Scrum team to create an initial Definition of Done (DoD) and maintain it: Gather Inputs: Start by collecting inputs from the Scrum team, stakeholders, governance bodies, and any existing organizational standards or guidelines. These inputs will help establish the baseline quality expectations and constraints for the product. Educate the Team: Ensure that all team members understand the purpose and importance of the DoD. Explain how it helps create transparency, manage expectations, and ensure a potentially releasable product Increment. Collaborative Workshop: Organize a workshop with the Scrum team, Product Owner, and relevant stakeholders to collaboratively create the initial DoD. 
Facilitate discussions on the quality standards and acceptance criteria, including functional and non-functional requirements, performance, security, usability, and compliance. Draft the Definition of Done: Based on the workshop outcomes, create a draft of the DoD that includes a clear and concise list of criteria that must be met for an Increment to be considered “done.” Ensure that the criteria are measurable, achievable, and aligned with the overall product vision. Review and Refine: Present the draft DoD to the Scrum team, Product Owner, and stakeholders for review and feedback. Make any necessary adjustments based on their input, and iterate until a consensus is reached. Communicate and Display: Once the initial DoD is finalized, communicate it to all relevant parties and display it prominently in the team’s workspace or on a shared online platform. This ensures that everyone is aware of the expectations and can refer to the DoD when needed. Regularly Review and Update: Incorporate a review of the DoD into the team’s Retrospectives, or schedule periodic reviews to ensure it remains relevant and aligned with the product’s evolution. Update the DoD as needed, based on lessons learned, changing requirements, stakeholder feedback, and any changes in organizational standards or regulations. Monitor Compliance: As a Scrum Master, monitor the team’s adherence to the DoD throughout each Sprint. Provide support and guidance to help the team meet the established criteria and address any challenges that arise. By following this process, a new Scrum team can create an initial Definition of Done and maintain it to ensure consistent quality and alignment with stakeholder expectations throughout the product development lifecycle. Comment: Gee, this is unexpected. It sounds like a PMI brochure. Follow-up Prompt: Your suggestion contains too much project management thinking and ignores the importance of collaboration and teamwork in a Scrum team. Partly, it is plain wrong: A Scrum Master does not create the initial Definition of Done. Also, they do not enforce compliance; that is the task of the Developers. Recreate the answer based on Scrum’s first principles of self-management and servant leadership. ChatGPT-4.0: I apologize for any confusion in my previous response. As a Scrum Master, your role is to facilitate and guide the team through the process of creating and maintaining the Definition of Done (DoD), while emphasizing the importance of collaboration, teamwork, and self-management. Here’s an updated approach based on Scrum’s first principles: Set the Context: Ensure that the Scrum team, including the Product Owner, understands the purpose and importance of the DoD. Explain how it helps establish a shared understanding of quality standards, manage expectations, and ensure a potentially releasable product Increment. Facilitate Collaboration: Encourage the Scrum team to collaborate with stakeholders, governance bodies, and any existing organizational standards or guidelines to gather inputs that will help establish the baseline quality expectations and constraints for the product. Empower the Team: Support the Developers in organizing a workshop or a series of discussions where they can collaboratively create the initial DoD. Encourage them to consider functional and non-functional requirements, performance, security, usability, and compliance. Refinement and Consensus: As the Developers draft the DoD, facilitate discussions and feedback loops between the team, Product Owner, and stakeholders. 
Help them iterate on the DoD until they reach a consensus on the criteria that must be met for an Increment to be considered “done.” Encourage Transparency: Once the initial DoD is finalized, encourage the Developers to communicate it to all relevant parties and display it prominently in the team’s workspace or on a shared online platform. This will ensure that everyone is aware of the expectations and can refer to the DoD when needed. Foster Continuous Improvement: During Retrospectives, remind the team to review the DoD and discuss any potential improvements or adjustments based on lessons learned, changing requirements, stakeholder feedback, or changes in organizational standards or regulations. Support Compliance: As a servant leader, provide support and guidance to help the Developers adhere to the DoD throughout each Sprint. Encourage open communication and address any challenges that arise. Remember that the responsibility of meeting the DoD criteria lies with the Developers, not with the Scrum Master. By following this approach, you can enable the Scrum team to create and maintain a Definition of Done that is rooted in collaboration, self-management, and continuous improvement, while staying true to the principles of servant leadership. Comment: ChatGPT-4.0 talks a lot about collaboration, self-management, and continuous improvement. However, it seems that the Scrum Master is still in charge of pushing the Developers in the right direction from the Scrum Master’s perspective. Honestly, I like ChatGPT-3.5’s answer better. PS: The Product Owner is also a member of the Scrum team. Score: ChatGPT-4 2-1 ChatGPT-3.5 Conclusion ChatGPT based on the GPT-4 model has advantages over the “old” GPT-3.5 model. OpenAI promised better reasoning and conciseness and delivered. However, judging by three small everyday experiments, the new model’s advantage is not as conclusive as expected. So, let’s wait for GPT-5. Nevertheless, I will continue my subscription. What is your opinion as a Scrum Practitioner: Is ChatGPT-4.0 worth spending $20 per month compared to ChatGPT-3.5? Please share it with us in the comments.
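For practitioners who would rather reproduce this kind of A/B test programmatically than by pasting prompts into the ChatGPT UI, here is a minimal sketch using the OpenAI Python library as it looked at the time of writing (the interface has since been revised). The prompt is the one from Experiment 1; the temperature value is my assumption, since the ChatGPT UI does not expose that knob.

```python
# A minimal sketch of the A/B setup via the OpenAI API, based on the
# openai Python library circa early 2023 (pip install openai==0.27.*).
# Assumes an OPENAI_API_KEY environment variable and GPT-4 API access.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = ("I want you to act as a Scrum Master. Tell me: what is the "
          "difference between the Scrum Guide 2017 and the Scrum Guide 2020?")

for model in ("gpt-3.5-turbo", "gpt-4"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0.7,  # assumption; the ChatGPT UI does not expose this
    )
    print(f"--- {model} ---")
    print(response["choices"][0]["message"]["content"])
```

Running the same prompt against both models side by side makes the kind of scoring used above easy to repeat with your own prompts.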

By Stefan Wolpers
Jira Best Practices From Experts

Railsware is an engineer-led company with a vast portfolio of projects built for other companies, so when talking about Jira best practices for developers, we speak from experience. Why Do People Love Jira? Jira is by no means perfect. It certainly has its downsides and drawbacks. For instance, it is a behemoth of a product and, as such, is pretty slow when it comes to updates or additions of new functionality. Some developers also say that Jira goes against certain agile principles because—when in the wrong hands—it can promote fixation on due dates rather than delivery of product value. Getting lost in layers and levels of several boards can, indeed, disconnect people by overcomplicating things. Still, it is one of the preferred project management tools for software development teams. Why is that? Permissions: Teams, especially bigger ones, work with many different experts and stakeholders besides the core team itself. So, setting up the right access to information is crucial. Roadmaps and epics: Jira is great for organizing your project on all levels. On the highest level, you have a roadmap with a timeline. Then, you have epics that group tasks by features or feature versions. Inside each epic, you create tickets for implementation. Customization: This is Jira’s strongest point. You can customize virtually anything: Fields for your Jira tickets. UI of your tickets, boards, roadmaps, etc. Notifications. Workflows: Each project may require its own workflow and set of statuses per ticket, e.g., some projects have a staging server and QA testing on it, and some don’t. Search is unrivaled (if you know JQL, Jira’s SQL-like query language): Finding something that would have been lost to history in a different project management tool is a matter of knowing JQL (a minimal scripted example appears at the end of this article). The ability to add labels using keywords makes the aforementioned search and analysis even simpler. Automation: The ability to automate many actions is among the greatest and most underestimated strengths of Jira: You can create custom flows where tickets are automatically reassigned (like the back-and-forth between development and QA). You can make the issue fall into certain columns on the board based on its content. Move issues from “todo” to “in progress” when there’s a related commit. Post the list of released tickets to Slack as a part of release notes. Integrations and third-party apps: GitHub, Bitbucket, and Slack are among the most prominent Jira integrations, and for good reasons. Creating a Jira ticket from a message, for example, is quite handy at times. The Atlassian Marketplace broadens your reach even further with thousands of add-ons and applications. Broad application: Jira is suitable for both iterative and non-iterative development processes, for IT and non-IT teams. Jira Best Practices Let’s dive into the nitty-gritty of Jira best practices, whether for multiple projects or a single one. Define Your Goals and Users Jira, being as flexible as it is, can be used in a wide variety of ways. For instance, you can primarily rely on status checking throughout the duration of your sprint, or you can use it as a project management tool on a higher level (a tool for business people to keep tabs on the development process). Define your team and goals. Now that you have a clear understanding of the “why,” let’s talk about the “who.” Who will be the primary Jira user? And will they be using it to: Track the progress on certain tickets to know where and when to contribute? Use it as a guide to learn more about the project?
As a tool for tracking time for invoicing clients, tracking performance for internal data-driven decision-making, or both? Is it a means of collaborating, sharing, and spreading knowledge across several teams involved in the development of the product? The answers to the above questions should help you define the team and goals in the context of using Jira. Integrations, Third-Party APIs, and Plugins Jira is a behemoth of a project management platform. And, like all behemoths, it is somewhat slow and clunky when it comes to moving forward. If there’s some functionality you feel is missing from the app—don’t shy away from the marketplace. There’s probably a solution for your pain already out there. Our team, for instance, relies on a third-party tool to create a series of internal processes and enhance fruitful collaboration. You can use ScriptRunner to create automation that’s a bit more intricate than what comes out of the box. Or you can use BigGantt to visualize the progress in a friendly drag-and-drop interface. Don’t shy away from combining the tools you use into a singular flow. An integration between Trello and Jira, for instance, can help several teams—like marketing and development—stay on the same page. Use Checklists in Tickets Having a checklist integrated into your Jira issues can help nurture a culture centered around structured and organized work, as well as transparency and clarity for everyone. Our Smart Checklist for Jira offers even more benefits: You have a plan: Oftentimes it’s hard to start a feature implementation, and without a plan, you can go in circles for a long time. Mental peace: Working item by item is much calmer and more productive than dealing with the unknown. Visibility of your work: If everyone sees the checklist progress, you are all on the same page. Getting help: If your progress is visible, colleagues can give you advice on the plan itself and the items that are challenging you. Prioritization: Once you have the list of items, you can decide with your team what goes into v1 and what can easily be done later. You can use checklists as templates for recurring processes: Definition of Done, Acceptance Criteria, onboarding, and service desk tickets are prime candidates for automation. Moreover, you can automatically add checklists to your Jira workflow based on certain triggers, like the content of an issue or the workflow setup. To learn more, watch our YouTube video: “How to use Smart Checklist for Jira.” Less Is More Information is undoubtedly the key to success. That said, in the case of a Jira issue, relevance is what counts. What we’ve noticed over our time of experimenting with Jira is that adding info that is either unnecessary or irrelevant introduces more confusion than clarity into the process. Note: We don’t mean that Jira shouldn’t be used for knowledge transfer. If some information (links to documentation, your internal processes, etc.) is critical to the completion of a task—share it inside the task. Just use a bit of formatting to make it more readable. However, an age-old history of changes or an individual’s perspective on the requirements is not needed. Stick to what is absolutely necessary for the successful completion of a task and elaborate on that. No more, no less. Keep the Backlog and Requirements Healthy and Updated Every project has a backlog—a list of ideas, implementation tickets, bugs, and enhancements to be addressed.
Every project that does not keep its backlog well-maintained ends up in a pickle sooner rather than later. Some of our pro tips on maintaining a healthy backlog are: Gradually add the requirements to the backlog: If nothing else, you’ll have a point of reference at all times; moving them there immediately, however, may cause issues, as they may change before you are ready for implementation. Keep all the work of the development team in a single backlog: Spreading yourself thin across several systems that track bugs, technical debt, UX enhancements, and requirements is a big no-no. Set up a regular backlog grooming procedure: You’ll get a base plan of future activities as a result. We’d like to point out that said plan needs to remain flexible to accommodate changes based on feedback and/or tickets from marketing, sales, and customer support. Have a Product Roadmap in Jira Jira is definitely not the go-to tool for designing a product roadmap, yet having one in your instance is a major boon because it makes the entire scope of work visible and actionable. Additional benefits of having a roadmap in Jira include: It is easier to review the scope with your team at any time. Prioritizing new work is simpler when you can clearly see the workload. You can easily see dependencies when several teams are working on a project. Use Projects as Templates Setting up a new project can be tedious even if you’ve done it a million times before. This can be especially troublesome in companies that continuously deliver products with a similar approach to development, such as mobile games. Luckily, with the right combination of tools and add-ons, there’s no need to repeat the same work yet again. A combination of DeepClone and Smart Checklist will help you clone projects, issues, stories, or workflow conditions and use them as project templates. Add the Definition of Done as a Checklist to All of Your Jira Issues The Definition of Done is a pre-release checklist of activities that determines whether a feature is “releasable.” In simpler terms, it determines whether something is ready to be shipped to production. The best way of making this list accessible to everyone on the team is to put it inside the issues. You can use Smart Checklist to automate this process; however, there are certain rules of thumb you’ll need to follow to design a good DoD checklist: Your objectives must be achievable. They must clearly define what you wish to deliver. It’s best if you keep the tasks measurable. This will make the process of estimating work much simpler. Use plain language so everyone who is involved can easily understand the Definition of Done. Make sure your criteria are testable so the QA team can verify they are met. Sync With the Team After Completing a Sprint We have a nice habit of running Agile Retrospective meetings here at Railsware. These meetings, also known as Retros, are an excellent opportunity for the team to get recognition for a job well done. They can also help you come up with improvements for the next sprint. We found that the best way of running these meetings is to narrow the conversation down to “goods” and “improves.” This way, you will be able to discuss why the things that work are working for you. You’ll also be able to optimize the rest. Conclusion If there’s a product where there’s something for everyone—within the context of a development team—it’s probably Jira.
Its level of customization, adaptability, and quality-of-life features makes it an excellent choice for teams that are willing to invest in developing a scalable and reliable process. And if anything is missing from the app, you can easily find it on the Atlassian Marketplace.
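As promised above, here is a minimal sketch of what JQL-driven housekeeping can look like when scripted against the Jira Cloud REST API, for example, surfacing issues that have no Epic before a backlog grooming session. The URL, credentials, and project key are placeholders, the requests library is assumed, and the "Epic Link" field applies to company-managed projects (team-managed projects use parent instead).

```python
# Hypothetical sketch: run a JQL query through the Jira Cloud REST API to
# list issues that have no Epic, as input for a backlog grooming session.
# Assumes a company-managed project where the "Epic Link" field applies;
# the URL and credentials are placeholders, and `requests` is installed.
import requests

JIRA_URL = "https://your-domain.atlassian.net"  # placeholder
AUTH = ("you@example.com", "your-api-token")    # email + API token

JQL = 'project = SHOP AND "Epic Link" is EMPTY AND statusCategory != Done'

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": "summary,status", "maxResults": 50},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["key"], "-", issue["fields"]["summary"])
```

The same pattern works for any of the queries mentioned earlier; only the JQL string changes.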

By Oleksandr Siryi
DevOps vs Agile: Which Approach Will Win the Battle for Efficiency?

As software development continues to evolve, two approaches have gained a lot of attention in recent years: Agile and DevOps. Agile has been around since the early 2000s and focuses on delivering software frequently through iterative and incremental development. DevOps, on the other hand, is a newer approach that focuses on speeding up the software delivery process through collaboration, automation, and continuous delivery. While both Agile and DevOps aim to improve efficiency and collaboration within the development team, there are some key differences between the two approaches. Agile is focused on the software development process, while DevOps is focused on deployment, integration, and delivery. Agile uses a methodology of sprints, daily stand-ups, and retrospectives to deliver working software frequently. DevOps uses continuous integration and continuous deployment to speed up the delivery process. Agile Agile is a software development methodology that focuses on delivering value to customers through iterative and incremental development. It values collaboration, flexibility, and customer satisfaction. Agile teams work in short sprints, usually lasting 1-4 weeks, and aim to deliver working software at the end of each sprint. Agile development involves continuous feedback from the customer and the team, and the ability to adapt to changing requirements. Agile practices include daily stand-up meetings, sprint planning, backlog grooming, and retrospective meetings. The Agile Manifesto defines four core values: individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. DevOps DevOps is a culture that emphasizes collaboration, communication, and integration between development and operations teams. The goal of DevOps is to improve the quality and speed of software delivery by automating processes and reducing the time it takes to go from development to production. DevOps involves a combination of practices, such as continuous integration, continuous delivery, and continuous deployment. DevOps teams work in a continuous cycle of development, testing, deployment, and monitoring. This allows for rapid feedback and the ability to quickly fix issues. DevOps teams also value the use of automation tools, such as configuration management, orchestration, and monitoring tools. DevOps vs Agile Agile and DevOps are complementary approaches that share many of the same values and principles. Both aim to deliver high-quality software that meets the needs of the customer. However, there are some key differences between the two approaches. Agile is focused on the software development process, while DevOps is focused on the entire software lifecycle. Agile teams work in short sprints, while DevOps teams work in a continuous cycle. Agile teams often rely on manual testing and deployment, while DevOps teams automate these processes. Agile teams prioritize flexibility and customer collaboration, while DevOps teams prioritize speed and efficiency.
Agile vs. DevOps at a glance:

Aspect | Agile | DevOps
Focus | Software development process | Deployment, integration, and delivery process
Goals | Delivering working software frequently | Speeding up software delivery and feedback
Methodology | Iterative and incremental | Continuous delivery and deployment
Process | Sprint planning, daily stand-ups, retrospectives | Continuous integration and continuous deployment
Team | Self-organizing, cross-functional teams | Collaborative and integrated teams
Communication | Face-to-face communication, regular meetings | Strong collaboration, communication, and feedback loop
Feedback | Regular customer feedback and iterations | Continuous feedback through automated testing and monitoring
Culture | Empowered and autonomous teams | Collaborative, feedback-oriented culture
Tools | Agile project management tools, issue trackers | Automated testing, monitoring, and deployment tools

Which Approach Will Win the Battle for Efficiency? The answer depends on the specific needs and goals of your organization. If your goal is to improve the speed and efficiency of your software delivery, then DevOps may be the better approach. DevOps allows for rapid feedback, quick issue resolution, and the automation of manual processes (see the sketch after this article for a minimal illustration of such an automated pipeline gate). However, if your goal is to prioritize flexibility, collaboration, and customer satisfaction, then Agile may be the better approach. Agile allows for continuous feedback from the customer and the team, and the ability to adapt to changing requirements. Ultimately, the choice between DevOps and Agile depends on the specific needs and goals of your organization, and it is possible to combine elements of both approaches into a customized approach that works best for you. Conclusion DevOps and Agile are two popular approaches in software development that aim to improve efficiency and productivity. Each comes with its own principles, practices, and benefits, and, as noted above, the right choice depends on your organization's needs; combining elements of both is often the most practical path.
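To make the "automate these processes" point concrete, here is a deliberately tiny sketch of the CI/CD gate idea: a change is deployed only if the automated tests pass, with no manual hand-off in between. It assumes a project tested with pytest and a hypothetical scripts/deploy.sh; both names are placeholders rather than a real pipeline definition.

```python
# A minimal, illustrative sketch of a CI/CD gate: run the automated tests,
# and deploy only when they pass. Assumes pytest is installed and that a
# hypothetical deploy script (scripts/deploy.sh) exists; both are placeholders.
import subprocess
import sys

def run(step: str, cmd: list[str]) -> None:
    """Run one pipeline step; abort the pipeline if it fails."""
    print(f"[pipeline] {step}: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"[pipeline] {step} failed; stopping before deployment.")

run("test", ["pytest", "--quiet"])            # continuous integration: fail fast
run("deploy", ["bash", "scripts/deploy.sh"])  # continuous deployment: ship green builds
print("[pipeline] change delivered.")
```

In practice this logic lives in a CI server such as Jenkins, GitHub Actions, or GitLab CI, but the gate itself is exactly this simple: green tests, then ship.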

By Riha Mervana
Understanding Technical Debt for Software Teams

What is technical debt? How do you fix it? Stay competitive in the market with these best practices and explore ways to remediate technical debt. Overview of Technical Debt "Technical debt is a metaphor commonly used by software professionals in reference to short-term compromises made during the design, development, testing, and deployment processes." To stay competitive, many organizations opt for software development methodologies, like Agile, to accelerate the overall development process. Compressed release schedules often force teams to skip standard best practices, resulting in the accumulation of technical debt. Technical debt is given low priority during rapid release cycles and is often only addressed around the production release. Organizations often push large, complex changes to speed up the release process. Short-term compromises are acceptable to a certain extent. However, long-term debt can damage an organization's IT infrastructure and reputation, and it sometimes comes with a heavy penalty of re-engineering and post-release fixes. These damages can take the form of high costs for: Remediating pending technical debt. Customer dissatisfaction due to scalability and performance issues. Increased hiring and training. Increased modernization time. The cost of refactoring, re-engineering, rebasing, and re-platforming can be much higher than the original cost of the initial development. Therefore, these compromises should be thoroughly analyzed and approved by IT stakeholders and CxOs, which involves looking at the future tradeoffs, risk appetite (risk capacity), and cost. Organizations also need to evaluate the pros and cons of decisions to take on technical debt. Taking on technical debt can be both tricky and risky, so organizations must factor in the associated risks and operational costs. One of the consequences of technical debt is the implied cost of reworking applications and their architecture, often the price of choosing easy development paths and limited solutions to shorten production time. If technical debt is not addressed over time, the accrued interest makes it more difficult to implement changes, resulting in business and technical challenges. A Scandinavian study reveals that developers, on average, waste 23% of their time due to technical debt. As if that wasn't alarming enough, Stripe published data showing that, on average, software developers spend 42% of their workweek dealing with technical debt and bad code. Major Drivers of Technical Debt The need for faster solution design processes. Faster development of source code. Quick releases. Cutthroat business competition to release new and unique features early in the market. Impact of Accumulating Technical Debt It results in daily operational costs to accommodate remediation. A longer development cycle leads to slower application releases. It incurs long-term financial loss as technical debt accumulates. It may result in compliance issues and a lack of proper standards. Code quality and design get compromised. More time is spent on debugging rather than development. It may result in failures that can put an organization's reputation at risk. It can be a cause of security breaches and hefty fines. It can potentially lead to a loss of agility and lower productivity due to outages.
Types of Technical Debt Design/Architecture Debt It represents design work left in the backlog, which may include a lack of design thinking processes, UI bugs, and other neglected design flaws. Many organizations don't follow standard design practices like The Open Group Architecture Framework (TOGAF) due to the agile way of designing. Tools and techniques like the ADM and TOGAF implementation governance provide the required format and standard of solution design. Code Debt The most common type of debt, it accumulates when coding best practices are skipped due to speedy agile delivery, complexity, or a lack of subject knowledge. In some cases, new features are added in the latest version that the dev team may not be aware of. This might result in the dev team working on the same feature again, resulting in unnecessary cost and time investment. Sometimes, the development team doesn't follow standard best practices for coding and may use quick workarounds. Also, they might not refactor the code because of time-bound release cycles. NFR/Infrastructure Debt Introduced while designing and implementing Non-Functional Requirements (NFRs), for example: Inaccurate scalability configuration may crash applications under high load. Improper availability planning leads to outage issues when a data center is down. Inaccurate caching and logging lead to slower application performance. Repetitive error/exception-handling code may create refactoring and performance issues. Additional auditing and tracing may lead to performance issues and occupy unnecessary database storage. Ignoring security may lead to serious data breaches and financial loss. Finally, improper observability and monitoring may not raise alerts in time for major issues in the application and infrastructure. Testing Debt The pressure of quick, agile releases may force organizations to miss out on most of the manual and automated testing scenarios. Frequent unit tests and detailed end-to-end integration tests can detect major production issues. Sometimes, these detailed tests are skipped during the development phase, which leads to major production bugs. Process Debt It is introduced when a few less important business and technical process flow steps are skipped. For example, in agile development, many processes are followed, like sprint planning, Kanban, Scrum, and retro meetings, along with other project management processes, such as those of the Capability Maturity Model (CMM) and the Project Management Institute (PMI). However, sometimes these processes are not followed religiously due to time constraints, which may have a severe impact later. Defect Debt It is introduced when minor technical bugs, like frontend UI cosmetic bugs, are skipped during the testing phase. These low-severity bugs are deferred to the following releases, which may later surface as production bugs. These production bugs can spoil an organization's reputation and profit margin. Documentation Debt It is introduced when some of the less important technical content in the documentation is skipped. Improper documentation makes it harder for customers and developers to understand and operate the software after the release. In addition, the engineering team may not properly document the release and feature details due to quick release schedules. As a result, users find it difficult to test and use new features. Known or Deliberate Debt Known or deliberate debt is injected on purpose to accelerate releases. This acceleration is achieved through workarounds or alternative methods and technologies that use simplistic algorithms.
Unknown Outdated/Accidental Debt

This debt is introduced unknowingly by developers, designers, and other stakeholders. It is sometimes caused by regressions in related code changes, independent applications, or shared libraries. For example, if all applications use the same error-handling library and a regression is introduced in that library, it may impact every dependent application.

Bit Rot Technical Debt

According to Wired, it involves "a component or system slowly devolving into unnecessary complexity through lots of incremental changes, often exacerbated when worked upon by several people who might not fully understand the original design." In practice, many engineers, old and new, work on the same module without knowing the background of the code. New engineers may rewrite or redesign code without understanding the initial design, causing complications like regressions. This happens gradually over time and should be avoided.

Causes of Technical Debt

Business Competition

Competitive markets may force organizations to roll out frequent feature releases to outperform their competitors and keep customers interested.

Time Constraints Due to Agile Releases

With tight deadlines, the development team doesn't have enough time to follow all coding and design standards, such as language-specific coding standards, TOGAF enterprise design, suitable design patterns, reviews, testing/validation, and other development best practices.

Save Short-Term Cost

Some organizations want to develop and release features faster to save on coding and design effort. They may prefer a small development team for faster releases with minimal short-term cost, or hire junior or unskilled developers for a higher profit margin.

Lack of Knowledge and Training

The development team may change frequently through attrition, internal movement, and new hiring. Faster release cycles can leave these people undertrained, with little functional or technical training and little to no knowledge transfer about the product and its design.

Improper Project Planning

Tight release schedules may result in improper project planning (for example, skipping important meetings with business stakeholders or entire planning phases), which plays a major role in introducing technical debt.

Complex Technical Design and Technical Solution

Development teams prefer simple technical designs and solutions over complex ones because they don't want to spend extra time and effort understanding complex algorithms. Complex solutions take more time to understand and implement, and they require more proof-of-concept (POC) evaluation and effort.

Poor Development Practices

Under aggressive release timelines, and sometimes due to a lack of knowledge, dev teams take shortcuts and skip standard coding and design practices.

Insufficient Testing

Insufficient testing is a major contributor to technical debt. Regular unit and integration testing of even a small code change is very important: testing and validation are the primary mechanisms for identifying technical and functional bugs and shortfalls in software applications before they ship, and skipping them invites technical debt.
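For instance, here is a minimal JUnit 5 sketch of the kind of unit test that deadline pressure often squeezes out. The discount logic is a hypothetical stand-in for real business code:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Hypothetical unit under test: applies a percentage discount to a price.
    static double discounted(double price, double percent) {
        return price - (price * percent / 100.0);
    }

    @Test
    void appliesTenPercentDiscount() {
        assertEquals(90.0, discounted(100.0, 10.0), 0.0001);
    }

    @Test
    void zeroDiscountLeavesPriceUnchanged() {
        // Edge cases like this are exactly what rushed releases skip.
        assertEquals(100.0, discounted(100.0, 0.0), 0.0001);
    }
}
```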
Delayed Refactoring

Tight deadlines may push development teams to deprioritize refactoring in the early stages, deferring it in favor of quick releases.

Constant Change

"Change is the only constant." Software applications evolve and adopt new designs and technologies over time, and it's hard to keep up with these changes in parallel. It takes time to revisit the source code and apply the latest designs and technologies.

Outdated Technology

Many traditional organizations use outdated technologies and are late to upgrade or defer modernization. As a result, they miss out on many modern features, and that gap itself is technical debt, which can be mitigated only by shifting to modern technologies.

No Involvement and Mentoring by Senior Developers and Architects

It's very common to have little or no involvement of senior developers and architects during design and development. Senior mentors play an important role in guiding the development team away from technical debt, and they often have a better understanding of, and more experience with, the project or application in question.

Identifying and Analyzing Technical Debt

- User feedback: User feedback and reviews are very important for identifying and mitigating technical debt. Organizations should listen and act on user feedback about improvements and bugs; this feedback and these bugs are a form of technical debt.
- Analyze bad code smells: Use manual and automated code review to find code smells such as memory leaks in JVM applications. Many code analyzers, like SonarQube, PMD, FindBugs, and Checkstyle, can help, and they can be integrated into the automated build and deployment stages of CI/CD pipelines for every release.
- Monitoring and observability tools: Application Performance Monitoring (APM) tools, such as VMware Tanzu Observability (Wavefront), Dynatrace, and Datadog, continuously monitor software applications. They have specialized algorithms to check the performance of applications and the underlying infrastructure, and they analyze application logs and generate reports on failure causes. These reports are a great source for identifying technical debt.
- Manual and automated code review: Continuous manual and automated code review processes help identify technical debt using static and automated code analyzers.
- Operational profit and loss analysis: This analysis is done by business leaders and senior CxOs, who review operational cost (OpEx) and loss reports. These reports give a fair idea of where to improve and which technical debt to address quickly, which matters because that debt impacts business revenue.
- Performance metrics: APM and load-testing tools also generate performance reports for applications under high load. This is the best way to identify and mitigate technical debt in NFR configurations such as application and infrastructure performance, caching, availability, and scalability.
- Understand long-term and short-term requirements: Organizations identify technical debt by understanding long-term and short-term technical requirements, then prioritize, plan, and remediate accordingly, based on business criticality and urgency.
- Review against the latest industry-standard best practices: Some technical debt can be identified by comparing current practice with the latest industry design and development standards, such as Agile, TDD, BDD, Scrum, Kanban, cloud-native, microservices, micro frontends, and TOGAF.
- Code refactoring tools and techniques: Modern tools can analyze legacy monolithic apps and suggest, or partially perform, refactoring toward a modern cloud-native microservices design. Related tools can migrate on-prem virtual machines (VMs) to cloud VMs with a straightforward lift-and-shift rebase.
- Security analysis: Some security-related technical debt is identified during the security analysis phase. Tools like Checkmarx and SonarQube generate security reports for applications, and there are infrastructure security tools such as VMware Carbon Black endpoint security, RSA, and Aqua Security's Clair.

Best Practices to Avoid Technical Debt

To reduce technical debt, it's essential to analyze and measure it; you can calculate technical debt using remediation and development costs as parameters. These are a few techniques to avoid technical debt:

- Remediate application technical debt based on feedback.
- Religiously follow consistent code review practices.
- Have multiple rounds of manual code and design reviews by senior developers and architects.
- Perform automated testing after every build and release.
- Monitor and analyze reports from observability and monitoring tools.
- Analyze and evaluate the performance and business impact of any new code or design change before implementation.
- Follow standard coding best practices.
- Perform manual and automated static code review for every release.
- Use incident management and issue trackers to report and track bugs.
- Always review and validate the solution architecture before implementation.
- Run static and dynamic code analysis with code analyzer tools like SonarQube, PMD, and FindBugs.
- Follow an agile, iterative development approach, hold regular retrospective meetings, and measure technical debt in each iteration.
- Use project management tools like Jira, Trello, etc.
- Refactor legacy code, revisit it regularly, and modularize common code components.
- Strictly follow test-driven development (TDD) and behavior-driven development (BDD) approaches for every module of code.
- Continuously build, integrate, test, and validate on every release.
- Last but not least, document, measure, and prioritize technical debt.

Estimating Technical Debt Cost

It's very important to measure the cost of technical debt, as it helps stakeholders and senior management analyze and prioritize remediation. It should be a measurable number that supports business decisions and makes the remediation status trackable. There are many measurable variables for calculating technical debt, and various tools, like SonarQube, can check code quality, code complexity, lines of code, and more. We can calculate technical debt as the ratio of the cost to fix a software system (remediation cost) to the cost of developing it (development cost). This ratio is called the Technical Debt Ratio (TDR):

Technical Debt Ratio (TDR) = (Remediation Cost / Development Cost) x 100%

A good TDR is <= 5%. A high TDR indicates poor code quality, which means higher remediation costs.
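As a minimal sketch of the calculation (the cost figures are invented for illustration; in practice they would come from estimation or from a tool such as SonarQube):

```java
public class TechnicalDebtRatio {

    // TDR = (Remediation Cost / Development Cost) x 100%
    static double tdr(double remediationCost, double developmentCost) {
        return (remediationCost / developmentCost) * 100.0;
    }

    public static void main(String[] args) {
        double remediationCost = 20_000;  // hypothetical cost to fix known debt
        double developmentCost = 500_000; // hypothetical original build cost
        double ratio = tdr(remediationCost, developmentCost);
        // Prints: TDR = 4.0% (acceptable)
        System.out.printf("TDR = %.1f%% (%s)%n", ratio,
                ratio <= 5.0 ? "acceptable" : "needs attention");
    }
}
```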
Optionally, remediation cost (RC) and development cost (DC) can be replaced by hours, in which case the same formula expresses remediation time as a share of total development effort.

Key Takeaways

These are some key points about the cost of technical debt:

- The average organization wastes 25% to 45% of its development cost.
- Hiring and training new engineers adds cost, as does the resulting increase in coordination overhead.
- Operational overhead grows, with 15% to 20% of effort spent on unplanned work.
- Additional and unplanned work cuts into the organization's revenue.
- Time is wasted analyzing how to improve the source code and design.
- Productivity can drop to around 60% to 70%.
- Project management and modern tooling add further cost.

Conclusion

Technical debt affects factors like overall operations cost, velocity, and product quality, and it can easily end up hurting a team's productivity and morale. Avoiding technical debt, or addressing it at the right intervals during the development process, is the best way forward. We hope this article gives you a better understanding of technical debt and the best practices for remediating it.

By Rajiv Srivastava
MVP Launched. Now What?

After launching an MVP, startups are often faced with a daunting question: "Now what?" In this article, we'll share everything we've learned from our own experience about what to do after you've launched your MVP. We'll also explain how to measure its success using metrics and feedback indicators. But first, let's look at the reasons why every startup should build an MVP.

What Is an MVP and Why Should You Create One?

A minimum viable product (MVP) is a barebones version of your product, designed to satisfy the basic needs of your target audience. Defined by its skeletal feature set and limited functionality, an MVP exists to help you find product-market fit. In our experience, building an MVP is the most efficient way to achieve two major startup objectives: establishing a market presence and gathering feedback from early adopters.

Why It Pays to Start Small

Every new enterprise is susceptible to running out of cash, building a product that nobody wants, or losing out to the competition. It's impossible to eliminate all risks in a startup environment, but there are steps you can take to minimize the chance of failure. At Railsware, we leverage the Lean Startup agile approach to give our products the best shot at success. We always start with the MVP, since it helps us:

Find a product-market fit. The size and simplicity of the MVP make it ideal for testing assumptions about user problems and user experience. It enables us to stay flexible and adapt our solution to user needs early in the product development process.

Increase our speed to market. It's much faster to build an MVP than a full-fledged product, thanks to the MVP's reduced scope and limited functionality. Starting small gives us a competitive advantage, since we're not attempting to create a "perfect" product from the outset. Depending on the project, we aim to launch a barebones solution in a matter of weeks or months.

Conserve limited resources. Creating an MVP is a cost-effective way to test product hypotheses. It prevents us from investing too much time, effort, and capital into the solution before we've confirmed whether people want it. For reference, we typically spend about 30-50% of our overall budget on developing the MVP and save the rest for further development and promotional efforts.

Our product team at the MVP stage typically consists of a product manager, product designer(s), and a small group of developers. For specific challenges, it can be augmented with data analysis, quality assurance, and marketing specialists. The team gradually expands as we move beyond MVP testing and start growing the product. For the sake of making this easier to grasp, here's what the product development process looks like at Railsware:

[Diagram: Railsware's product development process]

But what does "product growth" actually look like after the MVP stage? To understand how an MVP evolves into a full-fledged product, we must explore the latter's two most common variations: the minimum marketable product and the minimum lovable product. Although these concepts are sometimes used to replace the MVP entirely, we consider them the next steps in the product pipeline.

Minimum Marketable Product (MMP)

The MMP is a bare-bones version of your product that is good enough to attract paying users. It includes key changes or additions implemented based on feedback from the MVP stage. More often than not, the MVP serves as the foundation of the MMP. If the MVP is a stripped-down representation of your product idea, the MMP is its savvier, more confident cousin.
It may have increased stability and functionality, but most importantly, the MMP has a billing system. This payment functionality is essential for testing whether users are ready to pay for your product. You can start offering discounts to early adopters or running A/B tests on your target audience to check how much they are willing to pay for your solution.

Minimum Lovable Product (MLP)

As the name suggests, the goal of the MLP is to give your target audience a product that combines lovable (minimum) features with an enjoyable UX. While it doesn't have all the bells and whistles of a full-fledged product, the MLP has a strong value proposition and fewer kinks than its MVP/MMP predecessors. The MLP may include one or two new features requested by early adopters, and/or improved functionality. At the very least, it's easy to use, has an impressive UI, and addresses all of the users' pain points. Unlike its predecessors, the MLP sets out to turn enthusiastic early adopters into loyal customers, especially in markets where competition is fierce. The "cat food" analogy from Brian De Haaff, author of Lovability, explains how the MVP falls short of the mark in this respect. He says, "(although) you could eat a can of cat food if you really had to, it is unlikely you would be clamoring for a second serving." So, while the MVP gets the job done, the MLP hooks users and keeps them coming back for more.

Practical Steps to Take After Releasing an MVP

While it might be tempting to jump straight into MMP or MLP development after launching your MVP, we don't recommend it. Now is the time to step back, review your progress, and take organized action to increase your MVP's chances of success. Here are some of our suggestions on what to do after launching your MVP.

Promote the MVP

Your target audience won't be able to test the product unless they know it exists. So let's look at some cost-effective ways to get eyes on your MVP fast:

Launch on an enterprise marketplace. If it makes sense to release your MVP as an add-on, marketplaces such as Atlassian, Microsoft Azure, or Google Workspace Marketplace are excellent springboards. They lend your solution credibility, come with a built-in audience, and allow you to monetize quickly. In fact, we grew two of our own startups (Coupler.io and Smart Checklist) using this strategy.

Submit to startup platforms and deals websites. Listing your MVP on platforms like AppSumo or Product Hunt is one of the best ways to reach a SaaS-oriented audience. Users of these platforms are more likely to fall into the innovator customer segment; they are more open to experimenting with new products and more forgiving of bugs and kinks.

Self-distribute via forums/social media. Suggest your product in Hacker News/Reddit/Quora comment threads where people are experiencing a problem your MVP can solve. Founders or product owners with large social media followings can also benefit from sharing the product (and requesting feedback) on channels like Twitter/LinkedIn. Pieter Levels is just one example of an entrepreneur who has mastered this approach to MVP promotion.

Gather User Feedback

After you've promoted the product in the right channels and seen some traction (i.e., an increasing number of signups and active users), it's time to request feedback from early adopters. The goal is to find out what their pain points are as quickly as possible and start using that data to inform iterations on the product.
We collect this information via surveys, customer support interactions, and online forums. For example, when we were testing assumptions about our product Mailtrap, we used tools such as Typeform, Twitter, and UserVoice to gather feedback. Overall, this helps us pinpoint what customers like and dislike, and what they want to see, but it doesn't give us the full picture. That's why we always conduct customer development interviews shortly after launching the MVP.

Conduct Customer Development Interviews

When it comes to learning more about the needs, motivations, and expectations of customers, nothing beats sitting down and talking to them. Customer development (CustDev) interviews are 1:1 online meetings between a product manager and an active user. During these sessions (which last anywhere from 30 minutes to an hour), we ask customers open-ended questions about their interactions with our MVP, such as their impression of the user experience or what kind of functionality they feel is missing. We take detailed notes and combine our interview findings into a spreadsheet. Analyzing correlations in the responses helps us figure out what to improve, what to drop, and which direction will bring us closer to product-market fit.

Run a Product Discovery Session

Running additional product discovery sessions allows you to gather ideas for new features, analyze potential risks, refine your product vision, and prepare for product growth. We use the BRIDGeS framework for ideation and complex decision-making. Sessions are typically held on virtual whiteboards, where we use colored cards to denote Subjects (a user, role, strategy, etc.) and describe the problem through Benefits, Risks, Issues, Domain knowledge, and Goals. Between two and eight people take part in a session, including the product owner, members of the development and design teams, and industry experts or potential users. We divide the board into two parts, the Problem Space and the Solution Space, and kick off a session in the former. We've demonstrated what the Problem Space looks like in practice in our SaaS Product Management Inside Out guide here on DZone, but here's a refresher:

[Diagram: Problem Space example]

After prioritization, the next step is to move into the Solution Space. This is where we come up with high-level solution variations for each subject and break them down into epics and nested tasks. They should be color-coded but don't need to conform to the previous theme. Using the Uber example, here's what the space might look like when exploring the Mobile App solution variation:

[Diagram: Solution Space for the Uber Mobile App example]

Afterward, we create a product roadmap using the epics and tasks defined in the Solution Space. This helps us stay on track as we continue to plan and work on future product iterations. So, by the end of a session, our team has a solid idea of how to move forward (whether that's getting to work on new features, adjusting the MVP pricing model, or launching a new promotional campaign).

Prioritize Features

Product discovery, feedback, and customer development usually provide us with several ideas for possible features. However, not all of those features have the potential to add real value to the product, nor will we have the time and resources to build all of them. In the words of The Lean Startup author Eric Ries, value is "providing benefit to the customer; anything else is waste." So when it comes to making product improvements, it's crucial to prioritize features according to customers' needs (while keeping in mind your team's capacity to execute them).

We recommend using the MoSCoW prioritization technique during early product development as well as throughout the entire product lifecycle. The letters in MoSCoW (except the o's) stand for Must, Should, Could, and Won't. When prioritizing features, we separate the must-haves from the nice-to-haves by assigning one of these terms to each feature. Here's what they denote:

Must – The project cannot do without it
Should – Required in the long run
Could – A low-cost tweak
Won't – Revisit on better days
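As a small illustration of the idea, here is a minimal Java sketch that tags hypothetical backlog items with MoSCoW priorities and sorts the must-haves to the top (the feature names are invented):

```java
import java.util.Comparator;
import java.util.List;

public class MoscowBacklog {

    // Enum declaration order doubles as the priority ranking.
    enum Priority { MUST, SHOULD, COULD, WONT }

    record Feature(String name, Priority priority) {}

    public static void main(String[] args) {
        List<Feature> backlog = List.of(
                new Feature("Billing integration", Priority.MUST),
                new Feature("Dark mode", Priority.COULD),
                new Feature("SSO support", Priority.SHOULD),
                new Feature("Custom emoji packs", Priority.WONT));

        backlog.stream()
                .sorted(Comparator.comparing(Feature::priority))
                .forEach(f -> System.out.println(f.priority() + ": " + f.name()));
    }
}
```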
This framework helps us quickly narrow down the product backlog and focus on building features that provide genuine value to customers.

Build a Product Roadmap

If you don't already have a product development roadmap, now's the time to build one. Having a strategic plan in place will ensure that your engineering, design, and marketing efforts stay aligned with your startup objectives. We typically use the aforementioned BRIDGeS framework to generate roadmaps before and after MVP release. It lets us break down solutions into epics and nested tasks and quickly transform them into a roadmap or implementation plan.

How to Measure the Success of Your MVP

How do you know if your MVP has succeeded or failed? While there's no straightforward answer to this question, we use a combination of analytics and feedback to understand how well our product has performed.

The Importance of Analytics Dashboards

Without a product dashboard, it's virtually impossible to track, quantify, or take reasonable action on the data you are receiving. That's why, before launching an MVP, we recommend choosing your product metrics carefully and building a dashboard around them. After product launch, the dashboard becomes one of the most important tools at our disposal. It lets us examine how users are interacting with our MVP so we can make data-driven decisions when iterating on the product. The dashboard also enables us to catch changes in user behavior (e.g., a sudden increase in churn or a decrease in users activating their accounts) and investigate those issues before they escalate. Ideally, every product manager or startup founder should book time in their calendar daily or weekly to review the dashboard and gather insights.

Key Startup Metrics

When choosing startup metrics, our product managers often leverage the AARRR or "pirate metrics" framework. AARRR (which stands for Acquisition, Activation, Retention, Referral, and Revenue) is useful for checking how users are engaging with your MVP at every stage of the conversion funnel. Since an MVP isn't a full-featured product, we must adjust the conversion funnel (and the AARRR framework) to reflect this. For instance, Revenue metrics aren't relevant for all types of MVPs, i.e., those that haven't been monetized yet. Meanwhile, Referral usually comes into play at the MLP stage, since emotionally engaged customers are more likely to join referral programs. On that note, here are a few important metrics to track when measuring the success of your MVP; the list includes elements of the AARRR framework and other vital metrics:

Acquisition – The number of people who were drawn to your product via promotional efforts. A high acquisition rate indicates that people are interested in what your MVP has to offer.

Customer acquisition cost (CAC) – How much it costs to acquire a new customer. A high CAC might indicate that one or more of your promotional efforts aren't sustainable.

Activation – The number of signups your product has received. It can also refer to the number of people who have actually started using the product.

Retention – The number of users who remain active after signup. A steady retention rate indicates that user engagement is high and your MVP already brings value to customers.

Churn – The number of people who stop using your product. A low churn rate indicates that your product delivers a valuable user experience, while a high churn rate might indicate that something is missing.

MRR – Monthly recurring revenue. It's unlikely that MRR will be high at the MVP stage, since it's a fledgling product, but over time it's a good indicator of how well your product is performing on the market.
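To show how the numbers behind such a dashboard relate, here is a minimal Java sketch of a few of these metrics. The formulas are the common textbook definitions, and all figures are invented for illustration:

```java
public class MvpMetrics {

    // Share of signups still active after a given period, as a percentage.
    static double retentionRate(int activeAfterPeriod, int signups) {
        return 100.0 * activeAfterPeriod / signups;
    }

    // Share of customers lost during a period, as a percentage.
    static double churnRate(int lostCustomers, int customersAtStart) {
        return 100.0 * lostCustomers / customersAtStart;
    }

    // Customer acquisition cost: promotion spend per new customer.
    static double cac(double promotionSpend, int newCustomers) {
        return promotionSpend / newCustomers;
    }

    public static void main(String[] args) {
        System.out.printf("Retention: %.1f%%%n", retentionRate(320, 800)); // 40.0%
        System.out.printf("Churn:     %.1f%%%n", churnRate(48, 800));      // 6.0%
        System.out.printf("CAC:       $%.2f%n", cac(4_000.0, 160));        // $25.00
    }
}
```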
Feedback as a Metric

As we previously discussed, feedback is an extremely important part of MVP validation. Dashboards can only tell us so much about the overall health of our product, which is why we consider customer input an essential metric. For example, ever since we released the MVP of Mailtrap several years ago, user feedback has helped the team iterate effectively and carve out new directions for growth. Some examples of that feedback are "Would it be possible to have an email address for each testing inbox in Mailtrap?" or "Would you consider adding a way to configure hard and soft bounces?" Suggestions like these show that users truly engage with the platform and are interested in what else Mailtrap might offer. While the team has been careful not to implement every piece of feedback, they continue to pay close attention to requests from the development community, and overall, this "feedback as a metric" focus has paid off.

Avoid Vanity Metrics

One of the biggest mistakes you can make while measuring the success of your MVP is paying attention to vanity metrics, i.e., numbers that make you look good but don't reflect the true health of your product. Examples are social media followers, site impressions, number of downloads, site views, and so on. Sure, these statistics can help round out the picture of how well your MVP is performing on the market; just don't place too much faith in them.

Final Remarks

There's no secret formula for growing an MVP into a unicorn. Most of the time, startups have to rely on tried-and-tested approaches to increase the likelihood that their product will reach product-market fit. As we explained, building an MMP or MLP on top of an MVP is a strategic way to iterate on your product and grow a reliable user base. Meanwhile, promoting your MVP, systematically gathering feedback, conducting customer development, prioritizing features, and building a product development roadmap are just some of the steps you can take to boost your MVP's chances of success.

By Leonie Lacey

Top Agile Experts


Jasper Sprengers

senior developer,
Team Rockstars IT

I have been working in software since 1999, writing on whatever fascinates me about the craft, with a special interest in sensible agile practices, testing, and documentation.

Alireza Chegini

DevOps Architect / Azure Specialist,
Smartwyre

Alireza is a software engineer with more than 20 years of experience in software development. He started his career as a software developer, and in recent years he transitioned into DevOps practices. Currently, he is helping companies and organizations move away from traditional development workflows and embrace a DevOps culture. Additionally, Alireza coaches organizations as an Azure Specialist on their migration journey to the public cloud.

Dr. Srijith Sreenivasan

Director,
Schneider Electric


Martin Fowler

I am an author, speaker, and loud-mouth on the design of enterprise software. I work for ThoughtWorks, a software delivery and consulting company. @martinfowler

The Latest Agile Topics

Create and Manage Jira Sprints Like a Pro
This guide covers some theory but mostly focuses on practical tips on running Sprints in Jira. So first, let's dive into that backlog of yours!
Updated May 10, 2023
by Oleksandr Siryi
· 1,672 Views · 2 Likes
Agile Negotiations
Life is a negotiation. Why would Scrum be different? Let’s have a look at some typical agile negotiation scenarios.
Updated May 8, 2023
by Stefan Wolpers CORE
· 1,415 Views · 2 Likes
How AI Will Change Agile Project Management
AI's impact on Agile Project Management and Scrum Mastery will go from “interesting” to “total game-changer” faster than you think.
Updated May 8, 2023
by Alex Omeyer
· 2,257 Views · 1 Like
Agile Transformation — Gaining Executive Leadership Support
In Agile and Scrum, leadership participation is necessary; however, comprehending the change management that needs to occur is paramount.
Updated May 5, 2023
by Dr. Thomas Baxter
· 2,970 Views · 1 Like
Self-Management
This article presents the top ten reasons why self-management is essential for developing new products in complex environments and addressing customer needs.
Updated April 24, 2023
by Stefan Wolpers CORE
· 3,972 Views · 4 Likes
Jira Anti-Patterns
Learn why Jira anti-patterns exist and how you can counter these impediments to Agile product development.
Updated April 19, 2023
by Stefan Wolpers CORE
· 3,524 Views · 1 Like
Agile vs. Scrum
Do you know the difference between Agile and Scrum? If not, read this article to get all the information.
Updated April 19, 2023
by Deepali chadokar
· 3,250 Views · 2 Likes
Your Scrum Team Is Shouting...
As a Scrum Master to your Scrum teams, be an anchor while also being a strong speaker when necessary. Learn more about how to do that in this article.
Updated April 18, 2023
by Krishna P
· 2,572 Views · 2 Likes
Breaking the Wall Between Dev, Sec, and Ops
The steps that serve as a roadmap to enable organizations to successfully implement DevSecOps and create secure software right from the outset.
Updated April 18, 2023
by Akanksha Gupta
· 4,045 Views · 1 Like
Agile Self-Organizing Teams
This article uses an autotrophy and photosynthesis metaphor for Agile, aligning with the Agile principle that mentions the concept of self-organization.
Updated April 17, 2023
by Krishna P
· 3,648 Views · 4 Likes
The Benefits of Using Agile Software Development Methods in 2023
Agile has risen in popularity as one of the most effective ways of managing complex projects while delivering better outcomes and greater value over time.
Updated April 17, 2023
by Fawad Malik
· 2,720 Views · 1 Like
The Art of Being Ready: Reliability in Extreme Conditions
By adopting a holistic approach and utilizing the principles and practices of engineering, we can have the best of both worlds — speed, and reliability.
Updated April 13, 2023
by Eugene Retunsky
· 6,552 Views · 3 Likes
Reverse Pull Requests
This article explains how we used GitHub PRs in a trunk-based, continuous deployment development team.
Updated September 6, 2022
by Lukasz Gryzbon
· 7,291 Views · 3 Likes
Balancing Shift Left and Shift Right in Your DevOps Strategy
Both Shift Left and Shift Right are important approaches in DevOps. Here, we’ll discuss the differences between Shift Left and Shift Right.
Updated April 12, 2023
by Ruchita Varma
· 5,989 Views · 4 Likes
The Importance of Test-Driven Development in Software Development
This article will explore what TDD is, why it's important, and how to implement it in your software development process.
Updated April 12, 2023
by Anubhav Dubey
· 2,484 Views · 2 Likes
Team Mentoring and Learning Path
Continuous learning and training are vital to preparing your workforce for new challenges. It will benefit both the organization and the individual.
Updated April 11, 2023
by Lokesh Raj
· 3,291 Views · 1 Like
The Holy Grail of Agile-DevOps Value Stream Hunting: Actualizing DevOps Transition Purpose
In modern product development, understanding value streams is crucial to optimizing our ways of working and delivering value to customers.
Updated April 10, 2023
by Priya Kumari
· 6,057 Views · 3 Likes
What Will Come After Agile?
Undoubtedly, there will be something after “agile.” It will have a new name, and I guess it will have to have new “ceremonies.” However, will the Deming Cycle be replaced by what replaces agile?
Updated April 8, 2023
by Mike Harris
· 6,182 Views · 4 Likes
Data Mesh Architecture: A Paradigm Shift in Data Engineering
This article will discuss a new architectural paradigm called Data Mesh which enables more efficient and effective data engineering.
Updated April 7, 2023
by Amlan Patnaik
· 7,162 Views · 1 Like
What Is the Role of DevOps in Mobile App Development?
This article explores what role DevOps in mobile app development plays and how it helps developers and businesses.
Updated April 7, 2023
by Ishan Gupta
· 6,010 Views · 2 Likes
