Software design and architecture focus on the development decisions made to improve a system's overall structure and behavior in order to achieve essential qualities such as modifiability, availability, and security. The Zones in this category are available to help developers stay up to date on the latest software design and architecture trends and techniques.
Cloud architecture refers to how technologies and components are built in a cloud environment. A cloud environment comprises a network of servers that are located in various places globally, and each serves a specific purpose. With the growth of cloud computing and cloud-native development, modern development practices are constantly changing to adapt to this rapid evolution. This Zone offers the latest information on cloud architecture, covering topics such as builds and deployments to cloud-native environments, Kubernetes practices, cloud databases, hybrid and multi-cloud environments, cloud computing, and more!
Containers allow applications to run more quickly across many different development environments, and a single container encapsulates everything needed to run an application. Container technologies have exploded in popularity in recent years, leading to diverse use cases as well as new and unexpected challenges. This Zone offers insights into how teams can solve these challenges through its coverage of container performance, Kubernetes, testing, container orchestration, microservices usage to build and deploy containers, and more.
Integration refers to the process of combining software parts (or subsystems) into one system. An integration framework is a lightweight utility that provides libraries and standardized methods to coordinate messaging among different technologies. As software connects the world in increasingly complex ways, integration makes it all possible, facilitating app-to-app communication. Learn more about this necessity for modern software development by keeping a pulse on industry topics such as integrated development environments, API best practices, service-oriented architecture, enterprise service buses, communication architectures, integration testing, and more.
A microservices architecture is a development method for designing applications as modular services that seamlessly adapt to a highly scalable and dynamic environment. Microservices help solve complex issues such as speed and scalability, while also supporting continuous testing and delivery. This Zone will take you through breaking down the monolith step by step and designing a microservices architecture from scratch. Stay up to date on the industry's changes with topics such as container deployment, architectural design patterns, event-driven architecture, service meshes, and more.
Performance refers to how well an application conducts itself compared to an expected level of service. Today's environments are increasingly complex and typically involve loosely coupled architectures, making it difficult to pinpoint bottlenecks in your system. Whatever your performance troubles, this Zone has you covered with everything from root cause analysis, application monitoring, and log management to anomaly detection, observability, and performance testing.
The topic of security covers many different facets within the SDLC. From focusing on secure application design to designing systems to protect computers, data, and networks against potential attacks, it is clear that security should be top of mind for all developers. This Zone provides the latest information on application vulnerabilities, how to incorporate security earlier in your SDLC practices, data governance, and more.
Microservices and Containerization
According to our 2022 Microservices survey, 93% of our developer respondents work for an organization that runs microservices. This number is up from 74% when we asked this question in our 2021 Containers survey. With most organizations running microservices and leveraging containers, we no longer have to discuss the need to adopt these practices, but rather how to scale them to benefit organizations and development teams. So where do adoption and scaling practices of microservices and containers go from here? In DZone's 2022 Trend Report, Microservices and Containerization, our research and expert contributors dive into various cloud architecture practices, microservices orchestration techniques, security, and advice on design principles. The goal of this Trend Report is to explore the current state of microservices and containerized environments to help developers face the challenges of complex architectural patterns.
In today's fast-paced digital world, application performance has become critical in delivering a seamless user experience. Users expect applications to be lightning-fast and responsive, no matter the complexity of the task at hand. To meet these expectations, developers constantly look for ways to improve their applications' performance. One solution that has gained popularity in recent years is the integration of MicroStream and Redis. By combining these two cutting-edge technologies, developers can create ultrafast applications that deliver better results. In this post, we will explore the benefits of this integration and how developers can get started with this powerful combination.

MicroStream is a high-performance, in-memory persistence engine designed to improve application performance. MicroStream can store data in memory without needing a mapper or conversion process. This means that developers can work with objects directly without worrying about the mapping process, saving around 90% of the computing power that would otherwise be consumed by mapping.

One of the critical advantages of MicroStream is its speed. By storing data in memory, MicroStream allows faster read and write operations, resulting in improved application performance. MicroStream's data structure is optimized for in-memory storage, enhancing its speed and efficiency. This makes it an ideal solution for applications that require fast response times and high throughput.

Another advantage of MicroStream is its simplicity. With MicroStream, developers can work with objects directly without dealing with the complexities of SQL databases or other traditional persistence solutions. This makes development faster and more efficient, allowing developers to focus on creating great applications instead of struggling with complex data management.

MicroStream's speed, simplicity, and efficiency make it an ideal solution for modern application development. By eliminating the need for a mapper or conversion process, MicroStream saves valuable computing power and resources, resulting in significant cost savings for developers. And with its optimized data structure and in-memory storage capabilities, MicroStream delivers fast and reliable performance, making it a powerful tool for building high-performance applications. That's enough theory: let's move to the next section, where we can finally see both databases working together in an application.

Database Integration

We will explore the integration of MicroStream and Redis by creating a simple project using Jakarta EE. With the new Jakarta specifications for data and NoSQL persistence, it is now possible to combine the strengths of MicroStream and Redis in a single project. Our project will demonstrate how to use MicroStream as the persistence engine for our data and Redis as a cache for frequently accessed data. Combining these two technologies can create an ultrafast and scalable application that delivers better results. We will walk through setting up our project, configuring MicroStream and Redis, and integrating them with Jakarta EE. We will also provide tips and best practices for working with these technologies and demonstrate how they can be used to create powerful and efficient applications. Overall, this project will serve as a practical example of using MicroStream and Redis together and combining them with Jakarta EE to create high-performance applications.
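To make the "no mapper" point concrete before wiring in Redis, here is a minimal sketch of plain MicroStream usage. The Inventory root class is a hypothetical example for illustration, not part of the project below, and import paths may differ slightly between MicroStream versions; the calls themselves (EmbeddedStorage.start, store, shutdown) are the core MicroStream entry points.

Java

import one.microstream.storage.embedded.types.EmbeddedStorage;
import one.microstream.storage.embedded.types.EmbeddedStorageManager;

import java.util.ArrayList;
import java.util.List;

public class MicroStreamSketch {

    // A plain object graph used as the storage root: no annotations,
    // no mapping metadata, no schema.
    public static class Inventory {
        private final List<String> items = new ArrayList<>();

        public List<String> items() {
            return items;
        }
    }

    public static void main(String[] args) {
        Inventory root = new Inventory();

        // Start the storage manager with the object graph as its root;
        // by default, data is persisted to the local file system.
        EmbeddedStorageManager storage = EmbeddedStorage.start(root);

        root.items().add("first item");

        // Persist the changed part of the graph directly: no SQL, no mapper.
        storage.store(root.items());

        storage.shutdown();
    }
}

The same object graph survives restarts: on the next EmbeddedStorage.start(root) call, the previously stored items are loaded back into the root.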
Whether you are a seasoned developer or just starting, this project will provide valuable insights and knowledge for working with these cutting-edge technologies. The project is a Maven project where the first step is to add the following dependencies alongside CDI and MicroStream:

XML

<dependency>
    <groupId>expert.os.integration</groupId>
    <artifactId>microstream-jakarta-data</artifactId>
    <version>${microstream.data.version}</version>
</dependency>
<dependency>
    <groupId>one.microstream</groupId>
    <artifactId>microstream-afs-redis</artifactId>
    <version>${microstream.version}</version>
</dependency>

The next step is creating both the entity and the repository; in our scenario, we'll create a Book entity with Library as a repository collection.

Java

@Entity
public class Book {

    @Id
    private String isbn;

    @Column("title")
    private String title;

    @Column("year")
    private int year;

    // Constructor and getter omitted in the original listing; shown here
    // so that the sample below compiles.
    public Book(String isbn, String title, int year) {
        this.isbn = isbn;
        this.title = title;
        this.year = year;
    }

    public String getTitle() {
        return title;
    }
}

@Repository
public interface Library extends CrudRepository<Book, String> {

    List<Book> findByTitle(String title);
}

The final step before running is to create the Redis configuration, where we'll overwrite the default StorageManager to use the Redis integration. It is worth highlighting that MicroStream can integrate with several databases, such as MongoDB, Hazelcast, SQL, etc.

Java

@Alternative
@Priority(Interceptor.Priority.APPLICATION)
@ApplicationScoped
class RedisSupplier implements Supplier<StorageManager> {

    private static final String REDIS_PARAMS = "microstream.redis";

    @Override
    @Produces
    @ApplicationScoped
    public StorageManager get() {
        // Read the Redis connection string from MicroProfile Config.
        Config config = ConfigProvider.getConfig();
        String redis = config.getValue(REDIS_PARAMS, String.class);
        BlobStoreFileSystem fileSystem = BlobStoreFileSystem.New(
                RedisConnector.Caching(redis)
        );
        return EmbeddedStorage.start(fileSystem.ensureDirectoryPath("microstream_storage"));
    }

    public void close(@Disposes StorageManager manager) {
        manager.close();
    }
}

Done, we're ready to go! For this sample, we'll use simple Java SE; however, you can do the same with MicroProfile and Jakarta EE microservices.

Java

try (SeContainer container = SeContainerInitializer.newInstance().initialize()) {
    Book book = new Book("123", "Effective Java", 2002);
    Book book2 = new Book("1234", "Effective Java", 2019);
    Book book3 = new Book("1235", "Effective Java", 2022);

    Library library = container.select(Library.class).get();
    library.saveAll(List.of(book, book2, book3));

    List<Book> books = library.findByTitle(book.getTitle());
    System.out.println("The books: " + books);
    System.out.println("The size: " + books.size());
}

Conclusion

In conclusion, MicroStream integration with multiple databases is a promising approach to designing high-performance data management systems. This project explores various integration techniques to connect MicroStream with databases such as MySQL, MongoDB, Oracle, and PostgreSQL. The system will be designed and implemented using a combination of programming languages such as Java, Python, and JavaScript. The project will also provide documentation, training materials, and benchmark tests to ensure the system meets the specified requirements and delivers user value. By leveraging the power of MicroStream technology and integrating it with different databases, organizations can build robust, scalable, and efficient data management systems that can handle large amounts of data and complex data structures. This approach can give organizations a competitive edge by enabling them to process data faster, make better-informed decisions, and enhance operational efficiency.
Overall, MicroStream integration with multiple databases is a promising approach that can benefit organizations in various industries. With the right design, implementation, and testing, organizations can leverage this approach to build data management systems that meet their unique business needs and drive success. Reference: Source code
A few years in software development do not equate to a linear accumulation of knowledge and information. The same is true for the .NET ecosystem. The first release of .NET 1.0 saw the light of day in January 2002. The significance of the event was not the palindrome year. A new paradigm was offered to traditional C++ MFC, VB, and classic ASP developers, supporting two main new languages: C# and VB.NET. It started as a proprietary, closed-source Windows technology, primarily appealing to companies on the Microsoft Windows stack and tied to a single IDE, Visual Studio. While .NET was a step forward, the pace of change and the options outside the designated use were abysmal. The situation took a sharp turn in 2016 with the introduction of .NET Core. Starting from the heart of the .NET ecosystem, ASP.NET, the change spread through the entire platform, leading to a complete re-imagination and makeover of the runtime and the language. Open source, cross-platform, and free for commercial use, .NET and C# became viable options for many projects that traditionally would go with another platform and languages. From web to mobile, from desktop to backend systems, in any cloud environment, .NET is a solid and viable option with an outstanding experience and rich offerings.

Common Challenges

When Uncle Ben told Peter, "With great power comes great responsibility" in the 2002 Spiderman movie, he was not referring to the newly emerging .NET platform. The phrase could apply to the challenges any developer on any platform using any language will eventually face, and .NET is not an exception. Here are just a few things that can go wrong.

Cross-platform software development often takes place on a single platform, whether that is Windows, Linux, or macOS. The software then needs to be deployed and executed on a platform that might not be a one-to-one match with the development or testing environment. And while technologies such as Docker containers have made it simpler to "ship your development environment to production," it is still not a bulletproof solution. The differences between environments still pose a risk of environmental discrepancies, leading to bugs.

Cloud environments pose even more significant challenges, with infrastructure and code executed remotely in an environment outside our reach and control, be it Azure App Service, AWS Fargate, or GCP Cloud Functions. These services provide foundational troubleshooting but cannot cater to the specifics of the application and its use cases, usually requiring additional intrusive services for troubleshooting.

Troubleshooting Options

.NET developers are offered a few options to troubleshoot problems in production .NET-based systems. Here are a few of those:

Crash dumps. Analyzing crash dumps is a skill that only a few possess. A more significant challenge is that the analysis can pinpoint why the code crashed but not what led to that critical moment.

Metrics and counters. Emitting metrics and counter values from the code, collected and visualized, allows better insights into the remotely executing code. But it lacks the necessary dials and knobs to dynamically adjust the scope of focus. For example, emitting the value of one or more specific variables within a particular method is not an option.

Logs. Logging is one of the oldest and most common techniques to help the developer identify issues in the running code.
The code is decorated with additional instructions that emit information into a sink, such as a file, a remote service, or any other destination where the logs are retained for a defined period. This option provides a lot of information, but it also has drawbacks. Unnecessary amounts of data irrelevant to the outage or the bug being investigated are stored and muddy the water. In a scaled-out solution, these logs are multiplied. And when consolidated into a single service, such as Application Insights, they require accurate filtering to separate the wheat from the chaff, not to mention the price tag associated with processing and storing those logs. But one of the most significant drawbacks of static logging is the inability to adjust what is logged and the duration of the logging session.

Snapshot debugging. The ability to debug remotely executed code within the IDE. Upon exceptions being thrown, a debug snapshot is collected from the running application and sent to Application Insights. The stored information includes the stack trace. In addition, a minidump can be obtained in Visual Studio to enhance visibility into why an exception occurred in the first place. But this is still a reactive solution to a problem that requires a more proactive approach.

Dynamic Logging and Observability

Meet dynamic logging with Lightrun. With dynamic logging, the code does not have to be instrumented at every spot; instead, we gain visibility at run time. The code is left as-is. The magic ingredient is the agent enabling dynamic logging and observability, added to the solution in the form of a NuGet package. Once included, it is initialized once in your application's startup code. And that is it. From there, the Lightrun agent takes care of everything else. That means connecting to your application at any time to retrieve logs on an ad hoc basis, without unnecessary code modification/instrumentation or going through the rigorous code change approval process, testing, and deployment. Logs, metrics, and snapshots can be collected and presented on demand, all without racking up the substantial bill otherwise incurred with static logging and metrics storage, and without leaving the comfort of the .NET IDE of your choice — VS Code, Rider, or Visual Studio (coming soon). The results are immediately available in the IDE or streamed to your observability services such as Sentry, New Relic, DataDog, and many other services in the application performance monitoring space.

Summary

.NET offers a range of software development options with rich support for libraries and platforms. With extra help from Lightrun, a symbiotic relationship takes code troubleshooting to the next level, where investigating and resolving code deficiencies does not have to be a lengthy and costly saga.
Microservices architecture has become increasingly popular in recent years due to its ability to enable flexibility, scalability, and rapid deployment of applications. However, designing and implementing microservices can be complex, and it requires careful planning and architecture to ensure the success of the system. This is where design patterns for microservices come in. Design patterns provide a proven solution to common problems in software architecture. They help to establish best practices and guidelines for designing and implementing microservices, making it easier to create scalable and maintainable systems.

In this article, we will focus on three design patterns for microservices: Ambassador, Anti-Corruption Layer, and Backends for Frontends. We will discuss their definitions, implementation, advantages, and disadvantages, as well as their use cases. By the end of this article, you will have a solid understanding of these three design patterns, as well as other popular patterns for microservices. Additionally, we will provide best practices and guidelines for designing microservices with these patterns, along with common pitfalls to avoid. So, let's dive in and explore the world of design patterns for microservices.

Ambassador Pattern

Definition and Purpose of the Ambassador Pattern

The Ambassador Pattern is a design pattern for microservices that allows communication between microservices while minimizing the complexity of that communication. The pattern involves the use of a separate service, called the "ambassador," that sits between the client and the microservices. The ambassador is responsible for handling the communication between the client and the microservices, which reduces the complexity of the client's requests.

The main purpose of the Ambassador Pattern is to reduce the complexity of communication between microservices. This pattern is particularly useful when microservices use different protocols or when microservices are updated at different times, as it allows for flexibility in communication without requiring changes to the microservices themselves. Another key benefit of the Ambassador Pattern is that it can help to improve the reliability and scalability of microservices. By separating the communication responsibilities from the microservices, the pattern allows for better fault tolerance and easier scaling of individual services.

In summary, the Ambassador Pattern is a design pattern that uses a separate service to handle communication between microservices and clients. Its main purpose is to reduce the complexity of communication and improve the reliability and scalability of microservices.

Implementation of the Ambassador Pattern

To implement the Ambassador Pattern, you must create an ambassador service between the client and the microservices. The ambassador service will act as a proxy for the microservices, handling communication and translating requests between the client and the microservices. The following steps can be followed to implement the Ambassador Pattern:

Define the APIs: First, define the APIs that will be exposed to the client and implemented by the microservices. This will ensure that the client and the microservices agree on the data formats and the communication protocols.

Create the Ambassador Service: The Ambassador Service should be a separate service that handles communication between the client and the microservices.
This service should be responsible for routing requests from the client to the appropriate microservice and translating the data formats, if necessary.

Deploy the Ambassador Service: Once the Ambassador Service is created, it should be deployed to a separate container or server that is easily accessible to the client and the microservices.

Route Requests: The Ambassador Service should route requests from the client to the appropriate microservice. This can be done using various techniques, such as routing based on the request URL or using a service registry to locate the microservice.

Translate Data Formats: If the microservices use different data formats, the Ambassador Service should translate the data formats as necessary. This can be done using a data transformation tool or by writing custom code.

Implement Fault Tolerance: The Ambassador Service should implement fault tolerance to ensure that it can handle failures in the microservices or the client. This can be done using techniques such as retries, circuit breakers, or fallbacks.

In summary, implementing the Ambassador Pattern involves creating a separate service that acts as a proxy for the microservices and handles communication between the client and the microservices. The Ambassador Service should route requests, translate data formats, and implement fault tolerance to ensure the reliability and scalability of the microservices.

Advantages and Disadvantages of the Ambassador Pattern

Like any design pattern, the Ambassador Pattern has its advantages and disadvantages. Understanding these pros and cons can help you determine whether the pattern is a good fit for your microservices architecture.

Advantages:

Reduced Complexity: The Ambassador Pattern reduces the complexity of communication between microservices by providing a separate service to handle the communication.

Protocol Agnostic: The Ambassador Pattern can be used with any protocol, allowing microservices to use different protocols as needed.

Better Reliability: The Ambassador Pattern can improve the reliability of microservices by providing fault tolerance and better handling of failures.

Easier Scalability: The Ambassador Pattern makes it easier to scale individual microservices, as the communication between them is handled by a separate service.

Disadvantages:

Added Complexity: While the Ambassador Pattern reduces the complexity of communication between microservices, it does add an extra layer of complexity to the overall architecture.

Performance Overhead: The use of an additional service can result in a performance overhead for the microservices architecture.

Additional Management: The Ambassador Service requires additional management and maintenance, which can increase the overall cost and complexity of the microservices architecture.

Potential Single Point of Failure: The Ambassador Service can become a single point of failure for the microservices architecture, which can have significant impacts on the system's reliability.

In summary, the Ambassador Pattern offers several advantages, such as reduced complexity, protocol agnosticism, and better reliability and scalability. However, it also has some disadvantages, such as added complexity, potential performance overhead, additional management, and the potential for a single point of failure. Understanding these pros and cons can help you determine whether the pattern is a good fit for your microservices architecture.
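To make the pattern concrete, here is a minimal, hand-rolled ambassador sketch in Java, using only JDK classes. It assumes a hypothetical downstream microservice at localhost:9090; it forwards client requests and adds naive retry-based fault tolerance. In production this role is more commonly filled by a sidecar proxy such as Envoy, so treat this as an illustration of the idea rather than a reference implementation.

Java

import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AmbassadorService {

    // Hypothetical downstream microservice; in practice this would come
    // from configuration or a service registry.
    private static final String BACKEND = "http://localhost:9090";
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // Route every client request through the ambassador.
        server.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            int status;
            String body;
            try {
                HttpResponse<String> response = callWithRetry(BACKEND + path, 3);
                status = response.statusCode();
                body = response.body();
            } catch (Exception e) {
                // Fallback: shield the client from downstream failures.
                status = 503;
                body = "backend unavailable";
            }
            byte[] bytes = body.getBytes();
            exchange.sendResponseHeaders(status, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });
        server.start();
    }

    // Naive fault tolerance: retry the downstream call a few times.
    private static HttpResponse<String> callWithRetry(String url, int attempts)
            throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
                return CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            } catch (Exception e) {
                last = e;
            }
        }
        throw last;
    }
}

The key point is the separation of concerns: the client talks only to the ambassador on port 8080, and the retry policy, fallbacks, and routing decisions live in one place instead of being duplicated in every client.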
Use Cases for the Ambassador Pattern

The Ambassador Pattern can be used in a variety of scenarios within a microservices architecture. Here are some common use cases:

Service Discovery: The Ambassador Pattern can be used to help microservices discover each other in a decentralized and scalable manner. The Ambassador Service can act as a service registry, allowing microservices to register themselves and discover other services.

Protocol Translation: The Ambassador Pattern can be used to translate between different protocols used by microservices. The Ambassador Service can act as a protocol translator, allowing microservices to use different protocols as needed.

Load Balancing: The Ambassador Pattern can be used to balance the load across multiple instances of a microservice. The Ambassador Service can route requests to the least busy instance of a microservice, improving performance and reliability.

Security: The Ambassador Pattern can be used to implement security measures such as authentication and authorization. The Ambassador Service can act as a security gateway, enforcing security policies and protecting microservices from unauthorized access.

API Management: The Ambassador Pattern can be used to manage APIs exposed by microservices. The Ambassador Service can act as an API gateway, providing a single entry point for clients to access microservices APIs and enforcing API policies such as rate limiting and request throttling.

In summary, the Ambassador Pattern can be used in a variety of scenarios within a microservices architecture, including service discovery, protocol translation, load balancing, security, and API management. Understanding these use cases can help you determine whether the pattern is a good fit for your microservices architecture.

Anti-Corruption Layer Pattern

Definition and Purpose of the Anti-Corruption Layer Pattern

The Anti-Corruption Layer (ACL) Pattern is a design pattern used to isolate and insulate a microservice from another service with a different domain model or communication protocol. This pattern is used to prevent the spread of "corruption" from one service to another, where corruption refers to the introduction of foreign concepts or terminology into a microservice's domain model.

The purpose of the Anti-Corruption Layer Pattern is to provide a translation layer between microservices that have different domain models or communication protocols. This layer allows microservices to communicate with each other without having to understand each other's domain models, reducing complexity and increasing maintainability. The Anti-Corruption Layer Pattern is especially useful in large-scale microservices architectures where services may be developed by different teams using different technologies and domain models. By providing a translation layer, the ACL pattern allows these services to communicate with each other without having to be modified or refactored to accommodate each other's domain models.

In summary, the Anti-Corruption Layer Pattern is a design pattern used to isolate and insulate a microservice from another service with a different domain model or communication protocol. The purpose of the ACL pattern is to provide a translation layer between microservices that have different domain models or communication protocols, reducing complexity and increasing maintainability.
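To give a flavor of what such a translation layer can look like before walking through the implementation steps, here is a minimal Java sketch. The LegacyCustomerDto and its field names are hypothetical stand-ins for an upstream system's model; the translator is the only code that knows both models, so legacy naming and formats never leak into the consuming service's domain.

Java

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Shape of the data as the legacy/upstream service exposes it
// (hypothetical example).
record LegacyCustomerDto(String cust_nm, String dob_ddmmyyyy) {}

// The consuming microservice's own domain model.
record Customer(String name, LocalDate dateOfBirth) {}

// The anti-corruption layer: the only place that knows both models.
class CustomerTranslator {

    private static final DateTimeFormatter LEGACY_FORMAT =
            DateTimeFormatter.ofPattern("ddMMyyyy");

    Customer toDomain(LegacyCustomerDto dto) {
        // Translate legacy names and formats into the domain's terms,
        // validating on the way in.
        if (dto.cust_nm() == null || dto.cust_nm().isBlank()) {
            throw new IllegalArgumentException("customer name is required");
        }
        return new Customer(
                dto.cust_nm().trim(),
                LocalDate.parse(dto.dob_ddmmyyyy(), LEGACY_FORMAT));
    }
}

Everything downstream of CustomerTranslator works purely with the Customer domain type, which is exactly the isolation the pattern is after.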
Implementation of the Anti-Corruption Layer Pattern

The implementation of the Anti-Corruption Layer Pattern involves creating an intermediary layer between two microservices that have different domain models or communication protocols. This layer acts as a translator, allowing the two microservices to communicate with each other without having to understand each other's domain models. To implement the Anti-Corruption Layer Pattern, you can follow these steps:

Identify the services that need to communicate with each other but have different domain models or communication protocols.

Create an intermediary layer between the two services. This layer can be implemented as a separate microservice, a library, or a set of classes within a microservice.

Define a translation layer within the intermediary layer that maps data between the two services. This translation layer should be designed to isolate the microservices from each other, preventing corruption from spreading between them.

Implement the translation layer using a combination of techniques such as data mapping, data transformation, and data validation.

Test the intermediary layer to ensure that it is functioning as expected and handling all possible scenarios.

Deploy the intermediary layer and configure the microservices to use it for communication.

By following these steps, you can successfully implement the Anti-Corruption Layer Pattern in your microservices architecture. The ACL pattern can be implemented in various ways, and the specific implementation details may vary depending on your use case and the technologies you're using.

In summary, implementing the Anti-Corruption Layer Pattern involves creating an intermediary layer that acts as a translator between microservices with different domain models or communication protocols. To implement the ACL pattern, you need to identify the services that need to communicate, create an intermediary layer, define a translation layer, implement the translation layer, test the intermediary layer, and deploy and configure the microservices to use it.

Advantages and Disadvantages of the Anti-Corruption Layer Pattern

Like any design pattern, the Anti-Corruption Layer Pattern has its advantages and disadvantages. Here are some of them:

Advantages:

Isolation: The ACL pattern provides a way to isolate microservices from each other, preventing the spread of corruption between them. This helps to maintain the integrity of each microservice's domain model.

Flexibility: The ACL pattern provides a way to integrate microservices with different domain models or communication protocols. This flexibility allows you to develop microservices using different technologies and frameworks.

Maintainability: By isolating microservices from each other, the ACL pattern makes it easier to maintain each microservice independently. This reduces the risk of unintended changes and makes it easier to update or replace microservices.

Scalability: The ACL pattern can help to improve the scalability of your microservices architecture by allowing you to add or remove microservices without affecting the rest of the system.

Disadvantages:

Complexity: The ACL pattern adds an additional layer of complexity to your microservices architecture. This can make it harder to understand and maintain, especially if the translation layer is not well-designed.

Performance: The ACL pattern can introduce additional latency and overhead to your microservices communication, especially if the translation layer requires significant processing.
Development Time: Implementing the ACL pattern requires additional development time and effort to design, implement, and test the intermediary layer.

Additional Infrastructure: The ACL pattern may require additional infrastructure to support the intermediary layer, which can add to the cost and complexity of your microservices architecture.

In summary, the Anti-Corruption Layer Pattern has several advantages, including isolation, flexibility, maintainability, and scalability. However, it also has several disadvantages, including complexity, performance, development time, and additional infrastructure requirements. When deciding whether to use the ACL pattern, you should carefully consider these factors and determine if the benefits outweigh the costs in your specific use case.

Use Cases for the Anti-Corruption Layer Pattern

The Anti-Corruption Layer Pattern can be useful in a variety of scenarios where microservices need to communicate with each other but have different domain models or communication protocols. Here are some examples of use cases for the ACL pattern:

Legacy Systems Integration: When integrating microservices with legacy systems, it's common to encounter different domain models and communication protocols. The ACL pattern can be used to translate data between the microservices and the legacy systems, allowing them to communicate without having to understand each other's models.

Multi-Tenant Applications: In multi-tenant applications, different tenants may have different data models or requirements. The ACL pattern can be used to translate data between the microservices and the tenants, allowing them to communicate with each other without affecting each other's data models.

Service Reusability: In some cases, a microservice may need to be reused in different contexts with different domain models or communication protocols. The ACL pattern can be used to isolate the microservice from the different contexts, allowing it to be reused without modification.

System Migration: When migrating from one system to another, it's common to encounter different domain models and communication protocols. The ACL pattern can be used to translate data between the old and new systems, allowing them to communicate during the migration process.

Vendor Integration: When integrating with third-party services or vendors, it's common to encounter different domain models and communication protocols. The ACL pattern can be used to translate data between the microservices and the vendors, allowing them to communicate without having to understand each other's models.

These are just a few examples of use cases for the Anti-Corruption Layer Pattern. In general, the ACL pattern is useful in any scenario where microservices need to communicate with each other but have different domain models or communication protocols. By using the ACL pattern, you can achieve greater flexibility, maintainability, and scalability in your microservices architecture.

Backends for Frontends Pattern

Definition and Purpose of the Backends for Frontends Pattern

The Backends for Frontends (BFF) Pattern is a microservices architecture pattern that involves creating multiple backends to serve different client applications, such as web or mobile applications. Each backend is tailored to a specific client application's needs, providing optimized data and functionality for that application. The purpose of the BFF pattern is to improve the performance and user experience of client applications by providing them with optimized backends.
By tailoring the backends to the needs of each client application, you can ensure that the application has access to the data and functionality it needs without having to make multiple requests or process unnecessary data. This can lead to faster load times, reduced latency, and a more responsive user experience.

Another benefit of the BFF pattern is that it allows you to maintain the separation of concerns between client applications and microservices. Instead of exposing the entire microservices architecture to each client application, you can create tailored backends that provide only the necessary data and functionality. This can help to improve security and reduce the risk of unauthorized access to sensitive data.

Overall, the BFF pattern can be a valuable tool for improving the performance and user experience of client applications while maintaining the separation of concerns and improving security.

Implementation of the Backends for Frontends Pattern

To implement the BFF pattern, you will need to create multiple backends that are tailored to the needs of each client application. Here are the key steps involved in implementing the BFF pattern:

Identify the client applications: The first step is to identify the client applications that will be using your microservices architecture. For each client application, you will need to create a tailored backend that provides the necessary data and functionality.

Define the backend APIs: Once you have identified the client applications, you will need to define the APIs for each backend. The APIs should be designed to provide the necessary data and functionality for the specific client application.

Implement the backend services: Once you have defined the APIs, you will need to implement the backend services that provide the necessary data and functionality. Each backend service should be designed to provide the specific data and functionality required by the client application.

Implement the BFF layer: The BFF layer is a middleware layer that sits between the client applications and the backend services. Its role is to receive requests from the client application, process them, and forward them to the appropriate backend service. The BFF layer should be designed to provide the necessary data and functionality to the client application without exposing the entire microservices architecture.

Deploy and test the BFF layer: Once you have implemented the BFF layer, you will need to deploy it and test it to ensure that it is working as expected. You should test each backend service individually, as well as test the entire BFF layer in conjunction with the client application.

By following these steps, you can implement the BFF pattern and create tailored backends that provide the necessary data and functionality for each client application. This can help to improve performance and user experience while maintaining separation of concerns and improving security.

Advantages and Disadvantages of the Backends for Frontends Pattern

Advantages:

Tailored backends: By creating tailored backends for each client application, you can ensure that the application has access to the data and functionality it needs without having to make multiple requests or process unnecessary data. This can lead to faster load times, reduced latency, and a more responsive user experience.

Separation of concerns: The BFF pattern allows you to maintain the separation of concerns between client applications and microservices.
Instead of exposing the entire microservices architecture to each client application, you can create tailored backends that provide only the necessary data and functionality. This can help to improve security and reduce the risk of unauthorized access to sensitive data.

Improved performance: By providing optimized backends, you can improve the performance and user experience of client applications. The BFF pattern can help to reduce load times, minimize network traffic, and improve response times.

Scalability: The BFF pattern can be easily scaled horizontally by adding additional instances of the BFF layer. This can help to ensure that the client applications have access to the necessary resources, even during periods of high traffic.

Disadvantages:

Increased complexity: The BFF pattern can increase the complexity of the microservices architecture, as it involves creating multiple tailored backends and a middleware layer. This can make the architecture harder to manage and maintain, particularly if there are multiple client applications.

Increased development time: Creating tailored backends for each client application can be time-consuming and can require additional development resources. This can increase the development time and cost of the microservices architecture.

Increased testing requirements: With multiple backends and a middleware layer, testing requirements can be more complex and time-consuming. Each backend service and the BFF layer will need to be tested individually and in conjunction with the client applications.

Increased infrastructure requirements: Creating multiple backends and a middleware layer can require additional infrastructure resources, such as servers and databases. This can increase the infrastructure requirements and cost of the microservices architecture.

Overall, the Backends for Frontends pattern can provide significant benefits for improving the performance and user experience of client applications while maintaining separation of concerns and improving security. However, it can also introduce additional complexity, development time, testing requirements, and infrastructure requirements, which should be carefully considered before implementing the pattern.

Use Cases for the Backends for Frontends Pattern

The Backends for Frontends pattern can be particularly useful in microservices architectures that have multiple client applications with different requirements for data and functionality. Here are some use cases for the BFF pattern:

Mobile applications: Mobile applications often have specific requirements for data and functionality, such as optimized performance and reduced data usage. By creating tailored backends for each mobile application, you can provide optimized access to the necessary data and functionality while maintaining security and separation of concerns. Additionally, the BFF pattern can help to reduce the load on mobile devices, as the backend can handle processing and optimization tasks.

Web applications: Web applications can also have specific requirements for data and functionality, such as personalized content and real-time updates. By creating tailored backends for each web application, you can provide access to the necessary data and functionality while maintaining security and separation of concerns. Additionally, the BFF pattern can help to reduce the load on the web browser, as the backend can handle processing and optimization tasks.
Third-party integrations: Microservices architectures often need to integrate with third-party services, such as payment gateways and social media platforms. By creating tailored backends for each third-party integration, you can provide access to the necessary data and functionality while maintaining security and separation of concerns. Additionally, the BFF pattern can help to reduce the risk of exposing the entire microservices architecture to third-party services.

IoT applications: IoT applications often have specific requirements for data and functionality, such as real-time data processing and device management. By creating tailored backends for each IoT application, you can provide access to the necessary data and functionality while maintaining security and separation of concerns. Additionally, the BFF pattern can help to reduce the load on IoT devices, as the backend can handle processing and optimization tasks.

Overall, the Backends for Frontends pattern can be a useful approach for creating tailored backends for different client applications and use cases. By maintaining the separation of concerns and providing optimized access to the necessary data and functionality, the BFF pattern can help to improve the performance, scalability, and security of microservices architectures.

Other Design Patterns for Microservices

In addition to the Ambassador, Anti-Corruption Layer, and Backends for Frontends patterns, there are several other design patterns that are commonly used in microservices architectures. Here's a brief overview of some of these patterns:

Circuit Breaker: The Circuit Breaker pattern is used to prevent cascading failures in distributed systems. It works by monitoring the availability of a service and, if it detects a failure, temporarily halting requests to that service. This allows the system to recover and prevents it from being overwhelmed with failed requests. (A minimal sketch follows after this list.)

Service Registry: The Service Registry pattern is used to keep track of available services in a microservices architecture. It works by having each service register itself with a central registry, which can then be used to look up and locate services as needed. This allows for more flexible and dynamic communication between services.

API Gateway: The API Gateway pattern provides a single point of entry for client applications to access a microservices architecture. It works by routing requests from client applications to the appropriate services and can also handle authentication, caching, and other cross-cutting concerns.

Saga: The Saga pattern manages distributed transactions in microservices architectures. It works by breaking up a transaction into a series of smaller, local transactions that are managed by each individual service. If any part of the transaction fails, the Saga can be used to roll back or compensate for the changes made by the other services.

Event Sourcing: The Event Sourcing pattern is used to capture and store all changes made to a system as a sequence of events. It works by recording each change as an event, which can then be used to reconstruct the state of the system at any point in time. This can be useful for auditing, debugging, and replaying events.

These are just a few examples of the many design patterns that are used in microservices architectures. Each pattern has its own strengths and weaknesses, and the appropriate pattern(s) to use will depend on the specific requirements and constraints of your system.
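To illustrate the Circuit Breaker pattern mentioned above, here is a minimal, hand-rolled sketch of its state machine in Java (closed, open, half-open). It is an illustration only; production systems would typically reach for a library such as Resilience4j rather than rolling their own.

Java

import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class CircuitBreaker {

    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final Duration openTimeout;

    private State state = State.CLOSED;
    private int failures = 0;
    private Instant openedAt;

    public CircuitBreaker(int failureThreshold, Duration openTimeout) {
        this.failureThreshold = failureThreshold;
        this.openTimeout = openTimeout;
    }

    public synchronized <T> T call(Supplier<T> action) {
        if (state == State.OPEN) {
            // After the timeout, allow a single trial call through.
            if (Instant.now().isAfter(openedAt.plus(openTimeout))) {
                state = State.HALF_OPEN;
            } else {
                throw new IllegalStateException("circuit is open; failing fast");
            }
        }
        try {
            T result = action.get();
            // Success: reset the breaker.
            failures = 0;
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            failures++;
            // Trip the breaker on repeated failures or a failed trial call.
            if (state == State.HALF_OPEN || failures >= failureThreshold) {
                state = State.OPEN;
                openedAt = Instant.now();
            }
            throw e;
        }
    }
}

A caller wraps each downstream invocation, for example breaker.call(() -> client.fetchOrders()), where client.fetchOrders() is a hypothetical remote call, so a failing service gets time to recover instead of being hammered with further requests.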
Tips and Guidelines for Effectively Designing Microservices Using Design Patterns

Designing a microservices architecture using design patterns can be complex, but there are several tips and guidelines that can help ensure a successful implementation. Here are some best practices to keep in mind:

Choose the right pattern(s) for your system: Each design pattern has its own strengths and weaknesses, so it's important to choose the pattern(s) that best fit the specific requirements and constraints of your system. Be sure to consider factors such as scalability, maintainability, and performance when selecting patterns.

Use a consistent set of patterns: Using a consistent set of design patterns throughout your system can help make it easier to understand and maintain. This can also help with testing and debugging, as issues in one part of the system can often be traced to patterns used elsewhere.

Use automated testing and monitoring: Automated testing and monitoring are crucial for ensuring the reliability and performance of a microservices architecture. Be sure to test each service and the system as a whole, and use monitoring tools to track performance and detect issues in real time.

Avoid tightly-coupled services: Tightly-coupled services can be difficult to maintain and scale and can lead to cascading failures. Instead, use design patterns such as the Anti-Corruption Layer and Circuit Breaker to help decouple services and prevent failures from spreading.

Design for resilience: Microservices architectures are inherently distributed and complex, so it's important to design for resilience. Use patterns such as the Circuit Breaker and Saga to help manage failures and design services to be resilient to network latency and other issues.

Ensure security and privacy: Microservices architectures can create security and privacy challenges, so it's important to ensure that each service is secure and that sensitive data is protected. Use patterns such as the API Gateway and Access Token to help control access to services, and ensure that each service follows secure coding practices.

By following these tips and guidelines, you can help ensure that your microservices architecture is reliable, scalable, and secure.

Common Pitfalls to Avoid

While design patterns can help address many of the challenges of designing microservices architectures, there are also several common pitfalls to be aware of. Here are some pitfalls to avoid:

Over-engineering: It's easy to fall into the trap of over-engineering a microservices architecture by using too many patterns or building overly complex services. This can lead to reduced performance, increased maintenance costs, and a higher risk of failure.

Underestimating testing and monitoring: Testing and monitoring are critical for ensuring the reliability and performance of a microservices architecture, but they can be time-consuming and complex. Don't underestimate the effort required for testing and monitoring, and be sure to use automated tools to help manage these tasks.

Ignoring security and privacy: Security and privacy should be a top priority when designing microservices architectures, but they are often overlooked. Be sure to design services to be secure by default and use patterns such as the API Gateway and Access Token to help manage access to services.
Failing to consider non-functional requirements: Non-functional requirements, such as scalability, maintainability, and performance, are just as important as functional requirements when designing microservices architectures. Be sure to consider these requirements when choosing design patterns and designing services.

Choosing patterns based on hype: There are many design patterns available for microservices architectures, and it can be tempting to choose patterns based on hype or popularity. However, it's important to choose patterns based on the specific requirements and constraints of your system rather than blindly following trends.

By being aware of these common pitfalls, you can help ensure that your microservices architecture is well-designed, reliable, and secure.

Summary of Key Takeaways

In this article, we have explored several key design patterns for microservices, including the Ambassador Pattern, Anti-Corruption Layer Pattern, and Backends for Frontends Pattern. Here are some of the key takeaways from our discussion:

Design patterns can help address common challenges in microservices architectures, such as service discovery, communication, and scalability.

The Ambassador Pattern can be used to provide a proxy for service discovery and to add additional functionality to services, while the Anti-Corruption Layer Pattern can help isolate services from external dependencies and legacy systems.

The Backends for Frontends Pattern can be used to provide customized APIs for different types of clients and to simplify communication between services.

While design patterns can offer many advantages, they also have some disadvantages, such as increased complexity and potential performance issues.

When designing microservices architectures, it's important to consider non-functional requirements, such as scalability, maintainability, and security, and to avoid common pitfalls such as over-engineering and failing to consider testing and monitoring.

Choosing the right design patterns depends on the specific requirements and constraints of your system and should not be based solely on hype or popularity.

By keeping these key takeaways in mind, you can effectively design microservices architectures that are scalable, maintainable, and secure.

Final Thoughts on the Importance of Design Patterns for Microservices

In today's fast-paced software development world, microservices have emerged as a popular approach to building scalable, maintainable, and flexible software systems. However, implementing a microservices architecture can be challenging, particularly as the number of services and interdependencies increases. This is where design patterns can play a crucial role. By applying design patterns, you can address common challenges in microservices architectures and make your system more robust, scalable, and maintainable. Moreover, design patterns can provide a common language for developers to communicate and share best practices.

However, it's important to remember that design patterns are not a silver bullet and should be applied judiciously based on the specific requirements and constraints of your system. In addition, design patterns should be adapted and customized to suit your unique needs rather than blindly applied without consideration of the larger context. In conclusion, design patterns can be a valuable tool for designing and implementing microservices architectures. Still, they should not be seen as a replacement for sound architectural principles and good engineering practices.
By combining design patterns with solid engineering practices, you can build microservices architectures that are robust, scalable, and maintainable and that can evolve over time to meet changing business needs.
Using WireMock for integration testing of Spring-based (micro)services can be hugely valuable. However, it usually requires significant effort to write and maintain the stubs needed for WireMock to take a real service’s place in tests. What if generating WireMock stubs was as easy as adding @GenerateWireMockStub to your controller? Like this:

Kotlin

@GenerateWireMockStub
@RestController
class MyController {

    @GetMapping("/resource")
    fun getData() = MyServerResponse(id = "someId", message = "message")
}

What if that meant that you then just instantiate your producer’s controller stub in consumer-side tests…

Kotlin

val myControllerStub = MyControllerStub()

Stub the response…

Kotlin

myControllerStub.getData(MyServerResponse("id", "message"))

And verify calls to it with no extra effort?

Kotlin

myControllerStub.verifyGetData()

Surely, it couldn’t be that easy?! Before I explain the framework that does this, let’s first look at the various approaches to creating WireMock stubs.

The Standard Approach

While working on a number of projects, I observed that the writing of WireMock stubs most commonly happens on the consumer side. What I mean by this is that the project that consumes the API contains the stub setup code required to run tests. The benefit of this approach is that it's easy to implement. There is nothing else the consuming project needs to do. Just import the stubs into the WireMock server in tests, and the job is done.

However, there are also some significant downsides to this approach. For example, what if the API changes? What if the resource mapping changes? In most cases, the tests for the service will still pass, and the project may get deployed only to fail to actually use the API — hopefully during the build’s automated integration or end-to-end tests. Limited visibility of the API can lead to incomplete stub definitions as well.

Another downside of this approach is the duplicated maintenance effort — in the worst-case scenario, each client ends up updating the same stub definitions. Leakage of API-specific information, in particular sensitive information, from the producer to the consumer leads to the consumers being aware of API characteristics they shouldn’t be: for example, the endpoint mappings or, sometimes even worse, API security keys. Maintaining stubs on the client side can also lead to increased test setup complexity.

The Less Common Approach

A more sophisticated approach that addresses some of the above disadvantages is to make the producer of the API responsible for providing the stubs. So, how does it work when the stubs live on the producer side? In a poly-repo environment, where each microservice has its own repository, this means the producer generates an artifact containing the stubs and publishes it to a common repository (e.g., Nexus) so that the clients can import it and use it. In a mono-repo, the dependencies on the stubs may not require the artifacts to be published in this way, but this will depend on how your project is set up.

The stub source code is written manually and subsequently published to a repository as a JAR file
The client imports the JAR as a dependency and downloads it from the repository
Depending on what is in the JAR, the test loads the stubs directly into WireMock or instantiates the dynamic stub (see the next section for details) and uses it to set up WireMock stubs and verify the calls

This approach improves the accuracy of the stubs and removes the duplicated effort problem since there is only one set of stubs maintained.
There is no issue with visibility either, since the stubs are written with full access to the API definition, which ensures better understanding. Consistency is ensured by the consumers always loading the latest version of the published stubs every time the tests are executed.

However, preparing stubs manually on the producer's side can also have its own shortcomings. It tends to be quite laborious and time-consuming. As with any handwritten code intended to be used by third parties, it should be tested, which adds even more effort to the development and maintenance. Another problem that may occur is a consistency issue. Different developers may write the stubs in different ways, which may mean different ways of using the stubs. This slows development down when developers maintaining different services need to first learn how the stubs have been written, in the worst-case scenario, uniquely for each service.

Also, when writing stubs on the consumer's side, all that is required is stubs for the specific parts of the API that the consumer actually uses. But providing them on the producer's side means preparing all of them for the entire API as soon as the API is ready, which is great for the client but not so great for the provider.

Overall, writing stubs on the provider side has several advantages over the client-side approach. For example, if the stub publishing and API testing are well integrated into the CI pipeline, it can serve as a simpler version of Consumer-Driven Contracts, but it is also important to consider the possible implications, like the requirement for the producer to keep the stubs in sync with the API.

Dynamic Stubbing

Some developers may define stubs statically in the form of JSON. This is additional maintenance. Alternatively, you can create helper classes that introduce a layer of abstraction — an interface that determines what stubbing is possible. Usually, they are written in one of the higher-level languages like Java/Kotlin. Such stub helpers enable the clients to set up stubs within the constraints set out by the author. Usually, it means using various values of various types. Hence, I call them dynamic stubs for short.

An example of such a dynamic stub could be a function with a signature along the lines of:

Kotlin

fun get(url: String, response: String)

One could expect that such a method could be called like this:

Kotlin

get(url = "/someResource", response = "{ \"key\": \"value\" }")

And a potential implementation using the WireMock Java library:

Kotlin

fun get(url: String, response: String) {
    stubFor(get(urlPathEqualTo(url))
        .willReturn(aResponse().withBody(response)))
}

Such dynamic stubs provide a foundation for the solution described below.

Auto-Generating Dynamic WireMock Stubs

I have been working predominantly in the Java/Kotlin Spring environment, which relies on the Spring MVC library to support HTTP endpoints. The newer versions of the library provide the @RestController annotation to mark classes as REST endpoint providers. It's these endpoints that I tend to stub most often using the above-described dynamic approach. I came to the realization that the dynamic stubs should provide only as much functionality as set out by the definition of the endpoints. For example, if a controller defines a GET endpoint with a query parameter and a resource name, the code enabling you to dynamically stub the endpoint should only allow the client to set the value of the parameter, the HTTP status code, and the body of the response.
There is no point in stubbing a POST method on that endpoint if the API doesn't provide it. With that in mind, I believed there was an opportunity to automate the generation of the dynamic stubs by analyzing the definitions of the endpoints described in the controllers. Obviously, nothing is ever easy. A proof of concept showed how little I knew about the build tool that I have been using for years (Gradle), the SpringMVC library, and Java annotation processing. Nevertheless, in spite of the steep learning curve, I managed to achieve the following:

- parse the smallest meaningful subset of the relevant annotations (e.g., a single basic resource)
- design and build a data model of the dynamic stubs
- generate the source code of the dynamic stubs (in Java)
- make Gradle build an artifact containing only the generated code and publish it (I also tested the published artifact by importing it into another project)

In the end, here is what was achieved:

- The annotation processor iterates through all relevant annotations and generates the dynamic stub source code
- Gradle compiles and packages the generated source into a JAR file and publishes it to an artifact repository (e.g., Nexus)
- The client imports the JAR as a dependency and downloads it from the repository
- The test instantiates the generated stubs and uses them to set up WireMock stubs and verify the calls made to WireMock

With a mono-repo, the situation is slightly simpler since there is no need to package the generated code and upload it to a repository. The compiled stubs become available to the dependent subprojects immediately. These end-to-end scenarios proved that it could work.

The Final Product

I developed a library with a custom annotation @GenerateWireMockStub that can be applied to a class annotated with @RestController. The annotation processor included in the library generates the Java code for dynamic stub creation in tests. The stubs can then be published to a repository or, in the case of a mono-repo, used directly by the project(s). For example, adding the following dependencies (Kotlin project):

Groovy
kapt 'io.github.lsd-consulting:spring-wiremock-stub-generator:2.0.3'
compileOnly 'io.github.lsd-consulting:spring-wiremock-stub-generator:2.0.3'
compileOnly 'com.github.tomakehurst:wiremock:2.27.2'

and annotating a controller that has a basic GET mapping with @GenerateWireMockStub:

Kotlin
@GenerateWireMockStub
@RestController
class MyController {
    @GetMapping("/resource")
    fun getData() = MyServerResponse(id = "someId", message = "message")
}

will result in the generation of a stub class with the following methods:

Java
public class MyControllerStub {
    public void getData(MyServerResponse response) { ... }
    public void getData(int httpStatus, String errorResponse) { ... }
    public void verifyGetData() { ... }
    public void verifyGetData(final int times) { ... }
    public void verifyGetDataNoInteraction() { ... }
}

The first two methods set up stubs in WireMock, whereas the other methods verify the calls depending on the expected number of calls — either once, the given number of times, or no interaction at all. That stub class can be used in a test like this:

Kotlin
// Create the stub for the producer's controller
val myControllerStub = MyControllerStub()
// Stub the controller method with the response
myControllerStub.getData(MyServerResponse("id", "message"))

callConsumerThatTriggersCallToProducer()

myControllerStub.verifyGetData()

The framework now supports most HTTP methods, with a variety of ways to verify interactions.
@GenerateWireMockStub makes maintaining these dynamic stubs effortless. It increases accuracy and consistency, making maintenance easier and enabling your build to catch breaking API changes before your code hits production. More details can be found on the project's website. Full examples of how the library can be used in a multi-project setup and in a mono-repo:

- spring-wiremock-stub-generator-example
- spring-wiremock-stub-generator-monorepo-example

Limitations

The library's limitations mostly stem from WireMock's own limitations. More specifically, multi-value and optional request parameters are not fully supported by WireMock, and the library uses some workarounds to handle them. For more details, please check out the project's README.

Note

The client must have access to the API classes used by the controller. Usually, this is achieved by exposing them in separate API modules that are published for consumers to use.

Acknowledgments

I would like to express my sincere gratitude to the reviewers who provided invaluable feedback and suggestions to improve the quality of this article and the library. A special thank you to Antony Marcano for his feedback, repeated reviews, and direct contributions to this article, which were crucial in ensuring that the article provides clear and concise documentation for the spring-wiremock-stub-generator library. I would like to extend my heartfelt thanks to Nick McDowall and Nauman Leghari for their time, effort, and expertise in reviewing the article and providing insightful feedback to improve its documentation and readability. Finally, I would also like to thank Ollie Kennedy for his careful review of the initial pull request and his suggestions for improving the codebase.
Vulnerability management is a proactive approach to identifying, managing, and mitigating network vulnerabilities to improve the security of an enterprise's applications, software, and devices. It includes identifying vulnerabilities in IT assets, assessing risks, and taking appropriate action on systems and networks. Organizations worldwide invest in vulnerability management to protect systems and networks against security breaches and data theft. Often combined with risk management and other security measures, vulnerability management has become an integral part of today's computer and network security practices to prevent the exploitation of IT vulnerabilities, such as code and design flaws, that could compromise the security of the entire enterprise network.

The Importance of Vulnerability Management

Despite the effectiveness of vulnerability management against many cybersecurity risks, organizations often overlook the implementation of robust vulnerability management processes, as evidenced by the sheer number of data breaches, and are, therefore, unknowingly left exposed through missing patches and misconfigurations. Vulnerability management is designed to investigate an organization's security posture and detect such vulnerabilities before a malicious hacker discovers them. This is why implementing a vulnerability management program is essential for companies of all sizes. Powerful vulnerability management leverages threat intelligence and IT team knowledge to rank risks and respond quickly to security vulnerabilities.

Four Stages of Vulnerability Management

Several steps must be considered when creating a vulnerability management program. Incorporating these steps into the management process helps prevent vulnerabilities from being overlooked and ensures that any vulnerabilities found are correctly addressed.

Identify Vulnerabilities

Vulnerability scanners are at the core of a standard vulnerability management solution. The scan consists of four stages:

1. Scan systems that are reachable on the network by sending Ping or TCP/UDP packets.
2. Identify open ports and services running on the scanned systems.
3. Log in to the systems remotely and gather detailed system information.
4. Associate the gathered system information with known vulnerabilities.

Vulnerability scanners can identify various systems running on a network, including laptops, desktops, virtual and physical servers, databases, firewalls, switches, and printers. The recognized systems are investigated for various attributes, such as operating system, open ports, installed software, user accounts, file system structure, and system configuration. This information is then used to associate known vulnerabilities with the scanned systems. To make this association, the vulnerability scanner uses a vulnerability database that contains a list of commonly known vulnerabilities.

Evaluation

Once the scans have discovered all the potential known cybersecurity vulnerabilities, it's time to evaluate and prioritize them. A scan may have found thousands of possible weaknesses, some of which pose a greater risk than others. To sort them out, vulnerability assessments must be conducted to evaluate or score all vulnerabilities in terms of the threat to the company if they are exploited. Many systems can be used for prioritization, but the Common Vulnerability Scoring System (CVSS) is one of the most referenced. It's essential to repeat this prioritization process every time you run a scan and discover new vulnerabilities to find those that are most critical to IT security.
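To make this prioritization step concrete, here is a minimal sketch of ordering scan findings by their CVSS base score so that the most critical ones are handled first (the Finding record, the identifiers, and the scores are placeholders invented for this illustration):

Java
import java.util.Comparator;
import java.util.List;

public class VulnerabilityPrioritizer {

    // Placeholder representation of a scan finding and its CVSS base score (0.0-10.0).
    record Finding(String id, double cvssBaseScore) {}

    public static void main(String[] args) {
        List<Finding> findings = List.of(
                new Finding("finding-a", 5.4),
                new Finding("finding-b", 9.8),
                new Finding("finding-c", 3.1));

        // Sort with the highest CVSS base score first: remediate these first.
        findings.stream()
                .sorted(Comparator.comparingDouble(Finding::cvssBaseScore).reversed())
                .forEach(f -> System.out.println(f.id() + " -> " + f.cvssBaseScore()));
    }
}

In practice, the scores would come from the scanner's vulnerability database rather than being hardcoded, but the principle of ranking by severity is the same.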
Vulnerability Remediation

If a vulnerability is verified and identified as a risk, the next step is to prioritize, together with the primary stakeholders of the business and network, how it should be handled. The vulnerability can be addressed in the following ways:

- Rectification: Completely fix the vulnerability or apply a patch to prevent it from being exploited. This is the ideal treatment the organization is aiming for.
- Mitigation: Mitigate the vulnerability to reduce the possibility and impact of it being exploited. This may be necessary if appropriate fixes or patches are not available for the identified vulnerability. This option is ideally used to buy time for the organization to eventually fix the vulnerability.

Vulnerability management solutions deliver recommended remediation techniques for vulnerabilities. However, there may occasionally be better ways to address a vulnerability than the recommended method.

Reporting and Follow-Up

Once you have addressed the identified vulnerabilities, it's time to take advantage of the reporting tools in your vulnerability management solution. Reporting gives the security team an overview of the effort required by each remediation technique. In addition, it allows them to determine the most efficient way to address vulnerability issues in the future. Actions to take at this point include:

- Setting up patching tools.
- Scheduling automatic updates.
- Coordinating with your cyber-IT security staff.
- Setting up a ticketing system in case of a security issue.

These reports can also be used to demonstrate compliance with any regulatory agency in the industry by showing the level of risk of a breach and the actions taken to reduce that risk. Cybercriminals are constantly evolving, so vulnerability management assessments must be conducted regularly to reduce the number of vulnerabilities and keep network security up to date.

Ways to Integrate Security

1. Application Security Scanning to Secure the CI/CD Pipeline

Continuous Integration and Continuous Delivery (CI/CD) pipelines are the foundation of every modern software organization that builds software. Combined with DevOps practices, the CI/CD pipeline allows your company to deliver software faster and more often. However, great power carries great responsibility. While everyone concentrates on writing secure applications, many people overlook the security of the CI/CD pipeline itself. However, there are legitimate reasons to pay close attention to the configuration of CI/CD.

2. Importance of CI/CD Security

CI/CD pipelines usually require a lot of permissions to do their job. They also need to deal with application and infrastructure secrets. Anyone with unauthorized access to the CI/CD pipeline has almost unlimited power to compromise the entire infrastructure or deploy malicious code. Therefore, securing the CI/CD pipeline should be a high-priority task. Unfortunately, statistics show that there has been a significant increase in attacks on the software supply chain in recent years.

3. Static Application Security Testing (SAST)

Static Application Security Testing (SAST) complements SCA by assessing potential vulnerabilities in your own source code. In other words, SCA relies on a database of known vulnerabilities to identify vulnerabilities in third-party code, while SAST analyzes custom code to detect potential security issues such as improper input validation.
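To make the kind of issue SAST tools flag more concrete, here is a minimal Java illustration, invented for this article, of improper input validation (SQL injection through string concatenation) alongside the parameterized query a SAST tool would typically recommend:

Java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Flawed: concatenating untrusted input into SQL, a typical SAST finding (SQL injection).
    ResultSet findUserUnsafe(Connection connection, String userName) throws SQLException {
        Statement statement = connection.createStatement();
        return statement.executeQuery("SELECT * FROM users WHERE name = '" + userName + "'");
    }

    // Safer: a parameterized query treats the input as data, never as SQL.
    ResultSet findUser(Connection connection, String userName) throws SQLException {
        PreparedStatement statement = connection.prepareStatement("SELECT * FROM users WHERE name = ?");
        statement.setString(1, userName);
        return statement.executeQuery();
    }
}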
In this way, by running SAST at the beginning of the CI/CD pipeline in addition to SCA, you gain a second layer of protection against the risks inherent in your source code.

4. Vulnerability Scanning

Vulnerability scanning is an automated process that proactively identifies network, application, and security vulnerabilities. Vulnerability scans are typically performed by an organization's IT department or a third-party security service provider. Unfortunately, the same kind of scan is also used by attackers looking for entry points into the network. Scanning involves detecting and classifying system weaknesses in networks, communications equipment, and computers. Vulnerability scanning identifies security holes and predicts how effective countermeasures will be in the event of a threat or attack. A vulnerability scanning service operates from the assessor's standpoint, probing the designated target attack surface. The vulnerability scanner relies on a database that describes what to look for, referencing known defects, coding bugs, anomalies in packet construction, default settings, and routes to sensitive data that an attacker may exploit.

5. Software Composition Analysis (SCA)

Software composition analysis (SCA) is the process of automatically visualizing the use of open-source software (OSS) for risk management, security, and license compliance purposes. Open source is used by software across all industries, and the need to track components to protect companies from problems and open-source vulnerabilities is growing exponentially. However, since most software production involves OSS, manual tracking is complex and requires automation to scan source code, binaries, and dependencies. SCA tools are becoming an integral part of application security, enabling organizations to use code scanning to discover evidence of OSS, to create an environment that reduces the cost of fixing vulnerabilities and licensing issues early, and to use automated scanning to find and fix problems with less effort. In addition, SCA continuously monitors security and vulnerability issues to manage workloads better and increase productivity, enabling users to create actionable alerts for new vulnerabilities in current and shipping products.

6. Dynamic Application Security Testing (DAST)

A DAST solution identifies potential input fields in your application and sends them various abnormal and malicious inputs. This can include attempts to exploit common vulnerabilities, such as SQL injection commands, cross-site scripting (XSS) payloads, long input strings, and abnormal input that could reveal input validation and memory management issues within the application. The DAST tool determines whether an application contains a specific vulnerability based on the application's response to these various inputs. For example, if a SQL injection attack gains unauthorized access to data, or an application crashes due to invalid or malformed input, this indicates an exploitable vulnerability.

7. Container Security

The process of securing containers is continuous. It must be integrated into the development process, automated to reduce manual touchpoints, and extended into the maintenance and operation of the underlying infrastructure. This means protecting the build pipeline's container images as well as the runtime host, platform, and application layers.
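In practice, this often translates into a pipeline step that scans every image before it is pushed to a registry, for example with an open-source scanner such as Trivy (the tool choice and image name here are purely illustrative):

Shell
# Fail the build if the image contains high or critical vulnerabilities (illustrative image name)
trivy image --severity HIGH,CRITICAL --exit-code 1 myorg/myapp:latest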
Implementing security as part of the continuous delivery lifecycle reduces risk and your exposure to the growing number of attacks on your business. Containers have security benefits, such as strong application isolation, but they also broaden your organization's threat surface. A significant increase in the deployment of containers in production environments makes them attractive targets for malicious actors and raises the stakes for the systems running them. In addition, a single vulnerable or compromised container can become an entry point into an organization's entire environment.

8. Infrastructure Security

Vulnerability scanning is a complex topic, and organizations evaluating vulnerability scanning solutions are often unclear about what such solutions cover. Infrastructure vulnerability scanning is the process of running a series of automated checks against a target or range of targets in the infrastructure to detect potentially exploitable security vulnerabilities. A target is specified as a fully qualified domain name (FQDN) that resolves to one or more IP addresses, as IP address ranges, or as individual IP addresses to be scanned. An infrastructure vulnerability scan is performed across a network, such as the Internet. The scan originates from a dedicated scan hub, which runs a scan engine that connects to the scanned target to evaluate its vulnerabilities.

Conclusion

Vulnerability management is a proactive approach to identifying, managing, and mitigating network vulnerabilities to improve the security of an enterprise's applications, software, and devices, including those hosted in the cloud. It includes identifying vulnerabilities in IT assets, assessing risks, and taking appropriate action on systems and networks. Implementing a vulnerability management program is essential for companies of all sizes, as it leverages threat intelligence and IT team knowledge to rank risks and respond quickly to security vulnerabilities. The vulnerability management program consists of four stages: identifying vulnerabilities, evaluating them, remediating them, and reporting and follow-up. Integrating security measures, such as securing CI/CD pipelines, using vulnerability scanning tools, and implementing SCA, SAST, DAST, etc., can complement the vulnerability management program to provide a robust security approach.
REST APIs are the heart of any modern software application. Securing access to REST APIs is critical for preventing unauthorized actions and protecting sensitive data. Additionally, companies must comply with regulations and standards to operate successfully. This article describes how we can protect REST APIs using role-based access control (RBAC) in the Quarkus Java framework. Quarkus is an open-source, full-stack Java framework designed for building cloud-native, containerized applications. The Quarkus Java framework comes with native support for RBAC, which will be the initial focus of this article. Additionally, the article will cover building a custom solution to secure REST endpoints.

Concepts

Authentication: Authentication is the process of validating a user's identity and typically involves utilizing a username and password. (However, other approaches, such as biometric and two-factor authentication, can also be employed.) Authentication is a critical element of security and is vital for protecting systems and resources against unauthorized access.

Authorization: Authorization is the process of verifying whether a user has the necessary privileges to access a particular resource or execute an action. Usually, authorization follows authentication. Several methods, such as role-based access control and attribute-based access control, can be employed to implement authorization.

Role-Based Access Control: Role-based access control (RBAC) is a security model that grants users access to resources based on the roles assigned to them. In RBAC, users are assigned to specific roles, and each role is given the permissions necessary to perform its job functions.

Gateway: In a conventional software setup, the gateway is responsible for authenticating the client and validating whether the client has the necessary permissions to access the resource. Gateway authentication plays a critical role in securing microservices-based architectures, as it allows organizations to implement centralized authentication.

Token-based authentication: This is a technique where the gateway provides an access token to the client following successful authentication. The client then presents the access token to the gateway with each subsequent request.

JWT: JSON Web Token (JWT) is a widely accepted standard for securely transmitting information between parties in the form of a JSON object. On successful login, the gateway generates a JWT and sends it back to the client. The client then includes the JWT in the header of each subsequent request to the server. The JWT can include the permissions required to allow or deny access to APIs based on the user's authorization level.

Example Application

Consider a simple application that includes REST APIs for creating and retrieving tasks. The application has two user roles:

- Admin — allowed to read and write.
- Member — allowed to read only.

Both Admin and Member can access the GET API; however, only Admins are authorized to use the POST API.

Java
@Path("/task")
public class TaskResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String getTask() {
        return "Task Data";
    }

    @POST
    @Produces(MediaType.TEXT_PLAIN)
    public String createTask() {
        return "Valid Task received";
    }
}

Configure Quarkus Security Modules

In order to process and verify incoming JWTs in Quarkus, the following JWT security modules need to be included.
For a Maven-based project, add the following to pom.xml:

XML
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-jwt</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-jwt-build</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-security-jwt</artifactId>
    <scope>test</scope>
</dependency>

For a Gradle-based project, add the following:

Groovy
implementation("io.quarkus:quarkus-smallrye-jwt")
implementation("io.quarkus:quarkus-smallrye-jwt-build")
testImplementation("io.quarkus:quarkus-test-security-jwt")

Implementing RBAC

Quarkus provides built-in RBAC support to protect REST APIs based on user roles. This can be done in a few steps.

Step 1

The first step in utilizing Quarkus' built-in RBAC support is to annotate the APIs with the roles that are allowed to access them. The annotation to be added is @RolesAllowed, a JSR 250 security annotation that indicates that the given endpoint is accessible only if the user belongs to one of the specified roles.

Java
@GET
@RolesAllowed({"Admin", "Member"})
@Produces(MediaType.TEXT_PLAIN)
public String getTask() {
    return "Task Data";
}

@POST
@RolesAllowed({"Admin"})
@Produces(MediaType.TEXT_PLAIN)
public String createTask() {
    return "Valid Task received";
}

Step 2

The next step is to configure the issuer URL and the public key. This enables Quarkus to verify the JWT and ensure it has not been tampered with. This is done by adding the following properties to the application.properties file located in the /resources folder:

Properties files
mp.jwt.verify.publickey.location=publicKey.pem
mp.jwt.verify.issuer=https://myapp.com/issuer
quarkus.native.resources.includes=publicKey.pem

- mp.jwt.verify.publickey.location - This configuration specifies the location of the public key to Quarkus, which must be located on the classpath. The default location Quarkus looks in is the /resources folder.
- mp.jwt.verify.issuer - This property represents the issuer of the token, who created it and signed it with their private key.
- quarkus.native.resources.includes - This property informs Quarkus to include the public key as a resource in the native executable.

Step 3

The last step is to add your public key to the application. Create a file named publicKey.pem and save the public key in it. Copy the file to the /resources folder located in the /src directory.

Testing

Quarkus offers robust support for unit testing to ensure code quality, particularly when it comes to RBAC. Using the @TestSecurity annotation, user roles can be defined, and a JWT can be generated to call REST APIs from within unit tests.

Java
@Test
@TestSecurity(user = "testUser", roles = "Admin")
public void testTaskPostEndpoint() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(200)
        .body(is("Valid Task received"));
}

Custom RBAC Implementation

As the application grows and incorporates additional features, the built-in RBAC support may become insufficient. A well-written application allows users to create custom roles with specific permissions associated with them. It is important to decouple roles and permissions and avoid hardcoding them in the code. A role can be considered a collection of permissions, and each API can be labeled with the permissions required to access it. To decouple roles and permissions and provide flexibility to users, let's expand our example application to include two permissions for tasks.
- task:read — permission that allows users to read tasks
- task:write — permission that allows users to create or modify tasks

We can then associate these permissions with the two roles, "Admin" and "Member":

- Admin: assigned both read and write — ["task:read", "task:write"]
- Member: has read only — ["task:read"]

Step 1

To associate each API with a permission, we need a custom annotation that simplifies its usage and application. Let's create a new annotation called @Permissions, which accepts an array of the permissions that the user must have in order to call the API.

Java
@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface Permissions {
    String[] value();
}

Step 2

The @Permissions annotation can be added to the task APIs to specify the permissions required for accessing them. The GET task API can be accessed if the user has either task:read or task:write permission, while the POST task API can only be accessed if the user has task:write permission.

Java
@GET
@Permissions({"task:read", "task:write"})
@Produces(MediaType.TEXT_PLAIN)
public String getTask() {
    return "Task Data";
}

@POST
@Permissions("task:write")
@Produces(MediaType.TEXT_PLAIN)
public String createTask() {
    return "Valid Task received";
}

Step 3

The last step involves adding a filter that intercepts API requests and verifies that the included JWT has the necessary permissions to call the REST API. The JWT must include the userId as part of its claims, which is the case in a typical application, since some form of user identification is included in the JWT. The Reflection API is used to determine the invoked method and its associated annotation. In the provided code, the user-to-role mapping and the role-to-permissions mapping are stored in HashMaps. In a real-world scenario, this information would be retrieved from a database and cached to allow for faster access.
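Before looking at the filter itself, it may help to picture the decoded payload of a JWT used in this setup. The values below are illustrative only; the issuer matches this article's configuration, and "groups" is the standard MicroProfile JWT claim for roles:

JSON
{
  "iss": "https://myapp.com/issuer",
  "upn": "testUser",
  "groups": ["Admin"],
  "userId": "1234",
  "exp": 1735689600,
  "iat": 1735686000
}

Note that the filter below ignores the roles carried in the token and instead reads the userId claim, resolving the role and its permissions on the server side.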
Java
@Provider
public class PermissionFilter implements ContainerRequestFilter {

    @Context
    ResourceInfo resourceInfo;

    @Inject
    JsonWebToken jwt;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        Method method = resourceInfo.getResourceMethod();
        Permissions methodPermAnnotation = method.getAnnotation(Permissions.class);

        if (methodPermAnnotation != null && checkAccess(methodPermAnnotation)) {
            System.out.println("Verified permissions");
        } else {
            requestContext.abortWith(Response.status(Response.Status.FORBIDDEN).build());
        }
    }

    /**
     * Verify that the JWT permissions match the API permissions
     */
    private boolean checkAccess(Permissions perm) {
        boolean verified = false;
        if (perm == null) {
            // If there is no permission annotation, verification fails
            verified = false;
        } else if (jwt.getClaim("userId") == null) {
            // Anonymous users are not supported
            verified = false;
        } else {
            String userId = jwt.getClaim("userId");
            String role = getRolesForUser(userId);
            String[] userPermissions = getPermissionForRole(role);
            // Guard against unknown roles to avoid a NullPointerException
            if (userPermissions != null && Arrays.stream(userPermissions)
                    .anyMatch(userPerm -> Arrays.asList(perm.value()).contains(userPerm))) {
                verified = true;
            }
        }
        return verified;
    }

    // role -> permissions mapping
    private String[] getPermissionForRole(String role) {
        Map<String, String[]> rolePermissionMap = new HashMap<>();
        rolePermissionMap.put("Admin", new String[] {"task:write", "task:read"});
        rolePermissionMap.put("Member", new String[] {"task:read"});
        return rolePermissionMap.get(role);
    }

    // userId -> role mapping
    private String getRolesForUser(String userId) {
        Map<String, String> userMap = new HashMap<>();
        userMap.put("1234", "Admin");
        userMap.put("6789", "Member");
        return userMap.get(userId);
    }
}

Testing

Similar to testing the built-in RBAC, the @TestSecurity annotation can be utilized to create a JWT for testing purposes. Additionally, the Quarkus test library offers the @JwtSecurity annotation, which enables the addition of extra claims to the JWT, including the userId claim.

Java
@Test
@TestSecurity(user = "testUser", roles = "Admin")
@JwtSecurity(claims = {
    @Claim(key = "userId", value = "1234")
})
public void testTaskPostEndpoint() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(200)
        .body(is("Valid Task received"));
}

@Test
@TestSecurity(user = "testUser", roles = "Admin")
@JwtSecurity(claims = {
    @Claim(key = "userId", value = "6789")
})
public void testTaskPostMember() {
    given().log().all()
        .body("{id: task1}")
        .when().post("/task")
        .then()
        .statusCode(403);
}

Conclusion

As cyber-attacks continue to rise, protecting REST APIs is becoming increasingly crucial. A potential security breach can result in massive financial losses and reputational damage for a company. While Quarkus is a versatile Java framework that provides built-in RBAC support for securing REST APIs, its native support may be inadequate in certain scenarios, particularly for fine-grained access control. This article covered both the implementation of the built-in RBAC support in Quarkus and the development and testing of a custom role-based access control solution.
If you're not a REST expert, you probably use the same HTTP codes over and over in your responses, mostly 200, 404, and 500. If using authentication, you might perhaps add 401 and 403; if using redirects, 301 and 302; that might be all. But the range of possible status codes is much broader than that and can improve semantics a lot. While many discussions about REST focus on entities and methods, using the correct response status codes can make your API stand out.

201: Created

Many applications allow creating entities: accounts, orders, what have you. In general, HTTP status code 200 is used, and that's good enough. However, the 201 code is more specific and fits better:

The HTTP 201 Created success status response code indicates that the request has succeeded and has led to the creation of a resource. The new resource is effectively created before this response is sent back, and the new resource is returned in the body of the message, its location being either the URL of the request, or the content of the Location header. - MDN web docs

205: Reset Content

Form-based authentication can either succeed or fail. When failing, the usual behavior is to display the form again with all fields cleared. Guess what? The 205 status code is dedicated to that:

The HTTP 205 Reset Content response status tells the client to reset the document view, so for example to clear the content of a form, reset a canvas state, or to refresh the UI. - MDN web docs

428: Precondition Required

When using optimistic locking, validation might fail during an update because the data has already been updated by someone else. By default, frameworks (such as Hibernate) throw an exception in that case. Developers, in turn, catch it and display a nice information box asking the user to reload the page and re-enter the data. Let's check the 428 status code:

The origin server requires the request to be conditional. Intended to prevent the 'lost update' problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict. - MDN web docs

The code describes exactly the conflict case in optimistic locking! Note that RFC 6585 mentions the word "conditional" and shows an example using the If-Match header. However, it doesn't state exactly how to achieve that condition.

409: Conflict

Interestingly enough, the 409 code states:

The HTTP 409 Conflict response status code indicates a request conflict with current state of the server. - MDN web docs

It can also apply to the previous case but is more general. For example, a typical use case would be updating a resource that has been deleted.

410: Gone

Most of the time, when you GET a resource that is not found, the server returns a 404 code. But what if the resource existed before and doesn't anymore? The semantics of the returned HTTP code could tell that, and that is precisely the reason for 410:

The HTTP 410 Gone client error response code indicates that access to the target resource is no longer available at the origin server and that this condition is likely to be permanent. If you don't know whether this condition is temporary or permanent, a 404 status code should be used instead. - MDN web docs

300: Multiple Choices

WARNING: This one seems a bit far-fetched, but the IETF specification fits the case. HATEOAS-driven applications offer a root page, which is an entry point allowing further navigation.
For example, this is the response when accessing the Spring Boot actuator:

JSON
{
  "_links": {
    "self": {
      "href": "http://localhost:8080/manage",
      "templated": false
    },
    "beans": {
      "href": "http://localhost:8080/manage/beans",
      "templated": false
    },
    "health": {
      "href": "http://localhost:8080/manage/health",
      "templated": false
    },
    "metrics": {
      "href": "http://localhost:8080/manage/metrics",
      "templated": false
    }
  }
}

No regular resource is present at this location. The server provides a set of resources, each with a dedicated identifier. It looks like a match for the 300 status code:

[...] the server SHOULD generate a payload in the 300 response containing a list of representation metadata and URI reference(s) from which the user or user agent can choose the one most preferred. - IETF HTTP 1.1: Semantics and Content

Conclusion

Generally, specific HTTP statuses only make sense when a REST backend is accessed by a JavaScript frontend. For example, resetting the form (205) doesn't make sense if the server generates the page. The issue with these codes is their semantics: they are subject to a lot of interpretation. Why would you choose 409 over 428? In the end, it may be a matter of my interpretation over yours. If you offer a public REST API, you'll have a combination of those codes (and others) and headers. You'll need full-fledged, detailed documentation in all cases to refine the general semantics in your context. That shouldn't stop you from using them, as they offer a rich set from which you can choose.

To Go Further:

- HTTP response status codes
- List of HTTP status codes
- Series of posts on HTTP status codes
- The HTTP Status Codes Problem
Motivation

In the beginning, there is a service that can serve requests quickly (for example, because it handles a small amount of data) and serves a relatively small number of clients. As time passes, it is a common phenomenon that both the number of clients and the response time start to grow. In this case, one possible change is to no longer serve the requests synchronously but to let the requests trigger asynchronous jobs. With this change, the client does not receive the result data immediately but only an identifier for the job, and the client is free to poll the status of the job at any time. When this change is made, it is worth considering the impact on the clients: changing from one synchronous call to polling is a typical example of both service and client code changing (and gaining extra complexity). Therefore, if the number of clients is high (or not even exactly known, in the case of a public service), it might be adequate to look for a way that impacts only the service. One possible way is introducing long-polling combined with HTTP redirects. A complete example of all three stages is provided on GitHub.

State 0: Synchronous Service

As the initial state, consider the following synchronous service controller method:

Java
@GetMapping
public BusinessObject getTheAnswer() {
    final BusinessObject result = businessService.doInternalCalculation();
    return result;
}

Let's assume that the execution time of businessService.doInternalCalculation() grows as time passes. At this stage, the clients are simply firing a REST call to the endpoint of the service. Invoking the example service (OriginalController), the network traffic pattern is simple: only one REST call is being made, which returns HTTP 200, and the result can be found in the response body.

State 1: Introducing (Long) Polling

As stated above, the main change is to let the service start (or even just queue up) a job upon a received REST call. After this change, the controller's above-presented method is changed, and the controller gains a new method (see LongPollingController for actual implementation details):

Java
@GetMapping
public JobStatus getTheAnswer() {
    final Future<BusinessObject> futureResult = ... submit it to a task executor ...;
    final var id = ... assign an ID to the task ...;
    return JobStatus( ... which should contain the ID at least, in this case ... );
}

@GetMapping("/{id}")
public JobStatus getJobStatus(@PathVariable int id) {
    // Find the job based on the ID
    final Future<BusinessObject> futureResult = ...;
    // Wait for the result (but wait no longer than the defined timeout)
    final BusinessObject result = futureResult.get(... timeout ...);
    // 1. If a timeout happened
    return JobStatus( ... containing the fact that the job is still running ... );
    // 2. If the result is there
    return JobStatus( ... containing the business result ... );
}

Note the following:

- The client has to adapt to use not only the first but also the new REST endpoint
- The client has to be aware of the structure of JobStatus
- Based on the JobStatus attributes, the client has to poll for the result

This is also reflected in the network traffic pattern: the client had to fire multiple requests against multiple endpoints. All the requests returned HTTP 200, regardless of whether the result was already available.

State 2: Introducing Redirects

In order to avoid burdening the client with the points above, the service can utilize HTTP redirect responses.
To do so, the service changes both methods as follows (see the concrete changes in RedirectLongPollingController):

Java
@GetMapping
public RedirectView getTheAnswer() {
    final Future<BusinessObject> futureResult = ... submit it to a task executor ...;
    final var id = ... assign an ID to the task ...;
    return new RedirectView(... pointing to the other endpoint defined in this controller ...);
}

@GetMapping("/{id}")
public Object getJobStatus(@PathVariable int id) {
    // Find the job based on the ID
    final Future<BusinessObject> futureResult = ...;
    // Wait for the result (but wait no longer than the defined timeout)
    final BusinessObject result = futureResult.get(... timeout ...);
    // 1. If a timeout happened
    return RedirectView(... pointing to this endpoint ...);
    // 2. If the result is there
    return result;
}

Note that in this variant, the client — given that the REST client in use follows redirects by default — has nothing to change in its code. The client does not have to introduce new DTOs (such as JobStatus in State 1), nor explicitly fire new REST calls. The network pattern shows a different flow than in State 1: in this case, the client fires one request, which leads to a chain of redirects that are followed automatically. The HTTP 200 response code is used only when the actual business result is available; as long as it is not available, HTTP 302 is returned.

Further Notes

About Other Possibilities

It is important to mention that this solution should not be automatically preferred over performance optimization (as it is common that, in the beginning, new services are not optimized for response time) and scaling. Still, in given circumstances (especially when the use case permits longer response times), it can be valid to prefer the change to long-polling over performance optimization (which might mean a deep reworking of complex logic) and scaling (as that can lead to a relevant increase in operating costs).

About Limitations

The idea behind the shown solution is based on two assumptions:

- That every network element supports redirections (which might not be true in the case of firewalls)
- That the client follows redirections (which is a common default setting in most REST clients, but the service cannot actually probe or enforce it)

When these assumptions are not true for an actual setup, the presented approach cannot be followed.

About Architecture (Direction of Dependency)

Although it is true that, in this way, the service gains extra complexity in order to help the clients adapt to the new way of communication (which means, in the best case, that the client has nothing to change at all), the architecture is not changing in terms of the direction of dependencies. The service is still not going to depend on the clients; it just enables them to hold less complexity.

About HTTP Verbs

In the example above, we could point out that in States 1 and 2, a GET REST request leads to a new job being submitted and administered by the server, which no longer fits HTTP GET perfectly. The following should still be taken into consideration:

- It is true that the technical state of the service has changed.
- Still, the business state has not (at least, given that the actual business process does not change the business state)
- The usage can and should be idempotent: the server should recognize when two requests with the same business meaning have arrived and should not queue a second task
  - This also gives a performance benefit to all the clients requesting something that is already on its way to being calculated
  - It also decreases the load on the server, as the same process will not be triggered twice

Conclusion

In conclusion, the demonstrated approach can free the clients from applying any change, and it can potentially lead to better average response times and lower server utilization (by avoiding running the same flow multiple times).
In this blog post, you will be using the aws-lambda-go library along with the AWS Go SDK v2 for an application that will process records from an Amazon SNS topic and store them in a DynamoDB table. You will also learn how to use Go bindings for AWS CDK to implement "Infrastructure-as-code" for the entire solution and deploy it with the AWS CDK CLI. The code is available on GitHub.

Introduction

Amazon Simple Notification Service (SNS) is a highly available, durable, and scalable messaging service that enables the exchange of messages between applications or microservices. It uses a publish/subscribe model where publishers send messages to topics, and subscribers receive messages from the topics they are interested in. Clients can subscribe to an SNS topic and receive published messages using a supported endpoint type, such as Amazon Kinesis Data Firehose, Amazon SQS, AWS Lambda, HTTP, email, mobile push notifications, and mobile text messages (SMS). AWS Lambda and Amazon SNS integration enables developers to build event-driven architectures that can scale automatically and respond to changes in real time. When a new message is published to an SNS topic, it can trigger a Lambda function (Amazon SNS invokes your function asynchronously with an event that contains the message and metadata), which can perform a set of actions, such as processing the message, storing data in a database, sending emails or SMS messages, or invoking other AWS services.

Prerequisites

Before you proceed, make sure you have the Go programming language (v1.18 or higher) and AWS CDK installed. Clone the project and change into the right directory:

Shell
git clone https://github.com/abhirockzz/sns-lambda-events-golang
cd sns-lambda-events-golang

Use CDK To Deploy the Solution

To start the deployment, simply invoke cdk deploy and wait for a bit. You will see a list of resources that will be created and will need to provide your confirmation to proceed.

Shell
cd cdk
cdk deploy

# output
Bundling asset SNSLambdaGolangStack/sns-function/Code/Stage...
✨ Synthesis time: 5.94s
This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening).
Please confirm you intend to make the following modifications:
//.... omitted
Do you wish to deploy these changes (y/n)? y

This will start creating the AWS resources required for our application. If you want to see the AWS CloudFormation template that will be used behind the scenes, run cdk synth and check the cdk.out folder. You can keep track of the progress in the terminal or navigate to the AWS console: CloudFormation > Stacks > SNSLambdaGolangStack. Once all the resources are created, you can try out the application. You should have:

- A Lambda function
- An SNS topic
- A DynamoDB table

Along with a few other components (like IAM roles, etc.)

Verify the Solution

You can check the table and SNS info in the stack output (in the terminal or the Outputs tab in the AWS CloudFormation console for your stack). Send a few messages to the SNS topic.
For the purposes of this demo, you can use the AWS CLI:

Shell
export SNS_TOPIC_ARN=<enter the SNS topic ARN from the CloudFormation output>

aws sns publish --topic-arn $SNS_TOPIC_ARN --message "user1@foo.com" --message-attributes 'name={DataType=String, StringValue="user1"}, city={DataType=String,StringValue="seattle"}'

aws sns publish --topic-arn $SNS_TOPIC_ARN --message "user2@foo.com" --message-attributes 'name={DataType=String, StringValue="user2"}, city={DataType=String,StringValue="new delhi"}'

aws sns publish --topic-arn $SNS_TOPIC_ARN --message "user3@foo.com" --message-attributes 'name={DataType=String, StringValue="user3"}, city={DataType=String,StringValue="new york"}'

You can also use the AWS console to send SNS messages. Check the DynamoDB table to confirm that the messages have been stored. You can use the AWS console or the AWS CLI:

Shell
aws dynamodb scan --table-name <enter the table name from cloudformation output>

Don't Forget To Clean Up

Once you're done, to delete all the services, simply use:

Shell
cdk destroy

# output prompt (choose 'y' to continue)
Are you sure you want to delete: SNSLambdaGolangStack (y/n)?

You were able to set up and try the complete solution. Before we wrap up, let's quickly walk through some of the important parts of the code to get a better understanding of what's going on behind the scenes.

Code Walk Through

Some of the code (error handling, logging, etc.) has been omitted for brevity since we only want to focus on the important parts.

CDK

You can refer to the CDK code here. We start by creating a DynamoDB table:

Go
table := awsdynamodb.NewTable(stack, jsii.String("dynamodb-table"), &awsdynamodb.TableProps{
    PartitionKey: &awsdynamodb.Attribute{
        Name: jsii.String("email"),
        Type: awsdynamodb.AttributeType_STRING},
})

table.ApplyRemovalPolicy(awscdk.RemovalPolicy_DESTROY)

Then, we handle the Lambda function (CDK will take care of building and deploying it) and make sure we grant it the appropriate permissions to write to the DynamoDB table:

Go
function := awscdklambdagoalpha.NewGoFunction(stack, jsii.String("sns-function"),
    &awscdklambdagoalpha.GoFunctionProps{
        Runtime:     awslambda.Runtime_GO_1_X(),
        Environment: &map[string]*string{"TABLE_NAME": table.TableName()},
        Entry:       jsii.String(functionDir),
    })

table.GrantWriteData(function)

Then, we create the SNS topic and add it as an event source to the Lambda function:

Go
snsTopic := awssns.NewTopic(stack, jsii.String("sns-topic"), nil)

function.AddEventSource(awslambdaeventsources.NewSnsEventSource(snsTopic, nil))

Finally, we export the SNS topic and DynamoDB table name as CloudFormation outputs:

Go
awscdk.NewCfnOutput(stack, jsii.String("sns-topic-name"),
    &awscdk.CfnOutputProps{
        ExportName: jsii.String("sns-topic-name"),
        Value:      snsTopic.TopicName()})

awscdk.NewCfnOutput(stack, jsii.String("dynamodb-table-name"),
    &awscdk.CfnOutputProps{
        ExportName: jsii.String("dynamodb-table-name"),
        Value:      table.TableName()})

Lambda Function

You can refer to the Lambda function code here. The Lambda function handler iterates over each SNS record, and for each of them:

- Stores the message body in the primary key attribute (email) of the DynamoDB table
- Stores the rest of the message attributes as is
Go
func handler(ctx context.Context, snsEvent events.SNSEvent) {
    for _, record := range snsEvent.Records {
        snsRecord := record.SNS

        item := make(map[string]types.AttributeValue)
        item["email"] = &types.AttributeValueMemberS{Value: snsRecord.Message}

        for attrName, attrVal := range snsRecord.MessageAttributes {
            fmt.Println(attrName, "=", attrVal)

            attrValMap := attrVal.(map[string]interface{})
            dataType := attrValMap["Type"]
            val := attrValMap["Value"]

            switch dataType.(string) {
            case "String":
                item[attrName] = &types.AttributeValueMemberS{Value: val.(string)}
            }
        }

        _, err := client.PutItem(context.Background(), &dynamodb.PutItemInput{
            TableName: aws.String(table),
            Item:      item,
        })
        // Error handling trimmed down for this walkthrough
        if err != nil {
            fmt.Println("failed to put item", err)
        }
    }
}

Wrap Up

In this blog, you saw an example of how to use Lambda to process messages sent to SNS and store them in DynamoDB, thanks to the SNS and Lambda integration. The entire infrastructure lifecycle was automated using AWS CDK. All this was done using the Go programming language, which is well supported in DynamoDB, AWS Lambda, and AWS CDK. Happy building!
Kong API Gateway is a cloud-native API gateway that is based on the Nginx reverse proxy. It is a simple, fast, and lightweight solution that enables you to control, set up, and direct application-to-dedicated-server request routing. The Kong API Gateway helps regulate who can access the services and data that are managed behind it. It ensures data security by allowing only authorized users and apps to access the data. The Kong API Gateway is highly performant and offers the following features:

- Request/Response Transformation: Kong can transform incoming and outgoing API requests and responses to conform to specific formats.
- Authentication and Authorization: Kong supports various authentication methods, including API key, OAuth 2.0, and JWT, and can enforce authorization policies for APIs.
- Traffic Management: Kong provides traffic management features, such as rate limiting, request throttling, and IP whitelisting, to maintain the reliability and stability of APIs.
- Monitoring and Logging: Kong offers detailed metrics and logs to help monitor API performance and identify issues.
- Plugins: Kong has a vast and continuously growing ecosystem of plugins that provide additional functionality, such as security, transformations, and integrations with other tools.
- Microservice Architecture: Kong is designed to work with microservice architectures, providing a central point of control for API traffic and security.
- Scalability: Kong is designed to scale horizontally, allowing it to handle large amounts of API traffic.

Advantages of Using Kong as an API Gateway

Kong API Gateway is an efficient solution for managing APIs that offers advanced routing and management capabilities. With flexible request routing, automatic service discovery, advanced load balancing, comprehensive API management, and real-time analytics and monitoring, organizations can effectively route API traffic, discover and register APIs, distribute traffic across backend services, manage APIs throughout their lifecycle, and gain insight into API performance and usage. These capabilities make Kong a highly effective solution for managing APIs at scale and are essential for organizations looking to build and maintain a robust API infrastructure. One of the key benefits of Kong is access to a wide range of plugins that can be easily added to the gateway, such as authentication, rate limiting, and transformations. Another advantage of Kong is its flexible deployment options, which allow it to be deployed on-premises, in the cloud, or as a managed service, depending on the organization's needs. Additionally, Kong API Gateway provides improved security with features like authentication and authorization, encryption, and rate limiting that help protect sensitive data and prevent attacks on APIs.

Use Cases for Kong

Kong API Gateway can be used for the following purposes:

- API gateway: To manage and orchestrate microservices, providing a centralized management layer for APIs and ensuring that requests are routed to the appropriate services.
- API security: To provide robust security features, such as authentication, authorization, and encryption, which can be used to secure APIs and protect sensitive data.
- Rate limiting and traffic control: To control the rate at which API requests are processed, preventing APIs from becoming overwhelmed under heavy load.
- Analytics and monitoring: To provide real-time analytics and monitoring capabilities, including request and response logging, API usage tracking, and error reporting, which can be used to gain insight into API performance and usage.
- Service discovery and load balancing: To support advanced load balancing algorithms, such as round-robin, least connections, and IP hashing, enabling organizations to distribute API traffic across multiple backend services for improved reliability and performance.

Getting Started With Kong

To install the Kong API Gateway and experience it, we will use a local Kubernetes cluster set up with Kind. Alternatively, you can use minikube to set up your local Kubernetes cluster.

Installing Kong API Gateway on a Kind Cluster

Create a Kind Cluster

Create a kind cluster in your local system using the configuration below:

Shell
bash -c "cat <<EOF > /tmp/kind-config.yaml && kind create cluster --config /tmp/kind-config.yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
name: kong-quick-start
networking:
  apiServerAddress: "0.0.0.0"
  apiServerPort: 16443
nodes:
- role: control-plane
  extraPortMappings:
  - listenAddress: "0.0.0.0"
    protocol: TCP
    hostPort: 80
    containerPort: 80
  - listenAddress: "0.0.0.0"
    protocol: TCP
    hostPort: 443
    containerPort: 443
EOF"

Set your kubeconfig context to use the kind-kong-quick-start cluster:

Shell
$ kubectl config use-context kind-kong-quick-start
$ kubectl cluster-info

Create the Kong Gateway secrets that are required for installation:

Shell
$ kubectl create namespace kong

$ kubectl create secret generic kong-config-secret -n kong \
    --from-literal=portal_session_conf='{"storage":"kong","secret":"super_secret_salt_string","cookie_name":"portal_session","cookie_same_site":"off","cookie_secure":false}' \
    --from-literal=admin_gui_session_conf='{"storage":"kong","secret":"super_secret_salt_string","cookie_name":"admin_session","cookie_same_site":"off","cookie_secure":false}' \
    --from-literal=pg_host="enterprise-postgresql.kong.svc.cluster.local" \
    --from-literal=kong_admin_password=kong \
    --from-literal=password=kong

$ kubectl create secret generic kong-enterprise-license --from-literal=license="'{}'" -n kong --dry-run=client -o yaml | kubectl apply -f -

We will also install cert-manager because Kong uses it to provision the required certificates:

Shell
$ helm repo add jetstack https://charts.jetstack.io ; helm repo update

$ helm upgrade --install cert-manager jetstack/cert-manager --set installCRDs=true --namespace cert-manager --create-namespace

$ bash -c "cat <<EOF | kubectl apply -n kong -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: quickstart-kong-selfsigned-issuer-root
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: quickstart-kong-selfsigned-issuer-ca
spec:
  commonName: quickstart-kong-selfsigned-issuer-ca
  duration: 2160h0m0s
  isCA: true
  issuerRef:
    group: cert-manager.io
    kind: Issuer
    name: quickstart-kong-selfsigned-issuer-root
  privateKey:
    algorithm: ECDSA
    size: 256
  renewBefore: 360h0m0s
  secretName: quickstart-kong-selfsigned-issuer-ca
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: quickstart-kong-selfsigned-issuer
spec:
  ca:
    secretName: quickstart-kong-selfsigned-issuer-ca
EOF"

Now we will install Kong using its Helm chart:

Shell
$ helm repo add kong https://charts.konghq.com ; helm repo update

$ helm install quickstart kong/kong --namespace kong --values https://bit.ly/KongGatewayHelmValuesAIO

Once all pods are running, try opening Kong Manager in your web browser using its domain.
For example, https://kong.127-0-0-1.nip.io/:

Shell
$ open "https://$(kubectl get ingress --namespace kong quickstart-kong-manager -o jsonpath='{.spec.tls[0].hosts[0]}')"

You can also try curling the Kong URL in your terminal:

Shell
$ curl --silent --insecure -X GET https://kong.127-0-0-1.nip.io/api -H 'kong-admin-token:kong'

Conclusion

Kong API Gateway is a cloud-native API gateway that uses the Nginx reverse proxy to provide advanced routing and management capabilities. It helps organizations efficiently manage their APIs by offering flexible request routing, automatic service discovery, advanced load balancing, comprehensive API management, and real-time analytics and monitoring. Kong also provides improved security with features such as authentication and authorization, encryption, and rate limiting, as well as access to a large plugin ecosystem. Additionally, Kong can be deployed on-premises, in the cloud, or as a managed service, making it an effective solution for managing APIs at scale.