Integration refers to the process of combining software parts (or subsystems) into one system. An integration framework is a lightweight utility that provides libraries and standardized methods to coordinate messaging among different technologies. As software connects the world in increasingly complex ways, integration makes it all possible by facilitating app-to-app communication. Learn more about this necessity for modern software development by keeping a pulse on industry topics such as integrated development environments, API best practices, service-oriented architecture, enterprise service buses, communication architectures, integration testing, and more.
In the previous post, we discussed the AWS S3 service and its various use cases. We then set up an AWS S3 bucket with the configuration and access our web application needs for data storage. We created a .NET 6 WebAPI project and did some basic wiring/configuration to allow our application to access S3. However, we still have to write the application code that lets a user store notes data (files) in an S3 bucket and read the information back from these files for processing (CRUD operations). In this post, we will look at how to perform these operations from our .NET application code using the AWS SDK for S3. If you haven't already, I suggest you read the previous post, as we will be building on that foundation.

AWSSDK.S3 NuGet Packages
The AWS SDK, available as a NuGet package, simplifies the code needed to interact with the Amazon S3 service. We can add it to our solution using the package manager. Notice that the SDK is very modular; this structure lets us import only the packages we need for our requirements instead of pulling in a lot of unrelated code.

Domain Model
We have a very simple domain model for our application. The main entity is Note, which represents an individual note from a user. We also have another entity, NoteSummary, which, as the name implies, stores summary information about notes. The model classes for these entities, along with the storage service and controller discussed below, are available in the GitHub repository and sketched later in this post.

Storage Service
Next, to store and retrieve domain models to and from the S3 bucket, our application needs a storage service, defined as the INotesStorageService interface; S3NoteStorageService is the implementation of that interface. This implementation uses the IAmazonS3 object, which is the abstraction provided by the AWSSDK.S3 NuGet package we added earlier. We won't go into the details of each method's code; it is self-explanatory, and you can check it in the GitHub repository. I also added a registration line to the Program.cs file to enable dependency injection of this service in our application. Next, let's focus on the controller side to wire all this up.

API Controller
The NotesController uses INotesStorageService, injected via constructor injection, so we can call this service from the various action methods as needed to interact with the S3 service. My plan is to later update this application with user logins, but for the sake of simplicity, I've hard-coded the user here, so all notes will be saved under this user for now. The action for getting a list of notes for a user, like the rest of the controller, is self-explanatory and very typical controller code, and you can check all the methods in detail in the GitHub repository.

Testing the REST API
OK, with all of this in place, I started the application and used the Postman client tool to test it.

Adding a Note
After sending the request payload to add a note, we have two files created in the S3 bucket: one file for the note itself and another for its summary. Try adding more notes using Postman and you will see more files created in your bucket.

List Notes
Next, we can test the List Notes action. As you can see, we get data back from all the notes.
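The model classes, the INotesStorageService interface, and the S3NoteStorageService implementation described above are available in full in the GitHub repository. As a reference, here is a minimal C# sketch of what they can look like; the type names come from the post, but the property names, bucket name, key layout, and JSON-per-object storage format are my assumptions rather than the repository's actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public class Note
{
    public string Id { get; set; } = Guid.NewGuid().ToString();
    public string UserId { get; set; } = string.Empty;
    public string Title { get; set; } = string.Empty;
    public string Content { get; set; } = string.Empty;
}

public class NoteSummary
{
    public string NoteId { get; set; } = string.Empty;
    public string Title { get; set; } = string.Empty;
}

public interface INotesStorageService
{
    Task SaveNoteAsync(Note note);
    Task<Note?> GetNoteAsync(string userId, string noteId);
    Task<IEnumerable<NoteSummary>> ListNotesAsync(string userId);
    Task DeleteNoteAsync(string userId, string noteId);
}

public class S3NoteStorageService : INotesStorageService
{
    private readonly IAmazonS3 _s3;
    private const string BucketName = "my-notes-bucket"; // assumed; the real name would come from configuration

    public S3NoteStorageService(IAmazonS3 s3) => _s3 = s3;

    // One JSON file per note plus one per summary, stored under a per-user prefix.
    public async Task SaveNoteAsync(Note note)
    {
        await PutJsonAsync($"{note.UserId}/notes/{note.Id}.json", note);
        await PutJsonAsync($"{note.UserId}/summaries/{note.Id}.json",
            new NoteSummary { NoteId = note.Id, Title = note.Title });
    }

    public async Task<Note?> GetNoteAsync(string userId, string noteId)
    {
        using var response = await _s3.GetObjectAsync(BucketName, $"{userId}/notes/{noteId}.json");
        return await JsonSerializer.DeserializeAsync<Note>(response.ResponseStream);
    }

    public async Task<IEnumerable<NoteSummary>> ListNotesAsync(string userId)
    {
        var listing = await _s3.ListObjectsV2Async(new ListObjectsV2Request
        {
            BucketName = BucketName,
            Prefix = $"{userId}/summaries/"
        });

        var summaries = new List<NoteSummary>();
        foreach (var obj in listing.S3Objects)
        {
            using var response = await _s3.GetObjectAsync(BucketName, obj.Key);
            var summary = await JsonSerializer.DeserializeAsync<NoteSummary>(response.ResponseStream);
            if (summary is not null) summaries.Add(summary);
        }
        return summaries;
    }

    public async Task DeleteNoteAsync(string userId, string noteId)
    {
        await _s3.DeleteObjectAsync(BucketName, $"{userId}/notes/{noteId}.json");
        await _s3.DeleteObjectAsync(BucketName, $"{userId}/summaries/{noteId}.json");
    }

    private Task PutJsonAsync<T>(string key, T value) =>
        _s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = BucketName,
            Key = key,
            ContentBody = JsonSerializer.Serialize(value)
        });
}
```

The Program.cs registration line mentioned above is presumably something along the lines of builder.Services.AddScoped<INotesStorageService, S3NoteStorageService>(); alongside the builder.Services.AddAWSService<IAmazonS3>(); call (from AWSSDK.Extensions.NETCore.Setup) that wires up the IAmazonS3 client.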
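Similarly, here is a sketch of the NotesController described above, assuming conventional ASP.NET Core attribute routing and the hard-coded user mentioned earlier; the route and action names are hypothetical.

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class NotesController : ControllerBase
{
    private const string UserId = "demo-user"; // hard-coded for now, as noted in the post
    private readonly INotesStorageService _storage;

    // Constructor injection of the storage service registered in Program.cs
    public NotesController(INotesStorageService storage) => _storage = storage;

    [HttpGet]
    public async Task<IActionResult> ListNotes() =>
        Ok(await _storage.ListNotesAsync(UserId));

    [HttpGet("{noteId}")]
    public async Task<IActionResult> GetNote(string noteId)
    {
        var note = await _storage.GetNoteAsync(UserId, noteId);
        if (note is null) return NotFound();
        return Ok(note);
    }

    [HttpPost]
    public async Task<IActionResult> AddNote([FromBody] Note note)
    {
        await _storage.SaveNoteAsync(note);
        return CreatedAtAction(nameof(GetNote), new { noteId = note.Id }, note);
    }

    [HttpDelete("{noteId}")]
    public async Task<IActionResult> DeleteNote(string noteId)
    {
        await _storage.DeleteNoteAsync(UserId, noteId);
        return NoContent();
    }
}
```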
Delete Note
In a similar way, we can test the delete operation by providing a NoteId.

Get Note
Here is the API call to get a single note's details by providing a NoteId. You can check the full source code in this Git repository.

Summary
In this post, we covered the .NET application code for our notes application and how it uses the AWS SDK to interact with S3. The SDK simplifies our code by providing abstractions that are easy to use from application code. We built a very simple REST API that allows us to perform CRUD operations on our domain model. Our application can now benefit from all the great features of the S3 storage service, and we can easily integrate it with other AWS services for more advanced use cases. Let me know if you have any comments or questions. Till next time, Happy Coding.
Application Programming Interfaces (APIs) are the backbone of modern software development and are now vital strategic assets for large enterprises. However, with increasing API proliferation and subsequent sprawl, APIs can also pose significant security risks for enterprises. Shadow or zombie APIs running in production that your teams have forgotten about leave you vulnerable to attack. There's growing discussion of the importance of shifting left to incorporate a design-time API security strategy, but how do you minimize the risk of the APIs already lurking in your IT infrastructure?

API Security Attacks Are Exponentially Rising
API security attacks are a growing issue for enterprises, and breaches are often only discovered after the fact. Take T-Mobile's high-profile API breach: in the 41 days it took the security team to catch and resolve it, 37 million records of personally identifiable information (PII) had been seized. Even without any immediate fines (unlike a 2021 breach they've recently had to pay $350 million for), such security issues cause huge dents in customer trust, something large enterprises have worked hard to build and maintain. More alarming still, unlike T-Mobile's breach, 78% of API attacks now come from authenticated users, and overall attacks have grown by over 400% in the last six months. Hackers are also specifically targeting publicly available repos and instances, such as a recently observed spike in attacks on public Postman collections. So what's at fault here? Hint: it's not bad security teams.

Shadow and Zombie APIs Are Low-Hanging Fruit for Attackers
Shadow and zombie APIs are a primary cause, as they're easy targets for attackers, and both are growing in prevalence. Shadow APIs exist within your IT infrastructure but are not under the control of any IT or API team. These APIs are often created as one-off integrations by developers without any centralized knowledge or permission, and almost none of them are documented or managed properly. Zombie APIs, on the other hand, are APIs that have been retired or abandoned but are still active and accessible. These APIs can create a significant security threat because they are no longer maintained, updated, or monitored, making them vulnerable to attacks. The rise in developer productivity without effective API governance has worsened API sprawl and, with it, shadow and zombie APIs. And with APIs already deemed low-hanging fruit for attackers, it's safe to say these two types of lurking assets are the easiest fruit of them all. This is especially true for large enterprises, which are now working with thousands of APIs (of different types and categories) across disparate and distributed runtime environments.

Practical Steps to Minimize Risk
I'm sure you've read endless information telling you to take API security measures such as:
- Regular auditing
- Security education and training for developers and teams
- Implementing robust security policies
While these are all important and should not be overlooked, I want to provide an actionable example of how to identify and tackle zombie and shadow APIs.

1. Catalog and Identify
You first need to catalog all APIs across all gateways and runtimes in one place. This should be done in a way that's abstracted from the runtimes, so you can unify them into one view and assess governance compliance and conformance, yet still connected to them, so you can rationalize against consumption metrics.
This will help you quickly identify the following (the short sketch at the end of this article illustrates this kind of check):
- Shadow APIs: these will be failing your governance checks.
- Zombie APIs: these will have no consumers.

2. Make Shadow APIs Conformant, Compliant, and Secure
API governance still gets a bad rep with developers, but modern governance tactics can be used to quickly tackle obvious shadow APIs. Using automated, flexible, and democratized API governance, the APIs that are failing governance checks can be quickly rectified and secured without developers having to spend hours manually manipulating code against a rigid, centralized style guide. Any APIs identified during these checks as no longer needed can be quickly and safely retired.

3. Deprecate Potential Zombie APIs
Using consumption metrics from across your runtimes (ideally unified into the same catalog), APIs with no active consumers can be regularly deprecated from your runtimes after x days (however many you choose). This ensures zombie APIs are not left lurking around and increasingly forgotten about.

Bonus: Do So Without Impacting Your API Consumers!
Ensuring a great consumer experience for your APIs is just as important. Your internal consumers, partners, and/or customers might need the exact API you've just identified as a zombie and deprecated. The goal state here is to have your catalog directly connected to your consumer portals, developer portals, and marketplaces so that authenticated consumers can still see, request, and have these APIs redeployed. You're minimizing security risks without impacting your consumers' experience.

In conclusion, shadow and zombie APIs, the result of ungoverned and unmanaged API proliferation, pose a significant security threat to enterprises. Organizations therefore need to be proactive in their approach to API security by implementing appropriate measures and best practices to mitigate the risks. Of course, choosing the right tooling to deliver these measures in a frictionless way is vital too. By doing so, organizations can ensure that their APIs are secure and their sensitive data is protected.
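To make the catalog-and-identify step concrete, here is a small, purely illustrative C# sketch of the kind of check described above. The CatalogEntry shape, the field names, and the 90-day threshold are hypothetical; in practice this data would come from your unified API catalog and gateway consumption metrics.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Purely illustrative: flag shadow and zombie candidates from a unified API catalog.
public record CatalogEntry(string Name, bool PassesGovernanceChecks, DateTime? LastConsumedUtc);

public static class ApiRiskReport
{
    public static void Print(IEnumerable<CatalogEntry> catalog, int zombieThresholdDays = 90)
    {
        var cutoff = DateTime.UtcNow.AddDays(-zombieThresholdDays);

        // Shadow candidates: registered in a runtime but failing governance checks.
        var shadows = catalog.Where(e => !e.PassesGovernanceChecks);

        // Zombie candidates: no recorded consumption within the threshold window.
        var zombies = catalog.Where(e => e.LastConsumedUtc is null || e.LastConsumedUtc < cutoff);

        Console.WriteLine("Shadow API candidates: " + string.Join(", ", shadows.Select(e => e.Name)));
        Console.WriteLine("Zombie API candidates: " + string.Join(", ", zombies.Select(e => e.Name)));
    }
}
```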
This is an article from DZone's 2023 Software Integration Trend Report.For more: Read the Report In recent years, the rise of microservices has drastically changed the way we build and deploy software. The most important aspect of this shift has been the move from traditional API architectures driven by monolithic applications to containerized microservices. This shift not only improved the scalability and flexibility of our systems, but it has also given rise to new ways of software development and deployment approaches. In this article, we will explore the path from APIs to containers and examine how microservices have paved the way for enhanced API development and software integration. The Two API Perspectives: Consumer and Provider The inherent purpose of building an API is to exchange information. Therefore, APIs require two parties: consumers and providers of the information. However, both have completely different views. For an API consumer, an API is nothing more than an interface definition and a URL. It does not matter to the consumer whether the URL is pointing to a mainframe system or a tiny IoT device hosted on the edge. Their main concern is ease of use, reliability, and security. An API provider, on the other hand, is more focused on the scalability, maintainability, and monetization aspects of an API. They also need to be acutely aware of the infrastructure behind the API interface. This is the place where APIs actually live, and it can have a lot of impact on their overall behavior. For example, an API serving millions of consumers would have drastically different infrastructure requirements when compared to a single-consumer API. The success of an API offering often depends on how well it performs in a production-like environment with real users. With the explosion of the internet and the rise of always-online applications like Netflix, Amazon, Uber, and so on, API providers had to find ways to meet the increasing demand. They could not rely on large monolithic systems that were difficult to change and scale up as and when needed. This increased focus on scalability and maintainability, which led to the rise of microservices architecture. The Rise of Microservices Architecture Microservices are not a completely new concept. They have been around for many years under various names, but the official term was actually coined by a group of software architects at a workshop near Venice in 2011/2012. The goal of microservices has always been to make a system flexible and maintainable. This is an extremely desirable target for API providers and led to the widespread adoption of microservices architecture styles across a wide variety of applications. The adoption of microservices to build and deliver APIs addressed several challenges by providing important advantages: Since microservices are developed and deployed independently, they allow developers to work on different parts of the API in parallel. This reduces the time to market for new features. Microservices can be scaled up or down to meet the varying demands of specific API offerings. This helps to improve resource use and cost savings. There is a much better distribution of API ownership as different teams can focus on different sets of microservices. By breaking down an API into smaller and more manageable services, it becomes theoretically easier to manage outages and downtimes. This is because one service going down does not mean the entire application goes down. 
The API consumers also benefit due to the microservices-based APIs. In general, consumer applications can model better interactions by integrating a bunch of smaller services rather than interfacing with a giant monolith. Figure 1: APIs perspectives for consumer and provider Since each microservice has a smaller scope when compared to a monolith, there is less impact on the client application in case of changes to the API endpoints. Moreover, testing for individual interactions becomes much easier. Ultimately, the rise of microservices enhanced the API-development landscape. Building an API was no longer a complicated affair. In fact, APIs became the de facto method of communication between different systems. Nonetheless, despite the huge number of benefits provided by microservices-based APIs, they also brought some initial challenges in terms of deployments and managing dependencies. Streamlining Microservices Deployment With Containers The twin challenges of deployment and managing dependencies in a microservices architecture led to the rise in container technologies. Over the years, containers have become increasingly popular, particularly in the context of microservices. With containers, we can easily package the software with its dependencies and configuration parameters in a container image and deploy it on a platform. This makes it trivial to manage and isolate dependencies in a microservices-based application. Containers can be deployed in parallel, and each deployment is predictable since everything that is needed by an application is present within the container image. Also, containers make it easier to scale and load balance resources, further boosting the scalability of microservices and APIs. Figure 2 showcases the evolution from monolithic to containerized microservices: Figure 2: Evolution of APIs from monolithic to containerized microservices Due to the rapid advancement in cloud computing, container technologies and orchestration frameworks are now natively available on almost all cloud platforms. In a way, the growing need for microservices and APIs boosted the use of containers to deploy them in a scalable manner. The Future of Microservices and APIs Although APIs and microservices have been around for numerous years, they have yet to reach their full potential. Both are going to evolve together in this decade, leading to some significant trends. One of the major trends is around API governance. Proper API governance is essential to make your APIs discoverable, reusable, secure, and consistent. In this regard, OpenAPI, a language-agnostic interface to RESTful APIs, has more or less become the prominent and standard way of documenting APIs. It can be used by both humans and machines to discover and understand an API's capabilities without access to the source code. Another important trend is the growth in API-powered capabilities in the fields of NLP, image recognition, sentiment analysis, predictive analysis, chatbot APIs, and so on. With the increased sophistication of models, this trend is only going to grow stronger, and we will see many more applications of APIs in the coming years. The rise of tools like ChatGPT and Google Bard shows that we are only at the beginning of this journey. A third trend is the increased use of API-driven DevOps for deploying microservices. With the rise of cloud computing and DevOps, managing infrastructure is an extremely important topic in most organizations. 
API-driven DevOps is a key enabler for Infrastructure as Code tools to provision infrastructure and deploy microservices. Under the covers, these tools rely on APIs exposed by the platforms. Apart from major ones, there are also other important trends when it comes to the future of microservices and APIs: There is a growing role of API enablement on the edge networks to power millions of IoT devices. API security practices have become more important than ever in a world of unprecedented integrations and security threats. API ecosystems are expanding as more companies develop a suite of APIs that can be used in a variety of situations to build applications. Think of API suites like Google Maps API. There is an increased use of API gateways and service meshes to improve reliability, observability, and security of microservices-based systems. Conclusion The transition from traditional APIs delivered via monolithic applications to microservices running on containers has opened up a world of possibilities for organizations. The change has enabled developers to build and deploy software faster and more reliably without compromising on the scalability aspects. They have made it possible to build extremely complex applications and operate them at an unprecedented scale. Developers and architects working in this space should first focus on the key API trends such as governance and security. However, as these things become more reliable, they should explore cutting-edge areas such as API usage in the field of artificial intelligence and DevOps. This will keep them abreast with the latest innovations. Despite the maturity of the API and microservices ecosystem, there is a lot of growth potential in this area. With more advanced capabilities coming up every day and DevOps practices making it easier to manage the underlying infrastructure, the future of APIs and microservices looks bright. References: "A Brief History of Microservices" by Keith D. Foote "The Future of APIs: 7 Trends You Need to Know" by Linus Håkansson "Why Amazon, Netflix, and Uber Prefer Microservices over Monoliths" by Nigel Pereira "Google Announces ChatGPT Rival Bard, With Wider Availability in 'Coming Weeks'" by James Vincent "Best Practices in API Governance" by Janet Wagner "APIs Impact on DevOps: Exploring APIs Continuous Evolution," xMatters Blog This is an article from DZone's 2023 Software Integration Trend Report.For more: Read the Report
This is an article from DZone's 2023 Software Integration Trend Report.For more: Read the Report As organizations look to enable integration, innovation, and digital experiences through their IT teams, they often build APIs and expose them by leveraging a full-lifecycle API management system. Historically, these API management systems provided tooling such as: Defining an API (e.g., Swagger, OpenAPI Spec, RAML) API testing API scaffolding and implementation Specifications for quota and usage policies/plans Documentation An API portal These API management systems were often delivered as a fully integrated stack with a fancy UI, role-based access control, and push-button mechanisms to accomplish the lifecycle management functions. While this all sounds very nice, there are some realities we face as organizations look to modernize their application and API delivery engines. An API management platform does not exist in a vacuum. DevOps philosophies have influenced organizational structures, automation, and self-service. Any API management system must fit within a modern development environment that is often multi-language, multi-platform, and multi-cloud. This infrastructure must also fit natively with Git-based deployment workflows (GitOps), including systems built for CI/CD. Avoid Yet Another Silo (YAS) Although developer productivity can be difficult to measure, proxy metrics that can be useful include things like the following: Lead time to make code changes in production Number of deployments to production per week Traditionally, developers write code, create services, build APIs, and then hand them off to operations to deploy and operate those services and APIs. The silos between development, infrastructure, security, and network teams often leads to complex synchronization points, handoffs, and a lot of waiting. This slows down code changes and deployments to production. Figure 1: Siloed handoffs between teams cause a slowdown in delivery to production Large monolithic software systems can further this problem by forcing their own silos within each of the organizational silos. They have their own proprietary UIs, require specialized skills or privilege to operate, and are often owned by specific teams. If you need something from the large monolithic software system, you typically need to open a ticket to signal to the team who owns the system that they need to make a change. In practice, traditional full lifecycle API management systems create silos by forcing users into an all-or-nothing set of tools for defining, implementing, testing, and exposing APIs even if these differ from what a development team wants to use. These systems are very difficult to automate and integrate with other parts of the software delivery systems, and they are usually guarded by some API management team that is responsible for configuring and deploying APIs. This centralization from both a technology and organizational standpoint creates bottlenecks that slow down delivery in a modern DevOps-minded organization. Favor Automation Over Point-and-Click UIs Most traditional full lifecycle API management systems do have some role-centric capabilities, like role-based UIs and tools for specific personas. One principle prevalent in modern DevOps implementations is around eliminating manual or repetitive tasks using automation. We cannot expect users to log into a system that runs tests, a totally different system to manage APIs, and yet another system to do a deployment. 
Figure 2: We should reduce multiple, manual, point-and-click UIs in favor of automation Ideally, we would automate a lot of these steps so a developer can go to a single self-service UI for anything related to software development and deployment. Any functionality we would like, including traditional API management and each of its "full lifecycle" functionalities, should be automatable. With a lot of the functionality in modern API management locked into proprietary UIs, automation is often very challenging and brittle, if accomplished at all. The API Lifecycle Is The Software Development Lifecycle The API lifecycle is often centered around design, implementation, testing, control, and consumption. Does this sound familiar? It should — because it's exactly what we do with any software we write. When developers create APIs, they use software to do so. The API lifecycle is the software development lifecycle. Trying to treat the lifecycle of APIs differently from the rest of our software development practices creates inconsistencies, fragmentation, and friction. For example, when we create an API, we may need to develop it, test it, and will probably eventually need to notify users when we need to retire it. We need the same capabilities for internal services, libraries, and other system components. Although there may be some slight differences, should these be separate and different processes? Should these be completely different sets of tools? Trying to duplicate what is already necessary for the software development lifecycle with substandard and proprietary tools specific for API management causes adoption, governance, and bifurcation issues. Use an Internal Developer Platform As organizations attempt to improve developer productivity by shifting left and giving developers more responsibility and control over building and running their services and APIs, we've seen an emergence in platform teams responsible for building workflows and toolchains that enable self-service. These workflows get boiled down to "golden paths" that developers can easily follow and that automate a lot of the tasks around bootstrapping new projects, documenting their software, enforcing access/security policies, and controlling deployment rollouts. This developer-focused self-service platform is known as an Internal Developer Platform (IDP) and aims to cover the operational necessities of the entire lifecycle of a service. Although many teams have built their own platforms, there are some good open-source frameworks that go a long way to building an IDP. For example, Backstage is a popular open-source project used to build IDPs. Platform engineering teams typically have a lot of flexibility picking the best of breed tools for developers that support multiple types of languages and developer frameworks. Plus, these tools can be composed through automation and don't rely on proprietary vendor UIs. Platform engineering teams also typically build their platform around container technology that can be used across multiple clusters and stretch into on-premises deployments as well as the public cloud. These IDPs insulate from vendor lock-in whether that's a particular public cloud or vendor. For example, here's a very common scenario that I've run into numerous times: An organization bought into a full-lifecycle API management vendor and finds itself in a situation where their modernization efforts are centered around containers and Kubernetes, GitOps, and CI/CD. 
They find the API management vendor may have strong tools around API design; however, the runtime execution, API portal, and analytics features are lagging, outdated, or cannot be automated with the rest of the container platform via GitOps. They often wish to use a different API gateway technology based on more modern open-source proxies like Envoy Proxy but are locked into a tightly integrated yet outdated gateway technology from their current vendor. Instead, these organizations should opt to use newer proxy technologies, select more developer-friendly API testing tools, tie API analytics into their existing streaming and analytics efforts, and rely on tools like Backstage to tie all of this together. Doing so, they would reduce silos centered around vendor products, leverage best-of-breed tools, and automate these tools in a way that preserves governance and prescribed guardrails. These platforms can then support complex deployment strategies like multi-cluster, hybrid, and multi-cloud deployments.

Conclusion
Managing APIs will continue to be an important aspect of software development, but it doesn't happen in a vacuum. Large monolithic full lifecycle API management stacks are outdated, don't fit in with modern development practices, and cause silos when we are trying to break down silos. Choosing best-of-breed tools for API development and policy management allows us to build a powerful software development platform (an IDP) that improves developer productivity, reduces lock-in, and allows organizations to deploy APIs and services across containers and cloud infrastructure, whether on-premises or in any public cloud. This is an article from DZone's 2023 Software Integration Trend Report. For more: Read the Report
This is an article from DZone's 2023 Software Integration Trend Report.For more: Read the Report A microservices architecture is an established pattern for building a complex system that consists of loosely coupled modules. It is one of the most talked-about software architecture trends in the last few years. It seems to be a surprisingly simple idea to break a large, interdependent system into many small, lightweight modules that can make software management easier. Here's the catch: After you have broken down your monolith application into small modules, how are you supposed to connect them together in a meaningful way? Unfortunately, there is no single right answer to this question, but as is so often the case, there are a few approaches that depend on the application and the individual use case. Two common protocols used in microservices are HTTP request/response with resource APIs and lightweight asynchronous messaging when communicating updates across several microservices. Let's explore these protocols. Types of Communication Microservices can communicate through many different modes of communication, each one targeting a different use case. These types of communications can be primarily classified in two dimensions. The first dimension defines if the communication protocol is synchronous or asynchronous: SYNCHRONOUS vs. ASYNCHRONOUS COMMUNICATION Synchronous Asynchronous Communication pattern The client sends a request and waits for a response from the server. Communication is not in sync, which means it does not happen in real time. Protocols HTTP/HTTPS AMQP, MQTT Coupling The client code can only continue its task further when it receives the server response. In the context of distributed messaging, coupling implies that request processing will occur at an arbitrary point in time. Failure isolation It requires the downstream server to be available or the request fails. If the consumer fails, the sender can still send messages. The messages will be picked up when the consumer recovers. Table 1 The second dimension defines if the communication has a single receiver or multiple receivers: COMMUNICATION VIA SINGLE vs. MULTIPLE RECEIVERS Single Receiver Multiple Receivers Communication pattern It implies that there is point-to-point communication that delivers a message to exactly one consumer that is reading from the channel, and that the message is processed only once. Communication from the sender is available to multiple receivers. Example It is well-suited for sending asynchronous commands from one microservice to another. The publish/subscribe mechanism is where a publisher publishes a message to a channel and the channel can be subscribed by multiple subscribers/receivers to receive the message asynchronously. Table 2 The most common type of communication between microservices is single-receiver communication with a synchronous protocol like HTTP/HTTPS when invoking a REST API. Microservices typically use messaging protocols for asynchronous communication between microservices. This asynchronous communication may involve a single receiver or multiple receivers depending on the application's needs. Representational State Transfer Representational state transfer (REST) is a popular architectural style for request and response communication, and it can serve as a good example for the synchronous communication type. This is based on the HTTP protocol, embracing verbs such as GET, POST, PUT, DELETE, etc. In this communication pattern, the caller waits for a response from the server. 
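As a minimal illustration of this synchronous, single-receiver style, the sketch below calls a hypothetical REST endpoint with HttpClient; the caller cannot continue its task until the server's response arrives (or the request fails). The endpoint URL and DTO are assumptions used only to show the pattern.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical DTO and URL used only to illustrate the request/response pattern.
public record OrderDto(string Id, decimal Total);

public class OrderApiClient
{
    private readonly HttpClient _http = new() { BaseAddress = new Uri("https://orders.example.com") };

    public async Task<OrderDto?> GetOrderAsync(string orderId)
    {
        // The caller waits here until the downstream service responds (or the request times out).
        var response = await _http.GetAsync($"/api/orders/{orderId}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<OrderDto>();
    }
}
```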
Figure 1: REST API-based communication REST is the most commonly used architectural style for communication between services, but heavy reliance on this type of communication has some negative consequences when it comes to a microservices architecture: Multiple round trips (latency) – The client often needs to execute multiple trips to the server to fetch all the data the client requires. Each endpoint specifies a fixed amount of data, and in many cases, that data is only a subset of what a client needs to populate their page. Blocking – When invoking a REST API, the client is blocked and is waiting for a server response. This may hurt application performance if the application thread is processing other concurrent requests. Tight coupling – The client and server need to know about each other. It increases complexity over time and reduces portability. Messaging Messaging is widely used in a microservices architecture, which follows the asynchronous protocol. In this pattern, a service sends a message without waiting for a response, and one or more services process the message asynchronously. Asynchronous messaging provides many benefits but also brings challenges such as idempotency, message ordering, poison message handling, and complexity of message broker, which must be highly available. It is important to note the difference between asynchronous I/O and the asynchronous protocol. Asynchronous I/O means that the calling thread is not blocked while the I/O operations are executed. This is an implementation detail in terms of the software design. The asynchronous protocol means the sender does not wait for a response. Figure 2: Messaging-based communication Asynchronous messaging has some advantages over synchronous messaging: Loose coupling – The message producer does not need to know about the consumer(s). Multiple subscribers – Using a publisher/subscriber (pub/sub) model, multiple consumers can subscribe to receive events. Resiliency or failure isolation – If the consumer fails, the producer can still send messages. The messages will be picked up when the consumer recovers from failure. This is especially useful in a microservices architecture because each microservice has its own lifecycle. Non-blocking – The producers and consumers can send and process messages at their own pace. Though asynchronous messaging has many advantages, it comes with some tradeoffs: Tight coupling with the messaging infrastructure – Using a particular vendor/messaging infrastructure may cause tight coupling with that infrastructure. It may become difficult to switch to another vendor/messaging infrastructure later. Complexity – Handling asynchronous messaging may not be as easy as designing a REST API. Duplicate messages must be handled by de-duplicating or making the operations idempotent. It is hard to implement request-response semantics using asynchronous messaging. To send a response, another queue and a way to correlate request and response messages are both needed. Debugging can also be difficult as it is hard to identify which request in Service A caused the wrong behavior in Service B. Asynchronous messaging has matured into a number of messaging patterns. These patterns apply to scenarios when several parts of a distributed system must communicate with one another in a dependable and scalable way. Let's take a look at some of these patterns. Pub/Sub Pattern The pub/sub pattern implies that a publisher sends a message to a channel on a message broker. 
One or more subscribers subscribe to the channel and receive messages from the channel in an asynchronous manner. This pattern is useful when a microservice needs to broadcast information to a significant number of consumers. Figure 3: Pub/sub pattern The pub/sub pattern has the following advantages: It decouples publishers and subscribers that need to communicate. Publishers and subscribers can be managed independently, and messages can be managed even if one or more subscribers are offline. It increases scalability and improves responsiveness of the publisher. The publisher can quickly publish a message to the input channel, then return to its core processing responsibilities. The messaging infrastructure is responsible for ensuring messages are delivered to interested subscribers. It provides separation of concerns for microservices. Each microservice can focus on its core responsibilities, while the message broker handles everything required to reliably route messages to multiple subscribers. There are a few disadvantages of using this pattern: The pub/sub pattern introduces high semantic coupling in the messages passed by the publishers to the subscribers. Once the structure of the data is established, it is often difficult to change. To change the message structure, all subscribers must be altered to accept the changed format. This can be difficult or impossible if the subscribers are external. Another drawback of the pub/sub pattern is that it is difficult to gauge the health of subscribers. The publisher does not have knowledge of the health status of the systems listening to the messages. As a pub/sub system scales, the broker often becomes a bottleneck for message flow. Load surges can slow down the pub/sub system, and subscribers can get a spike in response time. Queue-Based Pattern In the queue-based pattern, a sender posts a message to a queue containing the data required by the receiver. The queue acts as a buffer, storing the message until it is retrieved by the receiver. The receiver retrieves messages from the queue and processes them at its own pace. This pattern is useful for any application that uses services that are subject to overloading. Figure 4: Queue-based pattern The queue-based pattern has the following advantages: It can help maximize scalability because both the number of queues and the number of services can be scaled to meet demand. It can help maximize availability. Delays arising in the producer or consumer won't have an immediate or direct impact on the services, which can continue to post messages to the queue even when the consumer isn't available or is under heavy load to process messages. There are some disadvantages of using this pattern: When a consumer receives a message from the queue, the message is no longer available in the queue. If a consumer fails to process the message, the message is lost and may need a rollback in the consumer. Message queues do not come out of the box. We need to create, configure, and monitor them. It can cause operational complexity when systems are scaled up. Keys To Streamlined Messaging Infrastructure Asynchronous communication is usually managed through a message broker. 
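Before looking at how to choose that infrastructure, here is a small C# sketch of the queue-based pattern described above. It uses an in-process System.Threading.Channels queue as a stand-in for a real broker; with an actual message broker the producer and consumer would be separate services, but the buffering and pacing idea is the same.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public record WorkItem(string Id);

public static class QueueBasedDemo
{
    public static async Task RunAsync()
    {
        // The channel plays the role of the queue/buffer between sender and receiver.
        var queue = Channel.CreateUnbounded<WorkItem>();

        // Producer: posts messages without waiting for them to be processed.
        var producer = Task.Run(async () =>
        {
            for (var i = 0; i < 5; i++)
            {
                await queue.Writer.WriteAsync(new WorkItem($"item-{i}"));
                Console.WriteLine($"Published item-{i}");
            }
            queue.Writer.Complete();
        });

        // Consumer: retrieves and processes messages from the queue at its own pace.
        var consumer = Task.Run(async () =>
        {
            await foreach (var message in queue.Reader.ReadAllAsync())
            {
                await Task.Delay(100); // simulate work
                Console.WriteLine($"Processed {message.Id}");
            }
        });

        await Task.WhenAll(producer, consumer);
    }
}
```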
There are some factors to consider when choosing the right messaging infrastructure for asynchronous communication: Scalability – the ability to scale automatically when there is a load surge on the message broker Data persistency – the ability to recover messages in case of reboot/failure Consumer capability – whether the broker can manage one-to-one and/or one-to-many consumers Monitoring – whether monitoring capabilities are available Push and pull queue – the ability to handle push and pull delivery by message queues Security – proper authentication and authorization for messaging queues and topics Automatic failover – the ability to connect to a failover broker automatically when one broker fails without impacting publisher/consumer Conclusion More and more, microservices are becoming the de facto approach for designing scalable and resilient systems. There is no single approach for all communications between microservices. While RESTful APIs provide a request-response model to communicate between services, asynchronous messaging offers a more scalable producer-consumer relationship between different services. And although microservices can communicate with each other via both messaging and REST APIs, messaging architectures are ideal for improving agility and moving quickly. They are commonly found in modern applications that use microservices or any application that has decoupled components. When it comes to choosing a right style of communication for your microservices, be sure to match the needs of the consumer with one or more communication types to offer a robust interface for your services. This is an article from DZone's 2023 Software Integration Trend Report.For more: Read the Report
This is an article from DZone's 2023 Software Integration Trend Report.For more: Read the Report Our approach to scalability has gone through a tectonic shift over the past decade. Technologies that were staples in every enterprise back end (e.g., IIOP) have vanished completely with a shift to approaches such as eventual consistency. This shift introduced some complexities with the benefit of greater scalability. The rise of Kubernetes and serverless further cemented this approach: spinning a new container is cheap, turning scalability into a relatively simple problem. Orchestration changed our approach to scalability and facilitated the growth of microservices and observability, two key tools in modern scaling. Horizontal to Vertical Scaling The rise of Kubernetes correlates with the microservices trend as seen in Figure 1. Kubernetes heavily emphasizes horizontal scaling in which replications of servers provide scaling as opposed to vertical scaling in which we derive performance and throughput from a single host (many machines vs. few powerful machines). Figure 1: Google Trends chart showing correlation between Kubernetes and microservice (Data source: Google Trends ) In order to maximize horizontal scaling, companies focus on the idempotency and statelessness of their services. This is easier to accomplish with smaller isolated services, but the complexity shifts in two directions: Ops – Managing the complex relations between multiple disconnected services Dev – Quality, uniformity, and consistency become an issue. Complexity doesn't go away because of a switch to horizontal scaling. It shifts to a distinct form handled by a different team, such as network complexity instead of object graph complexity. The consensus of starting with a monolith isn't just about the ease of programming. Horizontal scaling is deceptively simple thanks to Kubernetes and serverless. However, this masks a level of complexity that is often harder to gauge for smaller projects. Scaling is a process, not a single operation; processes take time and require a team. A good analogy is physical traffic: we often reach a slow junction and wonder why the city didn't build an overpass. The reason could be that this will ease the jam in the current junction, but it might create a much bigger traffic jam down the road. The same is true for scaling a system — all of our planning might make matters worse, meaning that a faster server can overload a node in another system. Scalability is not performance! Scalability vs. Performance Scalability and performance can be closely related, in which case improving one can also improve the other. However, in other cases, there may be trade-offs between scalability and performance. For example, a system optimized for performance may be less scalable because it may require more resources to handle additional users or requests. Meanwhile, a system optimized for scalability may sacrifice some performance to ensure that it can handle a growing workload. To strike a balance between scalability and performance, it's essential to understand the requirements of the system and the expected workload. For example, if we expect a system to have a few users, performance may be more critical than scalability. However, if we expect a rapidly growing user base, scalability may be more important than performance. We see this expressed perfectly with the trend towards horizontal scaling. 
Modern Kubernetes systems usually focus on many small VM images with a limited number of cores as opposed to powerful machines/VMs. A system focused on performance would deliver better performance using few high-performance machines. Challenges of Horizontal Scale Horizontal scaling brought with it a unique level of problems that birthed new fields in our industry: platform engineers and SREs are prime examples. The complexity of maintaining a system with thousands of concurrent server processes is fantastic. Such a scale makes it much harder to debug and isolate issues. The asynchronous nature of these systems exacerbates this problem. Eventual consistency creates situations we can't realistically replicate locally, as we see in Figure 2. When a change needs to occur on multiple microservices, they create an inconsistent state, which can lead to invalid states. Figure 2: Inconsistent state may exist between wide-sweeping changes Typical solutions used for debugging dozens of instances don't apply when we have thousands of instances running concurrently. Failure is inevitable, and at these scales, it usually amounts to restarting an instance. On the surface, orchestration solved the problem, but the overhead and resulting edge cases make fixing such problems even harder. Strategies for Success We can answer such challenges with a combination of approaches and tools. There is no "one size fits all," and it is important to practice agility when dealing with scaling issues. We need to measure the impact of every decision and tool, then form decisions based on the results. Observability serves a crucial role in measuring success. In the world of microservices, there's no way to measure the success of scaling without such tooling. Observability tools also serve as a benchmark to pinpoint scalability bottlenecks, as we will cover soon enough. Vertically Integrated Teams Over the years, developers tended to silo themselves based on expertise, and as a result, we formed teams to suit these processes. This is problematic. An engineer making a decision that might affect resource consumption or might impact such a tradeoff needs to be educated about the production environment. When building a small system, we can afford to ignore such issues. Although as scale grows, we need to have a heterogeneous team that can advise on such matters. By assembling a full-stack team that is feature-driven and small, the team can handle all the different tasks required. However, this isn't a balanced team. Typically, a DevOps engineer will work with multiple teams simply because there are far more developers than DevOps. This is logistically challenging, but the division of work makes more sense in this way. As a particular microservice fails, responsibilities are clear, and the team can respond swiftly. Fail-Fast One of the biggest pitfalls to scalability is the fail-safe approach. Code might fail subtly and run in non-optimal form. A good example is code that tries to read a response from a website. In a case of failure, we might return cached data to facilitate a failsafe strategy. However, since the delay happens, we still wait for the response. It seems like everything is working correctly with the cache, but the performance is still at the timeout boundaries. This delays the processing. With asynchronous code, this is hard to notice and doesn't put an immediate toll on the system. Thus, such issues can go unnoticed. 
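The contrast between this fail-safe behavior and failing fast can be shown with a short C# sketch; the endpoint, timeout values, and cached fallback are hypothetical. The fail-safe version quietly waits out the full client timeout before serving cached data, so every call runs at timeout speed, while the fail-fast version bounds the wait and surfaces the slow dependency immediately.

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class QuoteClient
{
    private readonly HttpClient _http = new() { Timeout = TimeSpan.FromSeconds(30) };
    private readonly string _cachedQuote = "cached quote"; // hypothetical fallback data

    // Fail-safe: on any failure we serve the cache, but only after waiting out the full
    // 30-second timeout -- the system "works", yet every call runs at timeout speed.
    public async Task<string> GetQuoteFailSafeAsync()
    {
        try
        {
            return await _http.GetStringAsync("https://quotes.example.com/today");
        }
        catch (Exception)
        {
            return _cachedQuote;
        }
    }

    // Fail-fast: bound the wait explicitly so a slow dependency is detected right away
    // and the fallback (or an error) is triggered quickly instead of silently degrading.
    public async Task<string> GetQuoteFailFastAsync()
    {
        using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(500));
        try
        {
            return await _http.GetStringAsync("https://quotes.example.com/today", cts.Token);
        }
        catch (OperationCanceledException)
        {
            throw new TimeoutException("Quote service exceeded 500 ms; failing fast.");
        }
    }
}
```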
A request might succeed in the testing and staging environment, but it might always fall back to the fail-safe process in production. Failing fast includes several advantages for these scenarios: It makes bugs easier to spot in the testing phase. Failure is relatively easy to test as opposed to durability. A failure will trigger fallback behavior faster and prevent a cascading effect. Problems are easier to fix as they are usually in the same isolated area as the failure. API Gateway and Caching Internal APIs can leverage an API gateway to provide smart load balancing, caching, and rate limiting. Typically, caching is the most universal performance tip one can give. But when it comes to scale, failing fast might be even more important. In typical cases of heavy load, the division of users is stark. By limiting the heaviest users, we can dramatically shift the load on the system. Distributed caching is one of the hardest problems in programming. Implementing a caching policy over microservices is impractical; we need to cache an individual service and use the API gateway to alleviate some of the overhead. Level 2 caching is used to store database data in RAM and avoid DB access. This is often a major performance benefit that tips the scales, but sometimes it doesn't have an impact at all. Stack Overflow recently discovered that database caching had no impact on their architecture, and this was because higher-level caches filled in the gaps and grabbed all the cache hits at the web layer. By the time a call reached the database layer, it was clear this data wasn't in cache. Thus, they always missed the cache, and it had no impact. Only overhead. This is where caching in the API gateway layer becomes immensely helpful. This is a system we can manage centrally and control, unlike the caching in an individual service that might get polluted. Observability What we can't see, we can't fix or improve. Without a proper observability stack, we are blind to scaling problems and to the appropriate fixes. When discussing observability, we often make the mistake of focusing on tools. Observability isn't about tools — it's about questions and answers. When developing an observability stack, we need to understand the types of questions we will have for it and then provide two means to answer each question. It is important to have two means. Observability is often unreliable and misleading, so we need a way to verify its results. However, if we have more than two ways, it might mean we over-observe a system, which can have a serious impact on costs. A typical exercise to verify an observability stack is to hypothesize common problems and then find two ways to solve them. For example, a performance problem in microservice X: Inspect the logs of the microservice for errors or latency — this might require adding a specific log for coverage. Inspect Prometheus metrics for the service. Tracking a scalability issue within a microservices deployment is much easier when working with traces. They provide a context and a scale. When an edge service runs into an N+1 query bug, traces show that almost immediately when they're properly integrated throughout. Segregation One of the most important scalability approaches is the separation of high-volume data. Modern business tools save tremendous amounts of meta-data for every operation. Most of this data isn't applicable for the day-to-day operations of the application. It is meta-data meant for business intelligence, monitoring, and accountability. 
We can stream this data to remove the immediate need to process it. We can store such data in a separate time-series database to alleviate the scaling challenges from the current database. Conclusion Scaling in the age of serverless and microservices is a very different process than it was a mere decade ago. Controlling costs has become far harder, especially with observability costs which in the case of logs often exceed 30 percent of the total cloud bill. The good news is that we have many new tools at our disposal — including API gateways, observability, and much more. By leveraging these tools with a fail-fast strategy and tight observability, we can iteratively scale the deployment. This is key, as scaling is a process, not a single action. Tools can only go so far and often we can overuse them. In order to grow, we need to review and even eliminate unnecessary optimizations if they are not applicable. This is an article from DZone's 2023 Software Integration Trend Report.For more: Read the Report
This is an article from DZone's 2023 Software Integration Trend Report.For more: Read the Report Designing an application architecture is never complete. Regularly, all decisions and components need to be reviewed, validated, and possibly updated. Stakeholders require that complex applications be delivered more quickly. It's a challenge for even the most senior technologists. A strategy is required, and it needs to be nimble. Strategy combines processes, which aid in keeping a team focused, and principles and patterns, which provide best practices for implementation. Regardless, it's a daunting task requiring organizational commitment. Development, Design, and Architectural Processes Applications developed without any process is chaos. A team that invents their own process and sticks to it is much better off than a team using no process. At the same time, holding a project hostage to a process can be just as detrimental. Best practices and patterns are developed over multiple years of teams looking for better ways to produce quality software in a timely manner. Processes are the codification of the best practices and patterns. By codifying best practices and patterns into processes, the processes can be scaled out to more organizations and teams. For example, when an organization selects a development process, a senior leader may ascribe to a test-first development pattern. It becomes much easier for an organization to adopt a pattern by finding a process that outlines how the pattern is organizationally implemented. In the case of the test-first development pattern, test-driven development (TDD) may be selected as the development process. Another technical leader in the same organization may choose to lead their team using domain-driven design (DDD), a pattern by which software design is communicated across technical teams as well as other stakeholders. Can these two design philosophies coexist? Yes. They can. Here, TDD defines how software is constructed while DDD defines the concepts that describe the software. Software architecture works to remain neutral to specific development and design processes, and it is the specification on how an abstract pattern is implemented. The term, "abstract pattern," is used as most software architecture patterns can be applied across any development process and across any tech stack. For example, many architectures employ the use of inversion of control (or dependency injection). How Java, JavaScript, C#, etc. implement inversion of control is specific to the tech stack, but it accomplishes the same goal. Avoiding Dogmatic Adherence Regardless of development, design, or architectural process, it's key that strict adherence to a given process does not become the end goal. Unfortunately, this happens more often than it should. Remember that the intent of a process is to codify best practices in a way that allows teams to scale using the same goals and objectives. To that end, when implementing processes, here are some points to consider: There's no one size fits all. Allow culture to mold the process. Maturity takes time. Keep focused on what you're really doing — building quality software in a timely manner. Cross-Cutting Concerns Software architecture can be designed, articulated, and implemented in several ways. Regardless of approach, most software architecture plans address two key points: simplicity and evolution. Simplicity is a relative term in that an architectural approach needs to be easily understood within the context of the business domain. 
Team members should look at an architectural plan and say, "Of course, that's the obvious design." It may have taken several months to develop the plan, but a team responding in this manner is a sign that the plan is on the right track. Evolution is very important and can be the trickiest aspect of an architectural plan. It may sound difficult, but an architectural plan should be able to last ten-plus years. That may be challenging to comprehend, but with the right design principles and patterns in place, it's not as challenging as one might think. At its core, good software architecture does its best to not paint itself into a corner. Figure 1 below contains no new revelations. However, each point is critical to a lasting software architecture: Building architecture that endures. This is the end goal. It entails using patterns that support the remaining points. Multiple platform and deployment support. The key here is that what exists today will very likely look different five years from now. An application needs to be readily able to adapt to changes in platform and deployment models, wherever the future takes it. Enforceable, standard patterns and compliance. Not that there's nothing new, but the software industry has decades of patterns to adopt and compliance initiatives to adhere to. Changes in both are gradual, so keeping an eye on the horizon is important. Reuse and extensibility from the ground up. Implementation patterns for reuse and extensibility will vary, but these points have been building blocks for many years. Collaboration with independent, external modules. The era of microservices helps enforce this principle. Watch for integrations that get convoluted. That is a red flag to the architecture. Evolutionary, module compatibility and upgrade paths. Everything in a software's architecture will evolve. Consider how compatibility and upgrades are managed. Design for obsolescence. Understand that many components within a software's architecture will eventually need to be totally replaced. At the beginning of each project or milestone, ask the question, "How much code are we getting rid of this release?" The effect of regular code pruning is no different than the effect of pruning plants. Figure 1: Key architectural principles Developing microservices is a combination of following these key architectural principles along with segmenting components into areas of responsibility. Microservices provide a unit of business functionality. Alone, they provide little value to a business. It's in the assembly of and integration with other microservices that business value is realized. Good microservices assembly and integration implementations follow a multi-layered approach. Horizontal and Vertical Slices Simply stated, slicing an application is about keeping things where they belong. In addition to adhering to relevant design patterns in a codebase, slicing an application applies the same patterns at the application level. Consider an application architecture as depicted by a Lego® brick structure in the figure below: Figure 2: Microservices architecture Each section of bricks is separated by that thin Lego® brick, indicating a strict separation of responsibility between each layer. Layers interact only through provided contracts/interfaces. Figure 2 depicts three layers with each having a distinct purpose. 
Whether it be integration with devices such as a laptop or tablet, or microservices integrating with other microservices, the point at which service requests are received remains logically the same. Here, there are several entry points ranging from web services and messaging services to an event bus. Horizontal Slices Horizontal slices of an application architecture are layers where, starting from the bottom, each layer provides services to the next layer. Typically, each layer of the stack refines the scope of underlying services to meet business use case logic. Services in lower layers can make no assumptions about how the services above them interact with them. As mentioned, this is done with well-defined contracts. In addition, services within a layer interact with one another through that layer's contracts. Maintaining strict adherence to contracts allows components at each layer to be replaced with new or enhanced versions with no disruption in interoperability. Figure 3: Horizontal slices Vertical Slices Vertical slices are where everything comes together. A vertical slice is what delivers an application business objective. A vertical slice starts with an entry point that drills through the entire architecture. As depicted in Figure 4, business services can be exposed in multiple ways. Entry points are commonly exposed through some type of network protocol. However, there are cases where a network protocol doesn't suffice. In these cases, a business service may offer a native library supporting direct integration. Regardless of the use case, strict adherence to contracts must be maintained. Figure 4: Vertical slices Obvious, Yet Challenging Microservices have become a predominant pattern by which large applications are assembled. Each microservice is concerned with a very specific set of functionalities. By their very nature, microservices dictate that well-defined contracts are in place, with which other microservices and systems can integrate. Microservices that are designed and implemented for cloud-native deployments can leverage cloud-native infrastructure to support several of the patterns discussed. The patterns and diagrams presented here will look obvious to most. As mentioned, good architecture is "obvious." The challenge is adhering to it. Often, the biggest enemy to adherence is time. The pressure to meet delivery deadlines is real, and it is where cracks in the contracts appear. Given the multiple factors in play, there are times when compromises need to be made. Make a note, create a ticket, add a comment, and leave a trail so that the compromise gets addressed as quickly as possible. Well-designed application architecture married with good processes supports longevity, which from a business perspective provides an excellent return on investment. Greenfield opportunities are rarer than opportunities to evolve existing applications. Regardless, bringing this all to bear can look intimidating. The key is to start somewhere. As a team, develop a plan and "make it so"!
Multi-cloud integration strategies may sound like buzzwords and marketing slang, but in this article, I will demystify them. I will also dive deeply into on-premises and legacy systems and how we can integrate with them. Before we jump into the topic, I would like to define what integration means in a cloud context. Cloud integration is a process that allows an organization's applications, infrastructure, data, and components to work together properly within one or several cloud providers. It also includes connecting on-premises data centers to the cloud if migration can be done across the organization. Cloud Integrations An important part of cloud integration is understanding the strategies. Many medium- and enterprise-level companies choose multi-cloud and hybrid-cloud approaches. Why is successful integration important for companies? Most companies building solutions have to exchange data with on-premises or out-of-support solutions. Properly designed integration solutions save a lot of time and money; we can see this in the example of a bank multi-cloud application at the end of the article. Hybrid vs. Multi-Cloud Below is a comparison of both strategies' pros and cons. Before we jump in, keep the differences between public and private clouds in mind. Remember that public clouds provide computing power, SaaS, and PaaS services for organizations that don't have (or where it is difficult to have) their own data centers. A private cloud (on-premises) is infrastructure the company maintains internally.
HYBRID VS. MULTI-CLOUD PROS AND CONS
Hybrid Cloud. Description: Hybrid clouds combine private clouds/on-prem data centers with a public cloud, an approach that companies usually take. For example, banks have secure on-prem environments that they won't move to the cloud; meanwhile, they have other, less secure solutions that can be easily moved to a public cloud and have fewer connections to on-premises. Pros: Flexibility to connect infrastructure that can't be moved to the public cloud; increased security thanks to the on-prem component; flexibility between using a legacy system and modern public cloud services. Cons: It can be difficult to maintain legacy, on-prem environments; additional cost for companies because they need to maintain their own hardware.
Multi-Cloud. Description: Multi-cloud combines several public clouds without using a private cloud. Usually, companies choose a multi-cloud strategy to avoid vendor lock-in. Pros: Flexible and scalable environments; you can choose the services in each cloud that work best for your company; freedom to implement the solution across several clouds. Cons: The cost of maintaining different services on several cloud providers can be prohibitive; complexity in managing and separating different services; securing network communication between clouds can be difficult.
Cloud Integration Patterns and Best Practices Applying a good integration strategy also requires knowing some integration best practices and patterns. Cloud Integration Patterns Understanding the main set of integration patterns is key to using existing integration solutions or designing a new one from scratch. Also, knowing these patterns provides a massive benefit when integrating cloud applications with enterprise, on-premises infrastructure. Asynchronous Messaging Asynchronous messaging allows components and services to process data without waiting for each other. It also allows components to be decoupled from each other. Figure 1 A minimal sketch of the pattern is shown below.
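To make the pattern concrete, here is a minimal, self-contained C# sketch in which an in-memory channel stands in for a managed message broker; in a real cloud integration this would typically be a queue or topic such as Amazon SQS or Azure Service Bus. The OrderPlaced event and the producer/consumer roles are illustrative assumptions.
C#
// Asynchronous messaging sketch: the producer publishes without waiting for the consumer.
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public record OrderPlaced(string OrderId, decimal Amount);

public static class AsyncMessagingDemo
{
    public static async Task Main()
    {
        // The channel plays the role of the queue/topic between two services.
        var channel = Channel.CreateUnbounded<OrderPlaced>();

        // Producer: publishes events and moves on without waiting for consumers.
        var producer = Task.Run(async () =>
        {
            for (var i = 1; i <= 3; i++)
            {
                await channel.Writer.WriteAsync(new OrderPlaced($"order-{i}", 10m * i));
                Console.WriteLine($"Published order-{i}");
            }
            channel.Writer.Complete();
        });

        // Consumer: processes messages at its own pace, fully decoupled from the producer.
        var consumer = Task.Run(async () =>
        {
            await foreach (var message in channel.Reader.ReadAllAsync())
            {
                Console.WriteLine($"Processed {message.OrderId} for {message.Amount}");
            }
        });

        await Task.WhenAll(producer, consumer);
    }
}
The producer publishes and moves on; the consumer drains the queue at its own pace, which is exactly the decoupling the pattern is after.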
Shared Databases This pattern uses a shared database to communicate and exchange data between enterprise applications and services. As part of a shared database and communication bus, we can also use an enterprise service bus that can save and exchange data between several components. Figure 2 Remote Procedure Call Remote procedure call (RPC) is an abstraction layer or protocol that allows one network component to communicate with another without knowing the whole network's complete functionality. Figure 3 File Transfer The file transfer pattern provides an interface to share files between cloud or application components. For example, file transfer is useful if an application produces CSV or XML reports — the integration service should adapt this file for other applications. Figure 4 Recommended Practices for Cloud Integration Here are three of the most important best practices for cloud integration: Use native SaaS tools that cloud providers offer. This approach always provides the best integration options between applications and components. There are even "no-code" tools for non-technical people. We will get into native Azure, AWS, and Google Cloud services in the next section. Use an Integration Platform as a Service (iPaaS). Some services and components provide integration capabilities and are hosted as cloud services. For example, TriggerMesh and cenit.io are open-source integration platforms that allow building event-driven applications in Kubernetes, orchestrating data flow, and providing API management capabilities in cloud providers and on-premises. Use a Function Platform as a Service (FPaaS). These platforms provide a high degree of customization for integration, from which some organizations can benefit. This approach is intended for cloud solution architects and requires knowledge of cloud architecture patterns and function-oriented software development skills. FPaaS tools include AWS Lambda, Azure Functions, Google Cloud Functions, and Apache OpenWhisk. A hypothetical example of such a glue function is sketched below.
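As a flavor of what such a function might look like, here is a small AWS Lambda-style handler in C# that adapts a CSV record into JSON for a downstream system. The function name, payload shape, and the CSV-to-JSON scenario are assumptions made for illustration; a real deployment would also wire up an event source (for example, an SQS or S3 trigger).
C#
// A hypothetical integration function: receives a CSV record and returns it as JSON.
// Requires the Amazon.Lambda.Core and Amazon.Lambda.Serialization.SystemTextJson packages.
using System.Text.Json;
using Amazon.Lambda.Core;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

public class CsvToJsonFunction
{
    // Input is assumed to be a single CSV line such as "42,John,Doe".
    public string Handler(string csvLine, ILambdaContext context)
    {
        var parts = csvLine.Split(',');
        var payload = new { Id = parts[0], FirstName = parts[1], LastName = parts[2] };
        context.Logger.LogLine($"Transforming record {payload.Id}");
        return JsonSerializer.Serialize(payload);
    }
}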
Common Integration Services Knowing the general cloud integration best practices and patterns is crucial. However, knowing what exactly each cloud provider offers is also important. In this section, we will briefly touch upon common cloud integration services from providers such as AWS, Azure, and Google Cloud. Keep in mind: This section contains — but is not limited to — some of the most ubiquitous integration services available. To learn more about the list below, common benefits, and drawbacks associated with each, check out this platform breakdown for more information. AWS AWS has several integration services that provide powerful features alongside simplicity. This list includes SNS (Simple Notification Service), SQS (Simple Queue Service), SWF (Simple Workflow Service), and AWS Step Functions. To learn more, visit the AWS Application Integration services page. Google Cloud Google Cloud has a vast integration ecosystem, also commonly referred to as Integration Platform as a Service (iPaaS). This provides a set of tools and services to manage and connect applications. The Google Cloud iPaaS contains the following core services: integration designer, triggers, and tasks. Learn more about each Google Cloud integration service here. Azure Azure offers an Azure integration service set (also commonly referred to as Azure Integration Platform as a Service). This contains a variety of services set up to provide strong integration between applications. Some of the most powerful integration services Azure offers include API Management, Logic Apps, Service Bus, Event Grid, and Azure Arc. If you are interested in reading more on the various Azure integration services, check out this page to learn more. A Bank Multi-Cloud Application As mentioned, banking applications require a massive security layer. Also, many banks maintain their own highly secure data centers, and migrating all secured data to the cloud may not even be an option. Figure 5: A banking multi-cloud integration application example In this example, we selected Azure as the cloud for the main application. The application is based on a microservices architecture and is deployed to several Kubernetes clusters. Azure stores secrets, configuration in Cosmos DB, and some files in Blob Storage. Azure also provides an observability platform with a service mesh. All secured data is stored in the on-premises data center, and the AWS Cloud part contains a workflow for the call center. Conclusion In this article, we've reviewed top cloud integration patterns and services, whether starting the integration process from scratch or working within an existing environment. Designing integrations of software solutions in the cloud requires knowledge of best practices and patterns. Furthermore, it requires a deep understanding of the toolsets, services, and components each cloud and framework offers. For example, alongside Azure Arc, AWS offers services like Systems Manager. Before I start an integration project, I use the following checklist: keep the KISS principle in mind, look at existing integration patterns, and check what integration components and services the other clouds provide. Ultimately, multi-cloud integration means making the solutions and components of one cloud provider work with the others using existing integration components and patterns.
ChatGPT is a cutting-edge natural language processing model created by OpenAI, an artificial intelligence research lab co-founded by Elon Musk, Sam Altman, and Greg Brockman, among others. The underlying GPT family of models was first introduced in June 2018 and has been continuously refined, with GPT-3 released in 2020. ChatGPT was fine-tuned from the GPT-3.5 series of models and released in November 2022. The "GPT" in ChatGPT stands for "Generative Pre-trained Transformer," a reference to the model's deep learning architecture. ChatGPT essentially uses massive amounts of text data to "learn" the intricacies of human language, allowing it to create replies that are frequently impossible to differentiate from those of humans. The ChatGPT technology is based on a type of neural network known as a transformer, which is well-suited to natural language processing applications. ChatGPT's capacity to create high-quality output for a wide range of prompts you give in the chat box is one of its key features. The response you get feels like it has been generated by a human. You can also specify the conditions of your request. The model was trained on a vast corpus of text, so it can rely on a wide range of data to answer questions, provide information, or participate in casual conversation. It is crucial to remember, however, that ChatGPT is not flawless. When faced with prompts that are beyond the model's area of knowledge, it may provide illogical or unsuitable replies. ChatGPT usage is sometimes high, and because of this, there can be a limit to how many queries you can send. Now, since the main purpose of this article is to give you a comprehensive and hands-on example of how ChatGPT can be used for generating code for a REST API, let's first understand what a REST API is. A REST API is a type of web service that allows two computer systems to communicate with one another over the internet by following a set of standardized rules. It follows the REST architecture, which is known for its stateless and client-server nature. A REST API works by the client sending a request to the server using a specific URL and HTTP method, such as GET, POST, PUT, or DELETE, and the server responding with a message containing the requested data or indicating whether the requested operation was successful. REST APIs are widely used in modern web and mobile applications because they provide a standardized and efficient method of exchanging data between systems. You can usually learn more about a specific API by reading through its documentation. Some APIs are commercial and usually have different pricing plans; such APIs typically also require authentication, for example a bearer token, with each request. Now comes the interesting part: creating a fully functional REST API in C# for managing students with the help of ChatGPT, so keep on reading. Creating the API In this section, we'll let ChatGPT code a fully functional REST API for managing students, including creation, deletion, and modification of student records. To do this, we should have a "students controller," which would have the API endpoints, and a "students class," which would have the needed fields such as "Id," "name," and "age." The only job we have here is to write logical, well-structured prompts, which become gradually more complex as we build the API part by part. Let's begin. Write a REST API for Managing Students in C# Here's an example of a RESTful API for managing students in C#.
This API will use the HTTP methods GET, POST, PUT, and DELETE to perform CRUD (Create, Read, Update, Delete) operations on a list of student resources. First, let’s define a “student class” to represent a student resource: C# public class Student { public int Id { get; set; } public string Name { get; set; } public int Age { get; set; } public string Email { get; set; } } Next, let’s create a StudentsController class to handle the HTTP requests: C# [Route("api/students")] public class StudentsController : Controller { private static List<Student> students = new List<Student> { new Student { Id = 1, Name = "John Smith", Age = 20, Email = "john.smith@example.com" }, new Student { Id = 2, Name = "Jane Doe", Age = 19, Email = "jane.doe@example.com" }, new Student { Id = 3, Name = "Bob Johnson", Age = 21, Email = "bob.johnson@example.com" } }; [HttpGet] public IActionResult GetAllStudents() { return Ok(students); } [HttpGet("{id}")] public IActionResult GetStudent(int id) { var student = students.FirstOrDefault(s => s.Id == id); if (student == null) { return NotFound(); } return Ok(student); } [HttpPost] public IActionResult CreateStudent([FromBody] Student student) { student.Id = students.Max(s => s.Id) + 1; students.Add(student); return CreatedAtAction(nameof(GetStudent), new { id = student.Id }, student); } [HttpPut("{id}")] public IActionResult UpdateStudent(int id, [FromBody] Student student) { var existingStudent = students.FirstOrDefault(s => s.Id == id); if (existingStudent == null) { return NotFound(); } existingStudent.Name = student.Name; existingStudent.Age = student.Age; existingStudent.Email = student.Email; return NoContent(); } [HttpDelete("{id}")] public IActionResult DeleteStudent(int id) { var student = students.FirstOrDefault(s => s.Id == id); if (student == null) { return NotFound(); } students.Remove(student); return NoContent(); } This StudentsController class defines the following endpoints: GET /api/students: Returns a list of all students. GET /api/students/{id}: Returns a single student resource by ID. POST /api/students: Creates a new student resource. PUT /api/students/{id}: Updates an existing student resource by ID. DELETE /api/students/{id}: Deletes an existing student resource by ID. Note: we are using the [FromBody] attribute to deserialize the request body into a “student object” for the CreateStudent and UpdateStudent methods. Also note that the students list is static and initialized with some sample data. In a real-world scenario, this data would likely be stored in a database or some other persistent storage. Finally, we’re using the HTTP status codes 200 OK, 201 Created, 204 No Content, and 404 Not Found to indicate the outcome of each operation. Add More Properties to the Student Class Here’s an updated version of the “student class” with some additional properties: C# public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } } In this updated version of the class, we’ve added the following properties: FirstName: The first name of the student. LastName: The last name of the student. Address: The street address of the student. City: The city where the student lives. State: The state where the student lives. ZipCode: The ZIP code of the student’s address. 
These additional properties can provide more information about the student and can be useful for filtering, sorting, and searching operations in the API. Add Another Property to the Student Class Regarding the Subjects the Student Has Taken Here’s an updated version of the “student class” with an additional property for the subjects the student has taken: C# public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } public List<string> SubjectsTaken { get; set; } } In this updated version of the class, we’ve added a property called SubjectsTaken, which is a list of strings representing the names of the subjects the student has taken. This property can be useful for querying and filtering students by the subjects they have studied. You can add subjects to the list like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = "CA", ZipCode = "12345", SubjectsTaken = new List<string> { "Math", "Science", "History" } }; And, you can retrieve the list of subjects for a given student like this: C# var student = GetStudent(1); var subjects = student.SubjectsTaken; Change SubjectTaken to an Object Named Subject Containing the Properties Id, Name, Teacher, and Credits Here’s an updated version of the “student class” where the SubjectsTaken property is replaced with a new object subject containing the Id, Name, Teacher, and Credits properties: C# public class Subject { public int Id { get; set; } public string Name { get; set; } public string Teacher { get; set; } public int Credits { get; set; } } public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve replaced the SubjectsTaken property with a new object subject containing the Id, Name, Teacher, and Credits properties. This new structure can provide more detailed information about the subjects the student has taken, and it can be useful for filtering and querying students based on specific subject properties. You can add a subject to the SubjectsTaken list like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = "CA", ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. 
Jones", Credits = 3 } } }; And you can retrieve the list of subjects for a given student like this: C# var student = GetStudent(1); var subjects = student.SubjectsTaken; Change the State Property in the Student Class to Enum C# public enum State { AL, AK, AZ, AR, CA, CO, CT, DE, FL, GA, HI, ID, IL, IN, IA, KS, KY, LA, ME, MD, MA, MI, MN, MS, MO, MT, NE, NV, NH, NJ, NM, NY, NC, ND, OH, OK, OR, PA, RI, SC, SD, TN, TX, UT, VT, VA, WA, WV, WI, WY } public class Student { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public string Email { get; set; } public string Address { get; set; } public string City { get; set; } public State State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve replaced the State property with an enum called State. The State enum contains a list of state abbreviations and provides type safety for the State property. You can set the State property for a student like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. Jones", Credits = 3 } } }; And you can retrieve the State property for a given student like this: C# var student = GetStudent(1); var state = student.State; Add Validation for Making the ID, FirstName, and Email Mandatory Here’s an updated version of the student class that includes validation for the Id, FirstName, and Email properties: C# public class Student { private int _id; private string _firstName; private string _email; public int Id { get { return _id; } set { if (value <= 0) { throw new ArgumentException("Id must be greater than zero."); } _id = value; } } public string FirstName { get { return _firstName; } set { if (string.IsNullOrEmpty(value)) { throw new ArgumentException("First name cannot be null or empty."); } _firstName = value; } } public string LastName { get; set; } public int Age { get; set; } public string Email { get { return _email; } set { if (string.IsNullOrEmpty(value)) { throw new ArgumentException("Email cannot be null or empty."); } _email = value; } } public string Address { get; set; } public string City { get; set; } public State State { get; set; } public string ZipCode { get; set; } public List<Subject> SubjectsTaken { get; set; } } In this updated version of the class, we’ve added private fields for the Id, FirstName, and Email properties and added validation logic to their corresponding setter methods. The Id property must be greater than zero, and the FirstName and Email properties cannot be null or empty. If any of these validation checks fail, an ArgumentException will be thrown. You can create a new student object like this: C# var student = new Student { Id = 1, FirstName = "John", LastName = "Doe", Age = 20, Email = "john.doe@example.com", Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject> { new Subject { Id = 1, Name = "Math", Teacher = "Mr. Smith", Credits = 4 }, new Subject { Id = 2, Name = "Science", Teacher = "Ms. 
Jones", Credits = 3 } } }; If you attempt to create a new student object with invalid data, an ArgumentException will be thrown: C# var invalidStudent = new Student { Id = 0, // throws ArgumentException FirstName = "", // throws ArgumentException LastName = "Doe", Age = 20, Email = "", // throws ArgumentException Address = "123 Main St", City = "Anytown", State = State.CA, ZipCode = "12345", SubjectsTaken = new List<Subject>() }; Make the Students Controller Asynchronous Here’s an updated version of the StudentsController class that makes use of asynchronous methods: C# [ApiController] [Route("api/[controller]")] public class StudentsController : ControllerBase { private readonly IStudentRepository _studentRepository; public StudentsController(IStudentRepository studentRepository) { _studentRepository = studentRepository; } [HttpGet] public async Task<ActionResult<IEnumerable<Student>>> GetAllStudents() { var students = await _studentRepository.GetAllStudentsAsync(); return Ok(students); } [HttpGet("{id}")] public async Task<ActionResult<Student>> GetStudentById(int id) { var student = await _studentRepository.GetStudentByIdAsync(id); if (student == null) { return NotFound(); } return Ok(student); } [HttpPost] public async Task<ActionResult<Student>> CreateStudent(Student student) { if (!ModelState.IsValid) { return BadRequest(ModelState); } await _studentRepository.AddStudentAsync(student); return CreatedAtAction(nameof(GetStudentById), new { id = student.Id }, student); } [HttpPut("{id}")] public async Task<IActionResult> UpdateStudent(int id, Student student) { if (id != student.Id) { return BadRequest(); } if (!ModelState.IsValid) { return BadRequest(ModelState); } var existingStudent = await _studentRepository.GetStudentByIdAsync(id); if (existingStudent == null) { return NotFound(); } await _studentRepository.UpdateStudentAsync(student); return NoContent(); } [HttpDelete("{id}")] public async Task<IActionResult> DeleteStudent(int id) { var existingStudent = await _studentRepository.GetStudentByIdAsync(id); if (existingStudent == null) { return NotFound(); } await _studentRepository.DeleteStudentAsync(existingStudent); return NoContent(); } } In this updated version of the class, we’ve added the async keyword to the action methods that interact with the IStudentRepository. We’ve also added the await keyword before calling the asynchronous methods to wait for the result to complete before continuing. If the action method returns a value, we’ve wrapped it in the ActionResult<T> type, where “T” is the type of the returned value. If an error occurs, we return an appropriate ActionResult, such as BadRequest() or NotFound(). Note: the IStudentRepository interface must also be updated to include asynchronous versions of its methods. Here’s an example of what the updated IStudentRepository might look like: C# public interface IStudentRepository { Task<IEnumerable<Student>> GetAllStudentsAsync(); Task<Student> GetStudentByIdAsync(int id); Task AddStudentAsync(Student student); Task UpdateStudentAsync(Student student); Task DeleteStudentAsync(Student student); } In this version of the interface, we’ve added the async keyword to each method and changed the return types to Task<T>, where “T” is the type of the returned value. This allows the calling code to use the await keyword to wait for the method to complete. Conclusion In this article, we used ChatGPT to build a REST API for managing students using ASP.NET Core and C# programming language. 
Conclusion In this article, we used ChatGPT to build a REST API for managing students using ASP.NET Core and the C# programming language. This API is a basic implementation of a web service that provides CRUD (Create, Read, Update, Delete) operations for managing a list of students through several distinct endpoints. ChatGPT was able to add basic validation to the student class, as well as make the code asynchronous. Apart from that, this AI tool easily changed the type of the state property to an enum and created a whole new class when we needed a new object for the subject property. All in all, it did a great job and created fully functional classes, a controller, and a repository interface. ChatGPT, in my opinion, should be taken seriously as a programming aid, especially for simple and repetitive tasks, since it can automate them and recommend more efficient and effective programming techniques. As a result, it can enable programmers to save time, which they can use to solve more challenging and creative programming tasks. However, I also believe it's important to use it as a complement to human programming skills rather than a replacement, since it is often incapable of providing optimal solutions in complex situations. I am confident this article was enough to demonstrate the power of ChatGPT even to the biggest skeptics out there. If you are a developer, you can use ChatGPT for various other use cases, such as validating or testing code. You can even use it for code generation in Python. Many companies are already looking for ways to integrate ChatGPT into their platforms. In summary, ChatGPT is an impressive AI language model that has the potential to revolutionize the way we approach programming, and while it may not be able to completely replace human programmers, it can certainly be considered a valuable resource.
Most inter-system communication components that use REST serialize their payload in JSON. As of now, JSON lacks a widely used schema validation standard: JSON Schema is not widespread. Standard schema validation allows delegating the validation to a third-party library and being done with it. Without one, we must fall back to manual validation in the code. Worse, we must keep the validation code in sync with the schema. XML has schema validation out-of-the-box: an XML document can declare a grammar that it must conform to. SOAP, being based on XML, benefits from it, too. Other serialization alternatives have a schema validation option: e.g., Avro, Kryo, and Protocol Buffers. Interestingly enough, gRPC uses Protobuf to offer RPC across distributed components: gRPC is a modern open source high performance Remote Procedure Call (RPC) framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in last mile of distributed computing to connect devices, mobile applications and browsers to backend services. - Why gRPC? Moreover, Protobuf is a binary serialization format, saving a lot of bandwidth. Thus, gRPC is an excellent option for inter-system communication. But if all your components talk gRPC, how can simple clients call them? In this post, we will build a gRPC service and show how to call it from cURL. A Simple gRPC Service The gRPC documentation is exhaustive, so here's a summary: gRPC is a Remote Procedure Call framework. It works across a wide range of languages. It relies on Protocol Buffers: Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages. - Protocol Buffers gRPC is part of the CNCF portfolio and is currently at the incubation stage. Let's set up our gRPC service. We will use Java, Kotlin, Spring Boot, and a dedicated gRPC Spring Boot integration project. The project structure holds two projects: one for the model and one for the code. Let's start with the model project. I didn't want something complicated. Reusing a simple example is enough: the request sends a string, and the response prefixes it with Hello. We design this model in a dedicated Protobuf schema file: ProtoBuf syntax = "proto3"; //1 package ch.frankel.blog.grpc.model; //2 option java_multiple_files = true; //3 option java_package = "ch.frankel.blog.grpc.model"; //3 option java_outer_classname = "HelloProtos"; //3 service HelloService { //4 rpc SayHello (HelloRequest) returns (HelloResponse) { } } message HelloRequest { //5 string name = 1; //6 } message HelloResponse { //7 string message = 1; //6 } Protobuf definition version Package Java-specific configuration Service definition Request definition Field definition: First comes the type, then the name, and finally, the field number.
Response definition We shall use Maven to generate the Java boilerplate code: XML <project> <dependencies> <dependency> <groupId>io.grpc</groupId> <!--1--> <artifactId>grpc-stub</artifactId> <version>${grpc.version}</version> </dependency> <dependency> <groupId>io.grpc</groupId> <!--1--> <artifactId>grpc-protobuf</artifactId> <version>${grpc.version}</version> </dependency> <dependency> <groupId>jakarta.annotation</groupId> <!--1--> <artifactId>jakarta.annotation-api</artifactId> <version>1.3.5</version> <optional>true</optional> </dependency> </dependencies> <build> <extensions> <extension> <groupId>kr.motd.maven</groupId> <!--2--> <artifactId>os-maven-plugin</artifactId> <version>1.7.1</version> </extension> </extensions> <plugins> <plugin> <groupId>org.xolstice.maven.plugins</groupId> <!--3--> <artifactId>protobuf-maven-plugin</artifactId> <version>${protobuf-plugin.version}</version> <configuration> <protocArtifact>com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}</protocArtifact> <pluginId>grpc-java</pluginId> <pluginArtifact>io.grpc:protoc-gen-grpc-java:${grpc.version}:exe:${os.detected.classifier}</pluginArtifact> </configuration> <executions> <execution> <goals> <goal>compile</goal> <goal>compile-custom</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Compile-time dependencies Sniff information about the Operating System; used in the next plugin Generate Java code from the proto file. After compilation, the structure should look something like the following: We can package the classes in a JAR and use it in a web app project. The latter is in Kotlin, but only because it's my favorite JVM language. We only need a specific Spring Boot starter dependency to integrate gRPC endpoints with Spring Boot: XML <dependency> <groupId>net.devh</groupId> <artifactId>grpc-server-spring-boot-starter</artifactId> <version>2.14.0.RELEASE</version> </dependency> Here's the significant bit: Kotlin @GrpcService //1 class HelloService : HelloServiceImplBase() { //2 override fun sayHello( request: HelloRequest, //2 observer: StreamObserver<HelloResponse> //3 ) { with(observer) { val reply = HelloResponse.newBuilder() //2 .setMessage("Hello ${request.name}") //4 .build() onNext(reply) //5 onCompleted() //5 } } } The grpc-server-spring-boot-starter detects the annotation and works its magic. Reference classes generated in the above project The method signature allows a StreamObserver parameter. The class comes from grpc-stub.jar. Get the request and prefix it to build the response message. Play the events. We can now start the web app with ./mvnw spring-boot:run. Testing the gRPC Service The whole idea behind the post is that accessing the gRPC service with regular tools is impossible. To test, we need a dedicated tool nonetheless. I found grpcurl. Let's install it and use it to list available services: Shell grpcurl --plaintext localhost:9090 list #1-2 List all available gRPC services without TLS verification. To avoid clashes between gRPC and other channels, e.g., REST, Spring Boot uses another port. 
Plain Text ch.frankel.blog.grpc.model.HelloService #1 grpc.health.v1.Health #2 grpc.reflection.v1alpha.ServerReflection #2 The gRPC service we defined Two additional services provided by the custom starter We can also dive into the structure of the service: Shell grpcurl --plaintext localhost:9090 describe ch.frankel.blog.grpc.model.HelloService Java service HelloService { rpc SayHello ( .ch.frankel.blog.grpc.model.HelloRequest ) returns ( .ch.frankel.blog.grpc.model.HelloResponse ); } Finally, we can call the service with data: Shell grpcurl --plaintext -d '{"name": "John"}' localhost:9090 ch.frankel.blog.grpc.model.HelloService/SayHello JSON { "message": "Hello John" } Accessing the gRPC Service With Regular Tools Imagine that we have a regular JavaScript client-side application that needs to access the gRPC service. What would be the alternatives? The general approach is through grpc-web: A JavaScript implementation of gRPC for browser clients. For more information, including a quick start, see the gRPC-web documentation. gRPC-web clients connect to gRPC services via a special proxy; by default, gRPC-web uses Envoy. In the future, we expect gRPC-web to be supported in language-specific web frameworks for languages such as Python, Java, and Node. For details, see the roadmap. - grpc-web The description states a single limitation: it works only for JavaScript (as of now). However, there's another one. It's pretty intrusive. You need to get the proto file, generate boilerplate code, and make your code call it. You must do it for every client type. Worse, if the proto file changes, you need to regenerate the client code in each of them. An alternative exists, though, if you're using an API Gateway. I'll describe how to do it with Apache APISIX, but perhaps other gateways can do the same. grpc-transcode is a plugin that allows transcoding REST calls to gRPC and back again. The first step is to register the proto file in Apache APISIX: Shell curl http://localhost:9180/apisix/admin/protos/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d "{ \"content\": \"$(sed 's/"/\\"/g' ../model/src/main/proto/model.proto)\" }" The second step is to create a route with the above plugin: Shell curl http://localhost:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "uri": "/helloservice/sayhello", #1 "plugins": { "grpc-transcode": { "proto_id": "1", #2 "service": "ch.frankel.blog.grpc.model.HelloService", #3 "method": "SayHello" #4 } }, "upstream": { "scheme": "grpc", "nodes": { "server:9090": 1 } } }' Define a granular route. Reference the proto file defined in the previous command. gRPC service gRPC method At this point, any client can make an HTTP request to the defined endpoint. Apache APISIX will transcode the call to gRPC, forward it to the defined service, get the response, and transcode it again. Shell curl localhost:9080/helloservice/sayhello?name=John JSON {"message":"Hello John"} Compared to grpc-web, the API Gateway approach allows sharing the proto file with a single component: the Gateway itself. Benefits of Transcoding At this point, we can leverage the capabilities of the API Gateway. Imagine we want a default value if no name is passed, e.g., World. Developers would happily set it in the code, but any change to the value would require a complete build and deployment. Changes can be nearly-instant if we put the default value in the Gateway's routes processing chain. 
Let's change our route accordingly: Shell curl http://localhost:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "uri": "/helloservice/sayhello", "plugins": { "grpc-transcode": { ... }, "serverless-pre-function": { #1 "phase": "rewrite", #2 "functions" : [ "return function(conf, ctx) #3 local core = require(\"apisix.core\") if not ngx.var.arg_name then local uri_args = core.request.get_uri_args(ctx) uri_args.name = \"World\" ngx.req.set_uri_args(uri_args) end end" ] } }, "upstream": { ... } }' Generic all-purpose plugin when none fits Rewrite the request. Magic Lua code that does the trick Now, we can execute the request with an empty argument and get the expected result: Shell curl localhost:9080/helloservice/sayhello?name JSON {"message":"Hello World"} Conclusion In this post, we have briefly described gRPC and how it benefits inter-service communication. We developed a simple gRPC service using Spring Boot and grpc-server-spring-boot-starter. It comes at a cost, though: regular clients cannot access the service. We had to resort to grpcurl to test it. The same goes for clients based on JavaScript - or the browser. To bypass this limitation, we can leverage an API Gateway. I demoed how to configure Apache APISIX with the grpc-transcode plugin to achieve the desired result. The complete source code for this post can be found on GitHub. To Go Further os-maven-plugin Maven Protocol Buffers Plugin gRPC-Spring-Boot-Starter
John Vester, Lead Software Engineer, Marqeta (@JohnJVester)
Colin Domoney, Chief Technology Evangelist, 42Crunch
Saurabh Dashora, Founder, ProgressiveCoder
Cameron Hunt, Integration Architect, TeamWork France