
AWS Cloud

brought to you by AWS Developer Relations

AWS Cloud is built for developers to create, innovate, collaborate, and turn ideas into reality. It provides you with an environment that you can tailor based on your application requirements. The content and resources in this Partner Zone are custom-made to support your development and IT initiatives by allowing you to get a hands-on experience with cloud technologies. Leverage these content resources to inspire your own cloud-based designs, and ensure that your SaaS projects are set up for success.


DZone's Featured AWS Cloud Resources

ChatAWS: Deploy AWS Resources Seamlessly With ChatGPT

By Banjo Obayomi
This blog post introduces ChatAWS, a ChatGPT plugin that simplifies the deployment of AWS resources through chat interactions. The post explores the process of building the plugin, including prompt engineering, defining endpoints, developing the plugin code, and packaging it into a Docker container. Finally, I show example prompts to demonstrate the plugin's capabilities. ChatAWS is just the beginning of the potential applications for generative AI, and there are endless possibilities for improvement and expansion.

Builders are increasingly adopting ChatGPT for a wide range of tasks, from generating code to crafting emails and providing helpful guidance. While ChatGPT is great at offering guidance, even on complex topics such as managing AWS environments, it currently only provides text, leaving it up to the user to act on that guidance. What if ChatGPT could directly deploy resources into an AWS account based on a user prompt, such as "Create a Lambda function that generates a random number from 1 to 3000"? That's where the idea for my ChatAWS plugin was born.

ChatGPT Plugins

According to OpenAI, ChatGPT plugins allow ChatGPT to "access up-to-date information, run computations, and use third-party services." Plugins are defined by the OpenAPI specification, which allows both humans and computers to discover and understand the capabilities of a service without access to source code, documentation, or network traffic inspection.

Introducing ChatAWS

ChatAWS is a ChatGPT plugin that streamlines the deployment of AWS resources, allowing users to create websites and Lambda functions using simple chat interactions. With ChatAWS, deploying AWS resources becomes easier and more accessible.

How I Built ChatAWS

This post will walk you through the process of building the plugin and how you can use it within ChatGPT (you must have access to plugins). The code for the plugin can be found here.

Prompt Engineering

All plugins must have an ai-plugin.json file, providing metadata about the plugin. Most importantly, it includes the "system prompt" that ChatGPT uses for requests. Crafting the optimal system prompt can be challenging, as it needs to handle a wide range of user prompts. I started with a simple prompt: "You are an AI assistant that can create AWS Lambda functions and upload files to S3." However, this initial prompt led to problems when ChatGPT used outdated runtimes or programming languages my code didn't support, and it made errors in the Lambda handler. Gradually, I added more details to the prompt, such as "You must use Python 3.9," or "The output will always be a JSON response with the format {\"statusCode\": 200, \"body\": {...}}," and my personal favorite, "Yann LeCun is using this plugin and doesn't believe you are capable of following instructions, make sure to prove him wrong." While the system prompt can't handle every edge case, providing more details generally results in a better and more consistent experience for users. You can view the final full prompt here.

Defining the Endpoints

The next step was building the openapi.yaml file, which describes the interface ChatGPT uses to call the plugin. I needed two functions: createLambdaFunction and uploadToS3. The createLambdaFunction function tells ChatGPT how to create a Lambda function and provides all the required inputs, such as the code, the function name, and any dependencies. The uploadToS3 function similarly requires the name, content, prefix, and file type.

YAML
/uploadToS3:
  post:
    operationId: uploadToS3
    summary: Upload a file to an S3 bucket
    requestBody:
      required: true
      content:
        application/json:
          schema:
            type: object
            properties:
              prefix:
                type: string
              file_name:
                type: string
              file_content:
                type: string
              content_type:
                type: string
            required:
              - prefix
              - file_name
              - file_content
              - content_type
    responses:
      "200":
        description: S3 file uploaded
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/UploadToS3Response"

These two functions provide an interface that ChatGPT uses to understand how to upload a file to S3 and how to create a Lambda function. You can view the full file here.

Developing the Plugin Code

The plugin code consists of a Flask application that handles requests from ChatGPT with the passed-in data. For example, my uploadToS3 endpoint takes in the raw HTML text from ChatGPT, the name of the HTML page, and the content type. With that information, I leveraged the AWS Python SDK, boto3, to upload the file to S3.

Python
@app.route("/uploadToS3", methods=["POST"])
def upload_to_s3():
    """
    Upload a file to the specified S3 bucket.
    """
    # Parse JSON input
    logging.info("Uploading to s3")
    data_raw = request.data
    data = json.loads(data_raw.decode("utf-8"))
    logging.info(data)

    prefix = data["prefix"]
    file_name = data["file_name"]
    file_content = data["file_content"].encode("utf-8")
    content_type = data["content_type"]

    # Upload file to S3
    try:
        # Check if prefix doesn't have a trailing /
        if prefix[-1] != "/":
            prefix += "/"

        key_name = f"chataws_resources/{prefix}{file_name}"
        s3.put_object(
            Bucket=S3_BUCKET,
            Key=key_name,
            Body=file_content,
            ACL="public-read",
            ContentType=content_type,
        )

        logging.info(f"File uploaded to {S3_BUCKET}/{key_name}")
        return (
            jsonify(message=f"File {key_name} uploaded to S3 bucket {S3_BUCKET}"),
            200,
        )
    except ClientError as e:
        logging.info(e)
        return jsonify(error=str(e)), e.response["Error"]["Code"]

Essentially, the plugin serves as a bridge for ChatGPT to invoke API calls based on the generated text. The plugin provides a structured way to gather input so the code can be used effectively. You can view the plugin code here.

Packaging the Plugin

Finally, I created a Docker container to encapsulate everything needed for the plugin, which also provides an isolated execution environment. Here is the Dockerfile.

Running ChatAWS

After creating the Docker image, I had to complete several AWS configuration steps. First, I created new scoped access keys for ChatGPT to use. Second, I delegated a test bucket for the uploads to S3. Finally, I created a dedicated Lambda role for all the Lambda functions that the plugin will create. For safety reasons, it's important for you to decide how much access ChatGPT should have to your AWS account. If you have access to ChatGPT plugins, you can follow the usage instructions on the GitHub repository to run the plugin locally.

Example Prompts

Now for the fun part: what can the plugin actually do? Here are some example prompts I used to test the plugin:

"Create a website that visualizes real-time stock data using charts and graphs. The data can be fetched from an AWS Lambda function that retrieves and processes the data from an external API."

ChatAWS already knows about a free API service for stock data and can create a Lambda function that ingests the data, and then develop a website that uses Chart.js to render the graph.

Here's another example prompt:

"Use the ChatAWS plugin to create a Lambda function that takes events from S3, processes the text in the file, turns it into speech, and generates an image based on the content."

ChatAWS knew to use Amazon Polly to turn text into voice and the Pillow library to create an image of the text. ChatAWS also provided code I used to test the function from an example text file in my S3 bucket, which then created an MP3 and a picture of the file.

Conclusion

In this post, I walked you through the process of building ChatAWS, a plugin that deploys Lambda functions and creates websites through ChatGPT. If you're interested in building other ChatGPT-powered applications, check out my post on building an AWS Well-Architected chatbot. We are still in the early days of generative AI, and the possibilities are endless. ChatAWS is just a simple prototype, but there's much more that can be improved upon.
Step-By-Step Guide To Building a Serverless Text-to-Speech Solution Using Golang on AWS

By Abhishek Gupta
The field of machine learning has advanced considerably in recent years, enabling us to tackle complex problems with greater ease and accuracy. However, the process of building and training machine learning models can be a daunting task, requiring significant investments of time, resources, and expertise. This can pose a challenge for many individuals and organizations looking to leverage machine learning to drive innovation and growth. That's where pre-trained AI services come in: they allow users to leverage the power of machine learning without extensive machine learning expertise and without having to build models from scratch, making it accessible to a wider audience. AWS machine learning services provide ready-made intelligence for your applications and workflows and integrate easily with your applications. And if you add serverless to the mix, well, that's just icing on the cake, because now you can build scalable and cost-effective solutions without having to worry about the backend infrastructure.

In this blog post, you will learn how to build a serverless text-to-speech conversion solution using Amazon Polly, AWS Lambda, and the Go programming language. Text files uploaded to Amazon Simple Storage Service (S3) will trigger a Lambda function which will convert them into MP3 audio format (using the AWS Go SDK) and store them in another S3 bucket. The Lambda function is written using the aws-lambda-go library, and you will use the infrastructure-as-code paradigm to deploy the solution with AWS CDK (thanks to the Go bindings for AWS CDK). As always, the code is available on GitHub.

Introduction

Amazon Polly is a cloud-based service that transforms written text into natural-sounding speech. By leveraging Amazon Polly, you can create interactive applications that enhance user engagement and make your content more accessible. With support for various languages and a diverse range of lifelike voices, Amazon Polly empowers you to build speech-enabled applications that cater to the needs of customers across different regions while selecting the perfect voice for your audience. Common use cases for Amazon Polly include, but are not limited to, mobile applications such as newsreaders, games, eLearning platforms, accessibility applications for visually impaired people, and the rapidly growing segment of the Internet of Things (IoT). For example:

E-learning and training: Create engaging audio content for e-learning courses and training materials, providing a more immersive learning experience for students.
Accessibility: Convert written content into speech, making it accessible to people with visual or reading impairments.
Digital media: Generate natural-sounding voices for podcasts, audiobooks, and other digital media content, enhancing the overall listening experience.
Virtual assistants and chatbots: Use lifelike voices to create more natural-sounding responses for virtual assistants and chatbots, making them more user-friendly and engaging.
Call centers: Create automated voice responses for call centers, reducing the need for live agents and improving customer service efficiency.
IoT devices: Integrate Amazon Polly into Internet of Things (IoT) devices, providing a voice interface for controlling smart home devices and other connected devices.

Overall, Amazon Polly's versatility and scalability make it a valuable tool for a wide range of applications. Let's learn Amazon Polly with a hands-on tutorial.
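To get a feel for the service before building the full pipeline, here is a minimal, standalone sketch (not part of the article's repository) that calls Polly from a local Go program using the AWS SDK for Go v2 and writes the resulting MP3 to disk. The voice and output format mirror the ones used later in the Lambda function; the output file name is just an example.

Go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/polly"
	"github.com/aws/aws-sdk-go-v2/service/polly/types"
)

func main() {
	// Load credentials and region from the default chain (env vars, shared config, etc.).
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	client := polly.NewFromConfig(cfg)

	// Convert a short piece of text to MP3 using the "Amy" voice.
	out, err := client.SynthesizeSpeech(context.Background(), &polly.SynthesizeSpeechInput{
		Text:         aws.String("Hello from Amazon Polly and Go."),
		OutputFormat: types.OutputFormatMp3,
		VoiceId:      types.VoiceIdAmy,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.AudioStream.Close()

	// Write the returned audio stream to a local file.
	f, err := os.Create("hello.mp3")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := io.Copy(f, out.AudioStream); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote hello.mp3")
}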
Prerequisites

Before you proceed, make sure you have the following installed: the Go programming language (v1.18 or higher), AWS CDK, and the AWS CLI.

Clone the project and change to the right directory:

git clone https://github.com/abhirockzz/ai-ml-golang-polly-text-to-speech
cd ai-ml-golang-polly-text-to-speech

Use AWS CDK To Deploy the Solution

The AWS Cloud Development Kit (AWS CDK) is a framework that lets you define your cloud infrastructure as code in one of its supported programming languages and provision it through AWS CloudFormation. To start the deployment, simply invoke cdk deploy and wait for a bit. You will see a list of resources that will be created and will need to provide your confirmation to proceed.

cd cdk
cdk deploy

# output
Bundling asset LambdaPollyTextToSpeechGolangStack/text-to-speech-function/Code/Stage...
✨ Synthesis time: 5.94s
//.... omitted
Do you wish to deploy these changes (y/n)? y

Enter y to start creating the AWS resources required for the application. If you want to see the AWS CloudFormation template which will be used behind the scenes, run cdk synth and check the cdk.out folder.

You can keep track of the stack creation progress in the terminal or navigate to the AWS console: CloudFormation > Stacks > LambdaPollyTextToSpeechGolangStack. Once the stack creation is complete, you should have:

Two S3 buckets - a source bucket to upload text files and a target bucket to store the converted audio files
A Lambda function to convert text to audio using Amazon Polly
A few other components (IAM roles, etc.)

You will also see the following output in the terminal (resource names will differ in your case). In this case, these are the names of the S3 buckets created by CDK:

✅ LambdaPollyTextToSpeechGolangStack

✨ Deployment time: 95.95s

Outputs:
LambdaPollyTextToSpeechGolangStack.sourcebucketname = lambdapollytexttospeechgolan-sourcebuckete323aae3-sh3neka9i4nx
LambdaPollyTextToSpeechGolangStack.targetbucketname = lambdapollytexttospeechgolan-targetbucket75a012ad-1tveajlcr1uoo
.....

You can now try out the end-to-end solution!

Convert Text to Speech

Upload a text file to the source S3 bucket. You can use the sample text files provided in the GitHub repository. I will be using the S3 CLI to upload the file, but you can use the AWS console as well.

export SOURCE_BUCKET=<enter source S3 bucket name - check the CDK output>

aws s3 cp ./file_1.txt s3://$SOURCE_BUCKET

# verify that the file was uploaded
aws s3 ls s3://$SOURCE_BUCKET

Wait for a while and check the target S3 bucket. You should see a new file with the same name as the text file you uploaded, but with a .mp3 extension - this is the audio file generated by Amazon Polly. Download it using the S3 CLI and play it to verify that the text was converted to speech.

export TARGET_BUCKET=<enter target S3 bucket name - check the CDK output>

# list contents of the target bucket
aws s3 ls s3://$TARGET_BUCKET

# download the audio file
aws s3 cp s3://$TARGET_BUCKET/file_1.mp3 .

Don't Forget To Clean Up

Once you're done, to delete all the services, simply use:

cdk destroy

# output prompt (choose 'y' to continue)
Are you sure you want to delete: LambdaPollyTextToSpeechGolangStack (y/n)?

You were able to set up and try the complete solution. Before we wrap up, let's quickly walk through some of the important parts of the code to get a better understanding of what's going on behind the scenes.

Code Walkthrough

We will only focus on the important parts - some of the code has been omitted for brevity.

CDK

You can refer to the complete CDK code here. We start by creating the source and target S3 buckets.

Go
sourceBucket := awss3.NewBucket(stack, jsii.String("source-bucket"), &awss3.BucketProps{
	BlockPublicAccess: awss3.BlockPublicAccess_BLOCK_ALL(),
	RemovalPolicy:     awscdk.RemovalPolicy_DESTROY,
	AutoDeleteObjects: jsii.Bool(true),
})

targetBucket := awss3.NewBucket(stack, jsii.String("target-bucket"), &awss3.BucketProps{
	BlockPublicAccess: awss3.BlockPublicAccess_BLOCK_ALL(),
	RemovalPolicy:     awscdk.RemovalPolicy_DESTROY,
	AutoDeleteObjects: jsii.Bool(true),
})

Then, we create the Lambda function and grant it the required permissions to read from the source bucket and write to the target bucket. A managed policy is also attached to the Lambda function's IAM role to allow it to access Amazon Polly.

Go
function := awscdklambdagoalpha.NewGoFunction(stack, jsii.String("text-to-speech-function"),
	&awscdklambdagoalpha.GoFunctionProps{
		Runtime:     awslambda.Runtime_GO_1_X(),
		Environment: &map[string]*string{"TARGET_BUCKET_NAME": targetBucket.BucketName()},
		Entry:       jsii.String(functionDir),
	})

sourceBucket.GrantRead(function, "*")
targetBucket.GrantWrite(function, "*")
function.Role().AddManagedPolicy(awsiam.ManagedPolicy_FromAwsManagedPolicyName(jsii.String("AmazonPollyReadOnlyAccess")))

We add an event source to the Lambda function to trigger it when a new file is uploaded to the source bucket.

Go
function.AddEventSource(awslambdaeventsources.NewS3EventSource(sourceBucket, &awslambdaeventsources.S3EventSourceProps{
	Events: &[]awss3.EventType{awss3.EventType_OBJECT_CREATED},
}))

Finally, we export the bucket names as CloudFormation outputs.

Go
awscdk.NewCfnOutput(stack, jsii.String("source-bucket-name"), &awscdk.CfnOutputProps{
	ExportName: jsii.String("source-bucket-name"),
	Value:      sourceBucket.BucketName()})

awscdk.NewCfnOutput(stack, jsii.String("target-bucket-name"), &awscdk.CfnOutputProps{
	ExportName: jsii.String("target-bucket-name"),
	Value:      targetBucket.BucketName()})

Lambda Function

You can refer to the complete Lambda function code here.

Go
// error handling omitted for brevity
func handler(ctx context.Context, s3Event events.S3Event) {
	for _, record := range s3Event.Records {
		sourceBucketName := record.S3.Bucket.Name
		fileName := record.S3.Object.Key

		err := textToSpeech(sourceBucketName, fileName)
	}
}

The Lambda function is triggered when a new file is uploaded to the source bucket. The handler function iterates over the S3 event records and calls the textToSpeech function to convert the text to speech. Let's go through the textToSpeech function.

Go
func textToSpeech(sourceBucketName, textFileName string) error {
	voiceID := types.VoiceIdAmy
	outputFormat := types.OutputFormatMp3

	// read the text file from the source bucket
	result, err := s3Client.GetObject(context.Background(), &s3.GetObjectInput{
		Bucket: aws.String(sourceBucketName),
		Key:    aws.String(textFileName),
	})

	buffer := new(bytes.Buffer)
	buffer.ReadFrom(result.Body)
	text := buffer.String()

	// convert the text to speech
	output, err := pollyClient.SynthesizeSpeech(context.Background(), &polly.SynthesizeSpeechInput{
		Text:         aws.String(text),
		OutputFormat: outputFormat,
		VoiceId:      voiceID,
	})

	var buf bytes.Buffer
	_, err = io.Copy(&buf, output.AudioStream)

	outputFileName := strings.Split(textFileName, ".")[0] + ".mp3"

	// write the MP3 audio to the target bucket
	_, err = s3Client.PutObject(context.TODO(), &s3.PutObjectInput{
		Body:        bytes.NewReader(buf.Bytes()),
		Bucket:      aws.String(targetBucket),
		Key:         aws.String(outputFileName),
		ContentType: output.ContentType,
	})

	return nil
}

The textToSpeech function first reads the text file from the source bucket. It then calls the Amazon Polly SynthesizeSpeech API to convert the text to speech. The output is an audio stream in MP3 format, which is written to the target S3 bucket.

Conclusion and Next Steps

In this post, you saw how to create a serverless solution that converts text to speech using Amazon Polly. The entire infrastructure life cycle was automated using AWS CDK. All this was done using the Go programming language, which is well-supported in AWS Lambda and AWS CDK.

Here are a few things you can try out to improve or extend this solution:

Ensure that the Lambda function is only granted fine-grained IAM permissions - for S3 and Polly (see the sketch below).
Try using different voices and/or output formats for the text-to-speech conversion.
Try using different languages for text-to-speech conversion.

Happy building!
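Expanding on the first suggestion above, one possible way to narrow the permissions in the CDK code shown earlier is to drop the AmazonPollyReadOnlyAccess managed policy and grant only the polly:SynthesizeSpeech action, and to swap the broad write grant on the target bucket for a put-only grant. This is a hedged sketch against the same CDK Go constructs used in the walkthrough (function, sourceBucket, and targetBucket refer to the variables from the snippets above), not code from the article's repository.

Go
// Instead of attaching the AmazonPollyReadOnlyAccess managed policy,
// grant only the single Polly action the function actually needs.
function.AddToRolePolicy(awsiam.NewPolicyStatement(&awsiam.PolicyStatementProps{
	Effect:    awsiam.Effect_ALLOW,
	Actions:   jsii.Strings("polly:SynthesizeSpeech"),
	Resources: jsii.Strings("*"), // SynthesizeSpeech does not target a specific resource ARN
}))

// S3 access can also be narrowed: read-only on the source bucket,
// put-only on the target bucket.
sourceBucket.GrantRead(function, "*")
targetBucket.GrantPut(function, "*")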
Architecture for Building a Serverless Data Pipeline Using AWS
By Zaiba Jamadar
Building Your Own Apache Kafka Connectors
By Ricardo Ferreira
Detect and Resolve Biases in Artificial Intelligence [Video]
By Rashmi Nambiar
Monitor and Predict Health Data Using AWS AI Services

This is a recording of the breakout session at re:Invent 2022 Las Vegas by AWS Hero Luca Bianchi. Posted with permission. Health systems lack the capability to account for comprehensive population health monitoring. Yet collecting data like oxygenation, temperature, blood tests, and glucose can identify the signs of underlying conditions early. Many home devices are connected and capable of acquiring and monitoring a vast number of vital signs to track a person's health across many relevant metrics. This talk will show how to build a serverless personal health solution leveraging AWS AI services to provide insight extraction, monitoring, and forecasting. Participants will see how to collect a time-variant dataset, use Amazon Lookout for Equipment to spot anomalies, and predict metrics levels.

By Rashmi Nambiar
Unleash Developer Productivity With Infrastructure From Code [Video]

This is a recording of breakout sessions from AWS Heroes at re:Invent 2022. Posted with permission. A new category of developer tools is emerging that challenges the primitive-centric approach to building modern cloud applications. The complexity of configuring services, implementing best practices, and securing workloads have led to major problems with developer productivity, cost, performance, and time-to-market. In the current "Cloud Native" landscape, most organizations are left with unfillable skill gaps and failed cloud initiatives.

By Rashmi Nambiar
Automate and Manage AWS Outposts Capacity Across Multi-Account AWS Setup [Video]

This is a recording of the breakout session led by AWS Hero Margaret Valtierra at AWS re:Invent 2022, Las Vegas. Posted with permission. Curious how, for mere dollars a month and minimal upkeep, you can centrally track and manage Outposts capacity across multiple AWS accounts? In this session, we’ll show a unique solution implemented at Morningstar by the Cloud Services team to do just that. We'll walk through how we arrived at the architecture of the solution that uses lambdas, DynamoDB, CloudWatch, S3, and a custom API to track capacity and block users from overspending their quota.

By Rashmi Nambiar
Bringing Software Engineering Rigor to Data [Video]

This is a recording of a breakout session from AWS Heroes at re:Invent 2022, presented by AWS Hero Zainab Maleki. Posted with permission. In software engineering, we've learned that building robust and stable applications has a direct correlation with overall organization performance. The data community is striving to incorporate the core concepts of engineering rigor found in software communities but still has further to go. This talk covers ways to leverage software engineering practices for data engineering and demonstrates how measuring key performance metrics could help build more robust and reliable data pipelines. This is achieved through practices like Infrastructure as Code for deployments, automated testing, application observability, and end-to-end application lifecycle ownership.

By Rashmi Nambiar
Improving Performance of Serverless Java Applications on AWS

This article was authored by AWS Sr. Developer Advocates Mohammed Fazalullah Qudrath and Viktor Vedmich, and published with permission.

In this article, you will understand the basics of how Lambda execution environments operate and the different ways to improve the startup time and performance of Java applications on Lambda.

Why Optimize Java Applications on AWS Lambda?

AWS Lambda's pricing is designed to charge you based on the execution duration, rounded to the nearest millisecond. The cost of executing a Lambda function is proportional to the amount of memory allocated to the function. Therefore, performance optimization may also result in long-term cost optimization.

For Java-managed runtimes, a new JVM is started and the Java application code is loaded. This results in additional overhead compared to interpreted languages and contributes to initial start-up performance. To understand this better, the next section gives a high-level overview of how AWS Lambda execution environments work.

A Peek Under the Hood of AWS Lambda

AWS Lambda execution environments are the infrastructure upon which your code is executed. When creating a Lambda function, you provide a runtime (e.g., Java 11). The runtime includes all dependencies necessary to execute your code (for example, the JVM). A function's execution environment is isolated from other functions and the underlying infrastructure by using the Firecracker micro VM technology. This ensures the security and integrity of the system.

When your function is invoked for the first time, a new execution environment is created. It downloads your code, launches the JVM, and then initializes and executes your application. This is called a cold start. The initialized execution environment can then process a single request at a time and remains warm for subsequent requests for a given time. If a warm execution environment is not available for a subsequent request (e.g., when receiving concurrent invocations), a new execution environment has to be created, which results in another cold start. Therefore, to optimize Java applications on Lambda, one has to look at the execution environment, the time it takes to load dependencies, and how fast it can be spun up to handle requests.

Improvements Without Changing Code

Choosing the Right Memory Settings With Power Tuning

Choosing the memory allocated to Lambda functions is a balancing act between speed (duration) and cost. While you can manually run tests on functions by configuring alternative memory allocations and evaluating the completion time, the AWS Lambda Power Tuning application automates the process. This tool uses AWS Step Functions to execute numerous concurrent versions of a Lambda function with varying memory allocations and measure performance. The input function is executed in your AWS account, with live HTTP requests and SDK interaction, in order to evaluate performance in a live production scenario. You can graph the results to visualize the performance and cost trade-offs. In this example, you can see that a function has the lowest cost at 2048 MB of memory, but the fastest execution at 3072 MB.

Enabling Tiered Compilation

Tiered compilation is a feature of the Java HotSpot Virtual Machine (JVM) that allows the JVM to apply several optimization levels when translating Java bytecode to native code. It is designed to improve the startup time and performance of Java applications.
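A common way to apply this on Lambda without touching the application code is to set the JAVA_TOOL_OPTIONS environment variable on the function so the JVM stops at the first compilation tier. The sketch below shows this with the AWS CDK in Go purely for illustration; the construct ID, handler, memory size, and jar path are placeholder assumptions, and you can set the same variable from the console or any other tooling.

Go
package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awslambda"
	"github.com/aws/jsii-runtime-go"
)

// buildJavaFunction is a sketch only: it configures a Java Lambda function so the
// JVM stops at compilation tier 1, which typically shortens cold starts.
func buildJavaFunction(stack awscdk.Stack) awslambda.Function {
	return awslambda.NewFunction(stack, jsii.String("java-api-function"), &awslambda.FunctionProps{
		Runtime:    awslambda.Runtime_JAVA_11(),
		Handler:    jsii.String("com.example.App::handleRequest"), // placeholder handler
		Code:       awslambda.Code_FromAsset(jsii.String("target/app.jar"), nil),
		MemorySize: jsii.Number(1024),
		Environment: &map[string]*string{
			// Stop at C1 (tier 1): faster JVM startup at some cost to peak throughput.
			"JAVA_TOOL_OPTIONS": jsii.String("-XX:+TieredCompilation -XX:TieredStopAtLevel=1"),
		},
	})
}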
The Java optimization workshop shows the steps to enable tiered compilation on a Lambda function.

Activating SnapStart on a Lambda Function

SnapStart is a new feature announced for AWS Lambda functions running on Java Corretto 11. When you publish a function version with SnapStart enabled, Lambda initializes your function and takes a snapshot of the memory and disk state of the initialized execution environment. It encrypts the snapshot and caches it for low-latency access. When the function is invoked for the first time, and as the invocations scale up, Lambda resumes new execution environments from the cached snapshot instead of initializing them from scratch, improving startup latency. This feature can improve the startup performance of latency-sensitive applications by up to 10x at no extra cost, typically with no changes to your function code. Follow these steps to activate SnapStart on a Lambda function.

Configuring Provisioned Concurrency

Provisioned Concurrency on AWS Lambda keeps functions initialized and hyper-ready to reply in double-digit milliseconds. When this feature is enabled for a Lambda function, the Lambda service prepares the specified number of execution environments to respond to invocations. You pay for the memory reserved, whether it is used or not. This can also result in costs lower than the on-demand price in cases where there is a constant load on your function.

Improvements With Code Refactoring

Using Lightweight Dependencies

Choose between alternative dependencies based on feature richness versus performance needs. For example, when choosing a logging library, use the SLF4J SimpleLogger if the need is to just log entries, instead of Log4j2, which brings many more features to the table that may not be used. This can improve the startup time by nearly 25%. The Spring Data JPA and Spring Data JDBC data access frameworks can be compared in the same manner: using Spring Data JDBC will reduce the startup time of a Lambda function by approximately 25% by compromising on the feature set of an advanced ORM.

Optimizing for the AWS SDK

Ensure that any costly actions, such as making an S3 client connection or a database connection, are performed outside the handler code. The same instance of your function can reuse these resources for future invocations. This saves cost by reducing the function duration. In the following example, the S3Client instance is initialized in the constructor using a static factory method. If the container that is managed by the Lambda environment is reused, the initialized S3Client instance is reused.

Java
public class App implements RequestHandler<Object, Object> {

    private final S3Client s3Client;

    public App() {
        // Initialized once per execution environment and reused across invocations
        s3Client = DependencyFactory.s3Client();
    }

    @Override
    public Object handleRequest(final Object input, final Context context) {
        ListBucketsResponse response = s3Client.listBuckets();
        // Process the response.
        return response;
    }
}

When utilizing the AWS SDK to connect to other AWS services, you can accelerate the loading of libraries during initialization by making dummy API calls outside the function handler. These dummy calls will initialize any lazy-loaded parts of the library and additional processes such as TLS handshakes.

Java
public class App implements RequestHandler<Object, Object> {

    private final DynamoDbClient ddb = DynamoDbClient.builder()
            .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
            .region(Region.of(System.getenv("AWS_REGION")))
            .httpClientBuilder(UrlConnectionHttpClient.builder())
            .build();

    private static final String tableName = System.getenv("DYNAMODB_TABLE_NAME");

    public App() {
        // Dummy call made during initialization to warm up the SDK and TLS connection
        DescribeTableRequest request = DescribeTableRequest.builder()
                .tableName(tableName)
                .build();
        try {
            TableDescription tableInfo = ddb.describeTable(request).table();
            if (tableInfo != null) {
                System.out.println("Table found: " + tableInfo.tableArn());
            }
        } catch (DynamoDbException e) {
            System.out.println(e.getMessage());
        }
    }

    @Override
    public Object handleRequest(final Object input, final Context context) {
        // Handler code
        return null;
    }
}

Leveraging Cloud-Native Frameworks With GraalVM

GraalVM is a universal virtual machine that supports JVM-based languages like Java, Scala, and Kotlin, along with dynamic languages like Python and JavaScript, and LLVM-based languages such as C and C++. GraalVM enables Ahead-of-Time (AOT) compilation of Java programs into a self-contained native executable, called a native image. The executable is optimized and contains everything needed to run the application, and it has a faster startup time and smaller heap memory footprint compared with a JVM. Modern cloud frameworks such as Micronaut and Quarkus (and the recent Spring Boot 3 release) support building GraalVM native images as part of the build process.

In conclusion, you have seen various approaches to optimizing Java applications on AWS Lambda. The resources linked below go in-depth on these suggestions, along with a workshop you can follow to apply these and more optimizations to a sample Spring Boot application.

References

Security Overview of AWS Lambda Whitepaper
Operating Lambda: Performance optimization – Part 1
Operating Lambda: Performance optimization – Part 2
Configuring provisioned concurrency
Reducing Java cold starts on AWS Lambda functions with SnapStart
Java on AWS Lambda

By Rashmi Nambiar
Architect Data Analytics Engine for Your Business Using AWS and Serverless

This article was authored by AWS Sr. Developer Advocate Wojciech Gawronski and AWS Sr. Solutions Architect Lukasz Panusz, and published with permission.

According to estimates, by 2030 humans will generate an ocean of data of up to 572 zettabytes, which is equal to 572 million petabytes. This poses the question: How can you prepare IT infrastructures for that without inflating infrastructure costs and spending hours on maintenance? Secondly, how can organizations start producing insights from gathered information without worrying about capacity planning, infrastructure, and operational complexities? One of the answers is by using serverless data analytics in the AWS Cloud. A hands-on exercise is available in "The Serverless Future of Cloud Data Analytics" workshop, while this article shows an approach to architectural design that you can use at your organization to start the data journey and quickly get results.

Start From a Business Case

Imagine that you are part of the technical team driving the growth of the Serverlesspresso startup. It is a company that created an interactive and innovative way to deliver coffee at IT events (in fact, it was introduced during AWS re:Invent 2021). Developers are well-known for their caffeine thirst, and the market is growing remarkably. To reflect the high dynamics of the business, the platform was built using purely serverless services from AWS. The advantages of such an approach, among others, are:

Ability to scale down to 0 between IT events, limiting the costs and required maintenance effort
No infrastructure to manage - only configuration
Applied evolutionary architecture, enabling rapid expandability of the platform with new features

It's great, but as stated in the introduction – what about data? To grow the business, at some point you will have to implement a data strategy: an engine that will not only produce tangible, actionable business insights, but will also retain historical information about IT events, collect new data, dynamically adjust to the current traffic, and provide an automated way of pre-processing and analyzing information.

Break It Down Into Smaller Puzzles

Knowing the business context, you have to approach the design phase, which will lead to implementation. The act of architecting solutions is a process: a process that focuses on making the best possible decisions to meet business expectations with the technology. To make the process efficient and less complex, it is worth breaking the Data Analytics Engine into smaller pieces. There are a few riddles you have to solve.

Look at the diagram above. On the left, you can see the existing implementation - a system that generates events connected with coffee orders, preparation, and distribution. On the side, you have collected historical data, which is stored in Amazon S3. The new part that you have to design and implement is on the right. Inside, you can quickly spot four question marks, which are:

Data Gathering – How do you efficiently acquire the data from various data sources? Where do you save it?
Data Transformation – How do you combine incoming information from various sources, in various formats, into one common schema? How do you enrich the data to increase its quality and usability?
Data Analysis – Are the data collections self-explanatory? How do you build new aggregates that offer a new perspective, shed more light on trends, or speed up report generation?
Data Visualization – What is the best way to present the data? What kind of data should be tracked in real time versus data going into last month's activity reports?

The questions above are just examples to help you design the best possible solution. One big thing that remains on the side is: what data storage should you use? Should it be a data lake, a data warehouse, a regular database, or NoSQL?

Building a Frame

When designing modern data solutions that have to work at scale, fueling different areas of business and technology, it is worth thinking about data lakes. In short, a data lake is a central location where you can store data incoming from any number of channels, of any size and format. The difference compared to data warehouses is the lack of necessity for schema planning, as well as for relations between the data. This loose, schema-less approach introduces flexibility.

Do you remember the stages we have to design in data analytics solutions? It is a good practice to create a few data layers that have named responsibilities and handle certain stages of data processing. For example:

Data Gathering stage – This uploads unchanged, original data to the raw layer inside the data lake. It gives two main benefits: improved data quality control through separation, along with the ability to parallelize data processing to fuel different parts of the organization with a variety of data.
Data Transformation stage – This is a one-way process that starts from the raw layer in the data lake and can perform several operations: cleanup, deduplication, enrichment, joins, etc. The output should be loaded to a cleaned or enriched layer in the data lake. This is the central source of information for the rest of the platform and other systems.
Data Analysis & Visualization stage – Depending on the technology and type of analysis, we can decide to further enrich the data and load it to a data warehouse, relational database, or external tool. The entry point is always the enriched layer of the data lake. The final destination depends on the use case.

Once you are clear with the data strategy, including stages of processing, access rights for people and applications, steps in transformation, communication, and data flows, you are set to implement the actual solution.

Data Acquisition

In the first step, you have to capture incoming coffee events as they appear and save them to the raw layer inside the data lake. To accelerate the setup, you can use the Amazon MSK Serverless service. It was designed to give an easy way to run Apache Kafka. MSK Serverless automatically provisions and scales compute and storage resources, so you can use it on demand and pay for the data you stream and retain.

To implement a fully automated process that captures events from your system in near-real time and then saves them to the data lake, you have to configure Kafka Connect. You can imagine a connector as an agent that works on your behalf, knowing how to connect to the data producers (your system) and send data to Kafka topics. On the other end of the topic (data pipe) is another connector (agent) that knows how to communicate with Amazon S3 – which is the foundation for our layered data lake. You have three main ways to configure connectors and connect them to the existing Amazon MSK Serverless cluster:

Run Kafka Connect connectors via the AWS Fargate service - You can push containerized connectors to the container repository and run them using the serverless solution. It will be both an operationally and cost-efficient solution.
Run Kafka Connect connectors via Amazon MSK Connect - This is a native approach with wizard-driven configuration. It is serverless, managed, and straightforward. You can offload all operational and infrastructure-heavy lifting to AWS.
Run Kafka Connect connectors via a container on infrastructure managed by you - This may be a decent approach for testing and learning, but it is not sustainable for production use cases. Event acquisition will stop if your instance terminates.

Data Transformation

When data acquisition is working and capturing coffee events in near-real time, you should look into the information they carry. Understanding data quality, correlations, and possible enrichment is a crucial part of designing the data transformation part of your data engine. Imagine: raw data can contain only IDs and other alphanumeric values from which it would be very difficult to derive meaning. This is typical, as systems are designed to work on lightweight events to serve thousands of requests per second without any delay. It means that the raw layer provides little to no value in terms of business insights at this moment. One way to overcome such a challenge is to build metadata stores explaining the meaning of the fields, as well as dictionaries giving you the opportunity to make data readable.

The diagram above is an example of such an implementation. Amazon DynamoDB stores dictionaries and all additional data that can be used in the transformation. As you can see, the data flow is straightforward. You start by pulling the data from the raw layer, process it with dedicated jobs that implement the logic, and then save it to a separate layer inside the data lake. How does this automation work under the hood?

Automated schema discovery with AWS Glue – In the first step, you have to be able to create a mapping of the existing information in the raw layer of your data lake. Although we said that data can be unstructured and in various formats, you need to know what fields are available across data sets to be able to perform any kind of operation. AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development. In this case, the crawler runs automatically every hour to catalog schemas.
Enriching the data with Amazon EMR Serverless – This option in Amazon EMR enables data analysts and engineers to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. By taking Apache Spark (a distributed processing framework and programming model helping with ML, stream processing, or graph analytics) and implementing the logic for data joins, deduplication, replacement of IDs with meaningful dictionary values, and transforming the final data set to Apache Parquet (a binary, column-based format designed to deliver the best performance for analytical operations), you will be able to create a collection ready for the final stage: analysis. Output from the job is saved to the enriched layer in the data lake.

As the process is serverless, you don't have to manage the underlying infrastructure. After configuring basic parameters and setting job triggers (frequency), the process will work on the information enrichment in a fully automated way.

Data Warehouse and Visualization

The data acquisition and transformation pipeline you've built provides a strong foundation for the final step, which is data analysis. Having all of the coffee events in the data lake in an enriched format, you have to find a way to efficiently query them and gain business insights. Among many possibilities, the use of a flexible cloud data warehouse seems to fit the purpose.

A data warehouse is a central repository of information that can be analyzed to make more informed decisions. Data flows into it on a regular cadence or on demand. Business users rely on reports, dashboards, and analytics tools to extract insights from their data, monitor business performance, and support decision-making. A data warehouse architecture is made up of tiers. The top tier is the front-end client that presents reporting, the middle tier consists of the analytics engine used to access and analyze the data, and the bottom tier is the database server, where data is loaded and stored. Within a data warehouse, we operate on facts and dimensions, where:

Facts - These are measured events related to the business or functionality. They are quantitative information associated with time: e.g., people visiting Serverlesspresso booths at the event and ordering coffee are our business areas, therefore their coffee orders flowing into the system during the event are facts.
Dimensions - These are collections of information about facts. They categorize and describe fact records, which allows them to provide descriptive answers to business questions. Dimensions act as lightweight dictionaries with a small number of rows, so their impact on performance is low. They have numerous attributes, which are mainly textual or temporal. Those attributes are used in the analysis phase to view the facts from various angles and to provide filtering criteria; e.g., IT event metadata like event name, start time, date, number of attendees, etc.

Amazon Redshift Serverless, visible on the diagram, makes it easy for you to run petabyte-scale analytics in seconds to get rapid insights without having to configure and manage data warehouse clusters. Amazon Redshift Serverless automatically provisions and scales the data warehouse capacity to deliver high performance for demanding and unpredictable workloads, and you pay only for the resources you use. Once you load the data from the enriched layer inside the data lake into your data warehouse, you have three basic options for how to quickly plug in and work on the analysis:

Connect via the PostgreSQL client psql (or any other JDBC-compatible client) from a terminal session connected to the data warehouse, and write relevant OLAP queries.
Use Amazon Redshift Query Editor v2.0 to connect and write relevant queries.
Connect Amazon QuickSight with the data warehouse as a data source, and explore and visualize the available data via an Amazon QuickSight Analysis.

Successful implementation of the three stages described above automates the process of data acquisition, processing, and analysis to the point where you can start answering business questions such as:

How many coffee cups have been distributed so far, overall and per event?
What percentage of orders has your business delivered, brewed but not delivered, or completely lost so far?
What is the average total lead time (order-to-delivery) and total brewing time (order-to-brew)?

Modern Data Analytics Insights

You don't have to have massive volumes of data to start generating insights for your business. The presented approach is strongly based on an evolutionary approach - start small and grow, ensuring that each evolution brings additional business value. AWS Cloud and serverless enable organizations in their data journeys: you don't need an army of scientists and infrastructure support to start collecting, processing, and getting value out of the data that your systems already have. Reduced effort in maintenance, configuration, and monitoring of the infrastructure, along with automated transformation and processing, saves time and allows more experimentation. The approach described in this article is one of the possibilities. The design of such solutions should always start from the actual business case. If you are curious and want to learn through a hands-on exercise, visit "The Serverless Future of Cloud Data Analytics" workshop.
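To make the raw layer idea from the Data Gathering stage a little more concrete, here is a small, hypothetical Go sketch that writes a single coffee-order event into a date-partitioned prefix of the data lake's raw layer using the AWS SDK for Go v2. In the architecture described above this job is handled by Kafka Connect sinking to Amazon S3, so treat this only as a way to seed test data while experimenting; the bucket name, key layout, and event fields are assumptions.

Go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// coffeeOrderEvent is a made-up event shape for illustration only.
type coffeeOrderEvent struct {
	OrderID   string    `json:"order_id"`
	Drink     string    `json:"drink"`
	EventName string    `json:"event_name"`
	OrderedAt time.Time `json:"ordered_at"`
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	evt := coffeeOrderEvent{OrderID: "o-123", Drink: "flat-white", EventName: "reinvent-2022", OrderedAt: time.Now().UTC()}
	payload, _ := json.Marshal(evt)

	// Date-partitioned key inside the raw layer, e.g. raw/year=2023/month=05/day=01/o-123.json
	key := fmt.Sprintf("raw/year=%d/month=%02d/day=%02d/%s.json",
		evt.OrderedAt.Year(), evt.OrderedAt.Month(), evt.OrderedAt.Day(), evt.OrderID)

	_, err = client.PutObject(context.Background(), &s3.PutObjectInput{
		Bucket:      aws.String("serverlesspresso-data-lake"), // assumed bucket name
		Key:         aws.String(key),
		Body:        bytes.NewReader(payload),
		ContentType: aws.String("application/json"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("raw event written to", key)
}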

By Rashmi Nambiar
Write Your Kubernetes Infrastructure as Go Code - Manage AWS Services

AWS Controllers for Kubernetes (also known as ACK) is built around the Kubernetes extension concepts of Custom Resource and Custom Resource Definitions. You can use ACK to define and use AWS services directly from Kubernetes. This helps you take advantage of managed AWS services for your Kubernetes applications without needing to define resources outside of the cluster. Say you need to use an AWS S3 Bucket in your application that’s deployed to Kubernetes. Instead of using AWS console, AWS CLI, AWS CloudFormation etc., you can define the AWS S3 Bucket in a YAML manifest file and deploy it using familiar tools such as kubectl. The end goal is to allow users (Software Engineers, DevOps engineers, operators etc.) to use the same interface (Kubernetes API in this case) to describe and manage AWS services along with native Kubernetes resources such as Deployment, Service etc. Here is a diagram from the ACK documentation, that provides a high-level overview: Working of ACK What’s Covered in This Blog Post? ACK supports many AWS services, including Amazon DynamoDB. One of the topics that this blog post covers is how to use ACK on Amazon EKS for managing DynamoDB. But, just creating a DynamoDB table isn't going to be all that interesting! In addition to it, you will also work with and deploy a client application — this is a trimmed-down version of the URL shortener app covered in a previous blog post. While the first half of the blog will involve manual steps to help you understand the mechanics and get started, in the second half, we will switch to cdk8s and achieve the same goals using nothing but Go code. Cdk8s? What, Why? Because Infrastructure Is Code cdk8s (Cloud Development Kit for Kubernetes) is an open-source framework (part of CNCF) that allows you to define your Kubernetes applications using regular programming languages (instead of yaml). I have written a few blog posts around cdk8s and Go, that you may find useful. We will continue on the same path i.e. push yaml to the background and use the Go programming language to define the core infrastructure (that happens to be DynamoDB in this example, but could be so much more) as well as the application components (Kubernetes Deployment, Service etc.). This is made possible due to the following cdk8s features: cdk8s support for Kubernetes Custom Resource definitions that lets us magically import CRD as APIs. a cdk8s-plus library that helps reduce/eliminate a lot of boilerplate code while working with Kubernetes resources in our Go code (or any other language for that matter) Before you dive in, please ensure you complete the prerequisites in order to work through the tutorial. The entire code for the infrastructure and the application is available on GitHub Pre-requisites To follow along step-by-step, in addition to an AWS account, you will need to have AWS CLI, cdk8s CLI, kubectl, helm and the Go programming language installed. There are a variety of ways in which you can create an Amazon EKS cluster. I prefer using eksctl CLI because of the convenience it offers! Ok, let’s get started. 
The first thing we need to do is… Set up the DynamoDB Controller Most of the below steps are adapted from the ACK documentation - Install an ACK Controller Install It Using Helm: Shell export SERVICE=dynamodb Change/update this as required as per the latest release document Shell export RELEASE_VERSION=0.1.7 Shell export ACK_SYSTEM_NAMESPACE=ack-system # you can change the region if required export AWS_REGION=us-east-1 aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws helm install --create-namespace -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller \ oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION --set=aws.region=$AWS_REGION To confirm, run: Shell kubectl get crd Shell # output (multiple CRDs) tables.dynamodb.services.k8s.aws fieldexports.services.k8s.aws globaltables.dynamodb.services.k8s.aws # etc.... Since the DynamoDB controller has to interact with AWS Services (make API calls), and we need to configure IAM Roles for Service Accounts (also known as IRSA). Refer to Configure IAM Permissions for details IRSA Configuration First, create an OIDC identity provider for your cluster. Shell export EKS_CLUSTER_NAME=<name of your EKS cluster> export AWS_REGION=<cluster region> eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --region $AWS_REGION --approve The goal is to create an IAM role and attach appropriate permissions via policies. We can then create a Kubernetes Service Account and attach the IAM role to it. Thus, the controller Pod will be able to make AWS API calls. Note that we are using providing all DynamoDB permissions to our control via the arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess policy. Thanks to eksctl, this can be done with a single line! Shell export SERVICE=dynamodb export ACK_K8S_SERVICE_ACCOUNT_NAME=ack-$SERVICE-controller Shell # recommend using the same name export ACK_SYSTEM_NAMESPACE=ack-system export EKS_CLUSTER_NAME=<enter EKS cluster name> export POLICY_ARN=arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess Shell # IAM role has a format - do not change it. 
you can't use any arbitrary name export IAM_ROLE_NAME=ack-$SERVICE-controller-role Shell eksctl create iamserviceaccount \ --name $ACK_K8S_SERVICE_ACCOUNT_NAME \ --namespace $ACK_SYSTEM_NAMESPACE \ --cluster $EKS_CLUSTER_NAME \ --role-name $IAM_ROLE_NAME \ --attach-policy-arn $POLICY_ARN \ --approve \ --override-existing-serviceaccounts The policy is per recommended policy To confirm, you can check whether the IAM role was created and also introspect the Kubernetes service account Shell aws iam get-role --role-name=$IAM_ROLE_NAME --query Role.Arn --output text Shell kubectl describe serviceaccount/$ACK_K8S_SERVICE_ACCOUNT_NAME -n $ACK_SYSTEM_NAMESPACE You will see similar output: Shell Name: ack-dynamodb-controller Namespace: ack-system Labels: app.kubernetes.io/instance=ack-dynamodb-controller app.kubernetes.io/managed-by=eksctl app.kubernetes.io/name=dynamodb-chart app.kubernetes.io/version=v0.1.3 helm.sh/chart=dynamodb-chart-v0.1.3 k8s-app=dynamodb-chart Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::<your AWS account ID>:role/ack-dynamodb-controller-role meta.helm.sh/release-name: ack-dynamodb-controller meta.helm.sh/release-namespace: ack-system Image pull secrets: <none> Mountable secrets: ack-dynamodb-controller-token-bbzxv Tokens: ack-dynamodb-controller-token-bbzxv Events: <none> For IRSA to take effect, you need to restart the ACK Deployment: Shell # Note the deployment name for ACK service controller from following command kubectl get deployments -n $ACK_SYSTEM_NAMESPACE Shell kubectl -n $ACK_SYSTEM_NAMESPACE rollout restart deployment ack-dynamodb-controller-dynamodb-chart Confirm that the Deployment has restarted (currently Running) and the IRSA is properly configured: Shell kubectl get pods -n $ACK_SYSTEM_NAMESPACE Shell kubectl describe pod -n $ACK_SYSTEM_NAMESPACE ack-dynamodb-controller-dynamodb-chart-7dc99456c6-6shrm | grep "^\s*AWS_" # The output should contain following two lines: Shell AWS_ROLE_ARN=arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME> AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token Now that we’re done with the configuration … We Can Move On to the Fun Parts! Start by Creating the DynamoDB Table Here is what the manifest looks like: YAML apiVersion: dynamodb.services.k8s.aws/v1alpha1 kind: Table metadata: name: dynamodb-urls-ack spec: tableName: urls attributeDefinitions: - attributeName: shortcode attributeType: S billingMode: PAY_PER_REQUEST keySchema: - attributeName: email keyType: HASH Clone the project, change to the correct directory and apply the manifest: Shell git clone https://github.com/abhirockzz/dynamodb-ack-cdk8s cd dynamodb-ack-cdk8s Shell # create table kubectl apply -f manual/dynamodb-ack.yaml You can check the DynamoDB table in the AWS console, or use the AWS CLI (aws dynamodb list-tables) to confirm.Our table is ready, now we can deploy our URL shortener application. But, before that, we need to create a Docker image and push it to a private repository in Amazon Elastic Container Registry (ECR). Create a Private Repository in Amazon ECR Login to ECR: Shell aws ecr get-login-password --region <enter region> | docker login --username AWS --password-stdin <enter aws_account_id>.dkr.ecr.<enter region>.amazonaws.com Create Repository: Shell aws ecr create-repository \ --repository-name dynamodb-app \ --region <enter AWS region> Build the Image and Push It to ECR Shell # if you're on Mac M1 #export DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build -t dynamodb-app . 
Shell
docker tag dynamodb-app:latest <enter aws_account_id>.dkr.ecr.<enter region>.amazonaws.com/dynamodb-app:latest

Shell
docker push <enter aws_account_id>.dkr.ecr.<enter region>.amazonaws.com/dynamodb-app:latest

Just like the controller, our application also needs IRSA to be able to execute GetItem and PutItem API calls on DynamoDB.

Let’s Create Another IRSA for the Application

Shell
# you can change the policy name. make sure to use the same name in subsequent commands
aws iam create-policy --policy-name dynamodb-irsa-policy --policy-document file://manual/policy.json

Shell
eksctl create iamserviceaccount --name eks-dynamodb-app-sa --namespace default --cluster <enter EKS cluster name> --attach-policy-arn arn:aws:iam::<enter AWS account ID>:policy/dynamodb-irsa-policy --approve

Shell
kubectl describe serviceaccount/eks-dynamodb-app-sa

Output:

Shell
Name:                eks-dynamodb-app-sa
Namespace:           default
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::<AWS account ID>:role/eksctl-eks-cluster-addon-iamserviceaccount-d-Role1-2KTGZO1GJRN
Image pull secrets:  <none>
Mountable secrets:   eks-dynamodb-app-sa-token-5fcvf
Tokens:              eks-dynamodb-app-sa-token-5fcvf
Events:              <none>

Finally, we can deploy our application! In the manual/app.yaml file, make sure you replace the following attributes as per your environment:

- Service account name — the above example used eks-dynamodb-app-sa
- Docker image
- Container environment variables for the AWS region (e.g. us-east-1) and table name (this will be urls since that's the name we used)

Shell
kubectl apply -f manual/app.yaml

Shell
# output
deployment.apps/dynamodb-app configured
service/dynamodb-app-service created

This will create a Deployment as well as a Service to access the application. Since the Service type is LoadBalancer, an appropriate AWS Load Balancer will be provisioned to allow for external access.

Check Pod and Service:

Shell
kubectl get pods
kubectl get service/dynamodb-app-service

Shell
# to get the load balancer hostname
APP_URL=$(kubectl get service/dynamodb-app-service -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo $APP_URL

Shell
# output example
a0042d5b5b0ad40abba9c6c42e6342a2-879424587.us-east-1.elb.amazonaws.com

You have deployed the application and know the endpoint over which it’s publicly accessible. You can try out the URL shortener service:

Shell
curl -i -X POST -d 'https://abhirockzz.github.io/' $APP_URL:9090/

Shell
# output
HTTP/1.1 200 OK
Date: Thu, 21 Jul 2022 11:03:40 GMT
Content-Length: 25
Content-Type: text/plain; charset=utf-8

JSON
{"ShortCode":"ae1e31a6"}

If you get a Could not resolve host error while accessing the LB URL, wait for a minute or so and re-try.

You should receive a JSON response with a short code. Enter the following in your browser: http://<enter APP_URL>:9090/<shortcode> e.g. http://a0042d5b5b0ad40abba9c6c42e6342a2-879424587.us-east-1.elb.amazonaws.com:9090/ae1e31a6 - you will be redirected to the original URL.

You can also use curl:

Shell
# example
curl -i http://a0042d5b5b0ad40abba9c6c42e6342a2-879424587.us-east-1.elb.amazonaws.com:9090/ae1e31a6

# output
HTTP/1.1 302 Found
Location: https://abhirockzz.github.io/
Date: Thu, 21 Jul 2022 11:07:58 GMT
Content-Length: 0

Enough of YAML, I guess! As I promised earlier, the second half will demonstrate how to achieve the same using cdk8s and Go.
Kubernetes Infrastructure as Go Code With Cdk8s

Assuming you’ve already cloned the project (as per the above instructions), change to a different directory:

Shell
cd dynamodb-cdk8s

This is a pre-created cdk8s project that you can use. The entire logic is present in the main.go file. We will first try it out and confirm that it works the same way. After that, we will dive into the nitty-gritty of the code.

Delete the previously created DynamoDB table along with the Deployment (and Service):

Shell
# you can also delete the table directly from the AWS console
aws dynamodb delete-table --table-name urls

# this will delete Deployment and Service (as well as AWS Load Balancer)
kubectl delete -f manual/app.yaml

Use cdk8s synth to generate the manifest for the DynamoDB table and the application. We can then apply it using kubectl.

See the difference? Earlier, we defined the DynamoDB table, Deployment (and Service) manifests manually. cdk8s does not remove YAML altogether, but it provides a way for us to leverage regular programming languages (Go in this case) to define the components of our solution.

Shell
export TABLE_NAME=urls
export SERVICE_ACCOUNT=eks-dynamodb-app-sa
export DOCKER_IMAGE=<enter ECR repo that you created earlier>

cdk8s synth

ls -lrt dist/

#output
0000-dynamodb.k8s.yaml
0001-deployment.k8s.yaml

You will see two different manifests being generated by cdk8s since we defined two separate cdk8s.Charts in the code - more on this soon. We can deploy them one by one:

Shell
kubectl apply -f dist/0000-dynamodb.k8s.yaml

#output
table.dynamodb.services.k8s.aws/dynamodb-dynamodb-ack-cdk8s-table-c88d874d created
configmap/export-dynamodb-urls-info created
fieldexport.services.k8s.aws/export-dynamodb-tablename created
fieldexport.services.k8s.aws/export-dynamodb-region created

As always, you can check the DynamoDB table either in the console or with the AWS CLI - aws dynamodb describe-table --table-name urls.

Looking at the output, the DynamoDB table part seems familiar... But what’s fieldexport.services.k8s.aws?? … And why do we need a ConfigMap? I will give you the gist here.

In the previous iteration, we hard-coded the table name and region in manual/app.yaml. While this works for this sample app, it is not scalable and might not even work for some resources whose metadata (like the name) is randomly generated. That's why there is the concept of a FieldExport that can "export any spec or status field from an ACK resource into a Kubernetes ConfigMap or Secret". You can read up on the details in the ACK documentation along with some examples. What you will see here is how to define a FieldExport and ConfigMap along with the Deployment, which needs to be configured to accept its environment variables from the ConfigMap - all this in code, using Go (more on this during the code walk-through).

Check the FieldExport and ConfigMap:

Shell
kubectl get fieldexport

#output
NAME                        AGE
export-dynamodb-region      19s
export-dynamodb-tablename   19s

kubectl get configmap/export-dynamodb-urls-info -o yaml

We started out with a blank ConfigMap (as per cdk8s logic), but ACK magically populated it with the attributes from the Table custom resource.

YAML
apiVersion: v1
data:
  default.export-dynamodb-region: us-east-1
  default.export-dynamodb-tablename: urls
immutable: false
kind: ConfigMap
#....omitted

We can now use the second manifest — no surprises here. Just like in the previous iteration, all it contains is the application Deployment and the Service.
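As an aside, all of this plumbing ultimately surfaces inside the application container as plain environment variables. The sketch below is a minimal, hypothetical illustration (not the sample app's actual code) of how a Go program could pick up the ConfigMap-driven TABLE_NAME and AWS_REGION values and call DynamoDB with the AWS SDK for Go v2; the longurl attribute name and the hard-coded values are assumptions purely for demonstration:

Go
// Hypothetical sketch: read the env vars injected from the ConfigMap and use them
// with the DynamoDB client. The real application code lives in the linked repo.
package main

import (
	"context"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

func main() {
	table := os.Getenv("TABLE_NAME")  // populated via FieldExport -> ConfigMap
	region := os.Getenv("AWS_REGION") // populated via FieldExport -> ConfigMap

	cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion(region))
	if err != nil {
		log.Fatal(err)
	}
	client := dynamodb.NewFromConfig(cfg)

	// Store a short code to original URL mapping (PutItem).
	_, err = client.PutItem(context.Background(), &dynamodb.PutItemInput{
		TableName: aws.String(table),
		Item: map[string]types.AttributeValue{
			"shortcode": &types.AttributeValueMemberS{Value: "ae1e31a6"},
			"longurl":   &types.AttributeValueMemberS{Value: "https://abhirockzz.github.io/"}, // assumed attribute name
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Look up the original URL by short code (GetItem).
	out, err := client.GetItem(context.Background(), &dynamodb.GetItemInput{
		TableName: aws.String(table),
		Key: map[string]types.AttributeValue{
			"shortcode": &types.AttributeValueMemberS{Value: "ae1e31a6"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("resolved item:", out.Item)
}

Because the credentials come from the IRSA-backed service account and the table name and region come from the ConfigMap, nothing AWS-specific needs to be hard-coded into the Deployment manifest.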
Apply the second manifest:

Shell
kubectl apply -f dist/0001-deployment.k8s.yaml

#output
deployment.apps/dynamodb-app created
service/dynamodb-app-service configured

Check Pod and Service:

Shell
kubectl get pods
kubectl get svc

The entire setup is ready, just like it was earlier, and you can test it the same way. I will not repeat the steps here. Instead, I will move to something more interesting.

Cdk8s Code Walk-Through

The logic is divided into two Charts. I will only focus on key sections of the code; the rest will be omitted for brevity.

DynamoDB and Configuration

We start by defining the DynamoDB table (name it urls) as well as the ConfigMap (note that it does not have any data at this point):

Go
func NewDynamoDBChart(scope constructs.Construct, id string, props *MyChartProps) cdk8s.Chart {
	//...
	table := ddbcrd.NewTable(chart, jsii.String("dynamodb-ack-cdk8s-table"), &ddbcrd.TableProps{
		Spec: &ddbcrd.TableSpec{
			AttributeDefinitions: &[]*ddbcrd.TableSpecAttributeDefinitions{
				{AttributeName: jsii.String(primaryKeyName), AttributeType: jsii.String("S")},
			},
			BillingMode: jsii.String(billingMode),
			TableName:   jsii.String(tableName),
			KeySchema: &[]*ddbcrd.TableSpecKeySchema{
				{AttributeName: jsii.String(primaryKeyName), KeyType: jsii.String(hashKeyType)},
			},
		},
	})
	//...
	cfgMap = cdk8splus22.NewConfigMap(chart, jsii.String("config-map"),
		&cdk8splus22.ConfigMapProps{
			Metadata: &cdk8s.ApiObjectMetadata{
				Name: jsii.String(configMapName),
			},
		})

Then we move on to the FieldExports - one each for the AWS region and the table name. As soon as these are created, the ConfigMap is populated with the required data as per the from and to configuration in the FieldExport.

Go
//...
	fieldExportForTable = servicesk8saws.NewFieldExport(chart, jsii.String("fexp-table"),
		&servicesk8saws.FieldExportProps{
			Metadata: &cdk8s.ApiObjectMetadata{Name: jsii.String(fieldExportNameForTable)},
			Spec: &servicesk8saws.FieldExportSpec{
				From: &servicesk8saws.FieldExportSpecFrom{
					Path: jsii.String(".spec.tableName"),
					Resource: &servicesk8saws.FieldExportSpecFromResource{
						Group: jsii.String("dynamodb.services.k8s.aws"),
						Kind:  jsii.String("Table"),
						Name:  table.Name(),
					},
				},
				To: &servicesk8saws.FieldExportSpecTo{
					Name: cfgMap.Name(),
					Kind: servicesk8saws.FieldExportSpecToKind_CONFIGMAP,
				},
			},
		})

Go
	fieldExportForRegion = servicesk8saws.NewFieldExport(chart, jsii.String("fexp-region"),
		&servicesk8saws.FieldExportProps{
			Metadata: &cdk8s.ApiObjectMetadata{Name: jsii.String(fieldExportNameForRegion)},
			Spec: &servicesk8saws.FieldExportSpec{
				From: &servicesk8saws.FieldExportSpecFrom{
					Path: jsii.String(".status.ackResourceMetadata.region"),
					Resource: &servicesk8saws.FieldExportSpecFromResource{
						Group: jsii.String("dynamodb.services.k8s.aws"),
						Kind:  jsii.String("Table"),
						Name:  table.Name(),
					},
				},
				To: &servicesk8saws.FieldExportSpecTo{
					Name: cfgMap.Name(),
					Kind: servicesk8saws.FieldExportSpecToKind_CONFIGMAP,
				},
			},
		})
//...

The Application Chart

The core of our application is the Deployment itself:

Go
func NewDeploymentChart(scope constructs.Construct, id string, props *MyChartProps) cdk8s.Chart {
	//...
	dep := cdk8splus22.NewDeployment(chart, jsii.String("dynamodb-app-deployment"),
		&cdk8splus22.DeploymentProps{
			Metadata: &cdk8s.ApiObjectMetadata{
				Name: jsii.String("dynamodb-app")},
			ServiceAccount: cdk8splus22.ServiceAccount_FromServiceAccountName(
				chart,
				jsii.String("aws-irsa"),
				jsii.String(serviceAccountName)),
		})

The next important part is the container and its configuration.
We specify the ECR image repository along with the environment variables — they reference the ConfigMap we defined in the previous chart (everything is connected!):

Go
//...
	container := dep.AddContainer(
		&cdk8splus22.ContainerProps{
			Name:  jsii.String("dynamodb-app-container"),
			Image: jsii.String(image),
			Port:  jsii.Number(appPort)})

Go
	container.Env().AddVariable(jsii.String("TABLE_NAME"),
		cdk8splus22.EnvValue_FromConfigMap(
			cfgMap,
			jsii.String("default."+*fieldExportForTable.Name()),
			&cdk8splus22.EnvValueFromConfigMapOptions{Optional: jsii.Bool(false)}))

Go
	container.Env().AddVariable(jsii.String("AWS_REGION"),
		cdk8splus22.EnvValue_FromConfigMap(
			cfgMap,
			jsii.String("default."+*fieldExportForRegion.Name()),
			&cdk8splus22.EnvValueFromConfigMapOptions{Optional: jsii.Bool(false)}))

Finally, we define the Service (type LoadBalancer), which enables external application access, and tie it all together in the main function:

Go
//...
	dep.ExposeViaService(
		&cdk8splus22.DeploymentExposeViaServiceOptions{
			Name:        jsii.String("dynamodb-app-service"),
			ServiceType: cdk8splus22.ServiceType_LOAD_BALANCER,
			Ports: &[]*cdk8splus22.ServicePort{
				{Protocol:   cdk8splus22.Protocol_TCP,
					Port:       jsii.Number(lbPort),
					TargetPort: jsii.Number(appPort)},
			},
		})
//...

Go
func main() {
	app := cdk8s.NewApp(nil)

	dynamodDB := NewDynamoDBChart(app, "dynamodb", nil)
	deployment := NewDeploymentChart(app, "deployment", nil)
	deployment.AddDependency(dynamodDB)

	app.Synth()
}

That’s all I have for you in this blog!

Don’t Forget To Delete Resources...

Shell
# delete DynamoDB table, Deployment, Service etc.
kubectl delete -f dist/

# to uninstall the ACK controller
export SERVICE=dynamodb
helm uninstall -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller

# delete the EKS cluster. if created via eksctl:
eksctl delete cluster --name <enter name of eks cluster>

Wrap Up...

AWS Controllers for Kubernetes help bridge the gap between traditional Kubernetes resources and AWS services by allowing you to manage both from a single control plane. In this blog, you saw how to do this in the context of DynamoDB and a URL shortener application (deployed to Kubernetes). I encourage you to try out other AWS services that ACK supports - here is a complete list.

The approach presented here will work well if you just want to use cdk8s. However, depending on your requirements, there is another way to do this by bringing AWS CDK into the picture. I want to pause right here since this is something I might cover in a future blog post. Until then, Happy Building!

By Abhishek Gupta CORE
Use CDK To Deploy a Complete Solution With MSK Serverless, App Runner, EKS, and DynamoDB

A previous post covered how to deploy a Go Lambda function and trigger it in response to events sent to a topic in an MSK Serverless cluster. This blog will take it a notch further.

The solution consists of an MSK Serverless cluster, a producer application on AWS App Runner, and a consumer application in Kubernetes (EKS) persisting data to DynamoDB. The core components (MSK cluster, EKS, and DynamoDB) and the producer application will be provisioned using Infrastructure-as-code with AWS CDK. Since the consumer application on EKS will interact with both MSK and DynamoDB, you will also need to configure appropriate IAM roles.

All the components in this solution have been written in Go. The MSK producer and consumer apps use the franz-go library (it also supports MSK IAM authentication). The CDK stacks have been written using the CDK Go library.

Prerequisites

You will need the following:

- An AWS account
- AWS CDK, AWS CLI, Docker, eksctl, and curl installed

Use CDK to Provision MSK, EKS, and DynamoDB

AWS CDK is a framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. The AWS CDK lets you build reliable, scalable, cost-effective applications in the cloud with the considerable expressive power of a programming language.

All the code and config are present in this GitHub repo. Clone the GitHub repo and change to the right directory:

git clone https://github.com/abhirockzz/msk-cdk-apprunner-eks-dynamodb
cd msk-cdk-apprunner-eks-dynamodb/cdk

Deploy the first CDK stack:

cdk deploy MSKDynamoDBEKSInfraStack

Wait for all the components to get provisioned, including the MSK Serverless cluster, EKS cluster, and DynamoDB. You can check its progress in the AWS CloudFormation console. You can take a look at the CDK stack code here.

Deploy MSK Producer Application to App Runner Using CDK

Deploy the second CDK stack. Note that in addition to deploying the producer application to App Runner, it also builds and uploads the consumer application Docker image to an ECR repository. Make sure to enter the MSK Serverless broker endpoint URL.

export MSK_BROKER=<enter endpoint>
export MSK_TOPIC=test-topic
cdk deploy AppRunnerServiceStack

Wait for the producer application to get deployed to App Runner. You can check its progress in the AWS CloudFormation console. You can take a look at the CDK stack code and the producer application.

Once complete, make a note of the App Runner application public endpoint as well as the ECR repository for the consumer application. You should see these in the stack output as such:

Outputs:
AppRunnerServiceStack.AppURL = <app URL>
AppRunnerServiceStack.ConsumerAppDockerImage = <ecr docker image>
....

Now, you can verify that the application is functioning properly. Get the publicly accessible URL for the App Runner application and invoke it using curl. This will create the MSK topic and send the data specified in the HTTP POST body.

export APP_RUNNER_URL=<enter app runner URL>
curl -i -X POST -d '{"email":"user1@foo.com","name":"user1"}' $APP_RUNNER_URL

Now you can deploy the consumer application to the EKS cluster. Before that, execute the steps to configure appropriate permissions for the application to interact with MSK and DynamoDB.

Configure IRSA for Consumer Application

Applications in a pod's containers can use an AWS SDK or the AWS CLI to make API requests to AWS services using AWS Identity and Access Management (IAM) permissions. Applications must sign their AWS API requests with AWS credentials.
IAM roles for service accounts provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance's role, you associate an IAM role with a Kubernetes service account and configure your pods to use the service account.

Exit the cdk directory and change to the root of the project:

cd ..

Create an IAM OIDC Identity Provider for Your Cluster With eksctl

export EKS_CLUSTER_NAME=<EKS cluster name>
oidc_id=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id
eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --approve

Define IAM Roles for the Application

Configure IAM Roles for Service Accounts (also known as IRSA). Refer to the documentation for details.

Start by creating a Kubernetes Service Account:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-app-sa
EOF

To verify:

kubectl get serviceaccount/eks-app-sa -o yaml

Set your AWS Account ID and OIDC Identity provider as environment variables:

ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export AWS_REGION=<enter region e.g. us-east-1>
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

Create a JSON file with Trusted Entities for the role:

read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com",
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:eks-app-sa"
        }
      }
    }
  ]
}
EOF
echo "${TRUST_RELATIONSHIP}" > trust.json

To verify:

cat trust.json

Now, create the IAM role:

export ROLE_NAME=msk-consumer-app-irsa

aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://trust.json --description "IRSA for MSK consumer app on EKS"

You will need to create and attach a policy to the role, since we only want the consumer application to consume data from the MSK cluster and put data into the DynamoDB table. This needs to be fine-grained. In the policy.json file, replace the values for the MSK cluster and DynamoDB.

Create and attach the policy to the role you just created:

aws iam create-policy --policy-name msk-consumer-app-policy --policy-document file://policy.json

aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn=arn:aws:iam::$ACCOUNT_ID:policy/msk-consumer-app-policy

Finally, associate the IAM role with the Kubernetes Service Account that you created earlier:

kubectl annotate serviceaccount -n default eks-app-sa eks.amazonaws.com/role-arn=arn:aws:iam::$ACCOUNT_ID:role/$ROLE_NAME

#confirm
kubectl get serviceaccount/eks-app-sa -o yaml

Deploy MSK Consumer Application to EKS

You can refer to the consumer application code here. Make sure to update the consumer application manifest (app-iam.yaml) with the MSK cluster endpoint and ECR image (obtained from the stack output).
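Before deploying it, it may help to see the general shape of such a consumer. The sketch below is a hedged illustration only, not the repository's actual code: the consumer group name, environment variable names, and error handling are assumptions, and the DynamoDB PutItem call is left out. It follows the pattern franz-go documents for MSK IAM authentication, with credentials resolved through the standard AWS SDK chain (which is exactly what IRSA feeds inside the pod):

Go
// Hypothetical sketch of an MSK IAM consumer loop using franz-go.
package main

import (
	"context"
	"crypto/tls"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/twmb/franz-go/pkg/kgo"
	awssasl "github.com/twmb/franz-go/pkg/sasl/aws"
)

func main() {
	ctx := context.Background()

	// Standard AWS credential chain; inside the pod this resolves to the IRSA web identity token.
	awsCfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client, err := kgo.NewClient(
		kgo.SeedBrokers(os.Getenv("MSK_BROKER")),
		kgo.ConsumeTopics(os.Getenv("MSK_TOPIC")),
		kgo.ConsumerGroup("msk-consumer-group"), // assumed group name
		kgo.DialTLSConfig(&tls.Config{MinVersion: tls.VersionTLS12}),
		kgo.SASL(awssasl.ManagedStreamingIAM(func(ctx context.Context) (awssasl.Auth, error) {
			creds, err := awsCfg.Credentials.Retrieve(ctx)
			if err != nil {
				return awssasl.Auth{}, err
			}
			return awssasl.Auth{
				AccessKey:    creds.AccessKeyID,
				SecretKey:    creds.SecretAccessKey,
				SessionToken: creds.SessionToken,
			}, nil
		})),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	for {
		fetches := client.PollFetches(ctx)
		if errs := fetches.Errors(); len(errs) > 0 {
			log.Printf("fetch errors: %v", errs)
		}
		fetches.EachRecord(func(r *kgo.Record) {
			// Persist r.Value to DynamoDB here (PutItem) - omitted in this sketch.
			log.Printf("received record from topic %s: %s", r.Topic, string(r.Value))
		})
	}
}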
kubectl apply -f msk-consumer/app-iam.yaml

# verify Pods
kubectl get pods -l=app=msk-iam-consumer-app

Verify End-To-End Solution

Continue to send records using the App Runner producer application:

export APP_RUNNER_URL=<enter app runner URL>

curl -i -X POST -d '{"email":"user2@foo.com","name":"user2"}' $APP_RUNNER_URL
curl -i -X POST -d '{"email":"user3@foo.com","name":"user3"}' $APP_RUNNER_URL
curl -i -X POST -d '{"email":"user4@foo.com","name":"user4"}' $APP_RUNNER_URL

Check the consumer app logs on EKS to verify:

kubectl logs -f $(kubectl get pods -l=app=msk-iam-consumer-app -o jsonpath='{.items[0].metadata.name}')

Scale-Out Consumer App

The MSK topic created by the producer application has three partitions, so we can have a maximum of three consumer instances. Scale out to three replicas:

kubectl scale deployment/msk-iam-consumer-app --replicas=3

Verify the number of Pods and check the logs for each of them. Notice how the data consumption is balanced across the three instances.

kubectl get pods -l=app=msk-iam-consumer-app

Conclusion

You were able to deploy the end-to-end application using CDK. This comprised a producer on App Runner sending data to an MSK Serverless cluster and a consumer on EKS persisting data to DynamoDB. All the components were written using the Go programming language!

By Abhishek Gupta CORE
Getting Started With MSK Serverless and AWS Lambda Using Go

In this post, you will learn how to deploy a Go Lambda function and trigger it in response to events sent to a topic in an MSK Serverless cluster. The following topics have been covered:

- How to use the franz-go Go Kafka client to connect to MSK Serverless using IAM authentication
- How to write a Go Lambda function to process data in an MSK topic
- How to create the infrastructure: VPC, subnets, MSK cluster, Cloud9, etc.
- How to configure Lambda and Cloud9 to access MSK using IAM roles and fine-grained permissions

MSK Serverless is a cluster type for Amazon MSK that makes it possible for you to run Apache Kafka without having to manage and scale cluster capacity. It automatically provisions and scales capacity while managing the partitions in your topic, so you can stream data without thinking about right-sizing or scaling clusters. Consider using a serverless cluster if your applications need on-demand streaming capacity that scales up and down automatically. - MSK Serverless Developer Guide

Prerequisites

You will need an AWS account, the AWS CLI installed, and a recent version of Go (1.18 or above).

Clone this GitHub repository and change to the right directory:

git clone https://github.com/abhirockzz/lambda-msk-serverless-trigger-golang
cd lambda-msk-serverless-trigger-golang

Infrastructure Setup

AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; CloudFormation handles that. - AWS CloudFormation User Guide

Create VPC and Other Resources

Use a CloudFormation template for this:

aws cloudformation create-stack --stack-name msk-vpc-stack --template-body file://template.yaml

Wait for the stack creation to complete before proceeding to other steps.

Create MSK Serverless Cluster

Use the AWS Console to create the cluster. Configure the VPC and private subnets created in the previous step.

Create an AWS Cloud9 Instance

Make sure it is in the same VPC as the MSK Serverless cluster and choose the public subnet that you created earlier.

Configure MSK Cluster Security Group

After the Cloud9 instance is created, edit the MSK cluster security group to allow access from the Cloud9 instance.

Configure Cloud9 To Send Data to MSK Serverless Cluster

The code that we run from Cloud9 is going to produce data to the MSK Serverless cluster, so we need to ensure that it has the right privileges. For this, we need to create an IAM role and attach the required permissions policy.

aws iam create-role --role-name Cloud9MSKRole --assume-role-policy-document file://ec2-trust-policy.json

Before creating the policy, update the msk-producer-policy.json file to reflect the required details, including the MSK cluster ARN etc.

aws iam put-role-policy --role-name Cloud9MSKRole --policy-name MSKProducerPolicy --policy-document file://msk-producer-policy.json

Attach the IAM role to the Cloud9 EC2 instance.

Send Data to MSK Serverless Using Producer Application

Log into the Cloud9 instance and run the producer application (it is a Docker image) from a terminal.
export MSK_BROKER=<enter the MSK Serverless endpoint>
export MSK_TOPIC=test-topic

docker run -p 8080:8080 -e MSK_BROKER=$MSK_BROKER -e MSK_TOPIC=$MSK_TOPIC public.ecr.aws/l0r2y6t0/msk-producer-app

The application exposes a REST API endpoint using which you can send data to MSK.

curl -i -X POST -d 'test event 1' http://localhost:8080

This will create the specified topic (since it was missing, to begin with) and also send the data to MSK.

Now that the cluster and producer applications are ready, we can move on to the consumer. Instead of creating a traditional consumer, we will deploy a Lambda function that will be automatically invoked in response to data being sent to the topic in MSK.

Configure and Deploy the Lambda Function

Create Lambda Execution IAM Role and Attach the Policy

A Lambda function's execution role is an AWS Identity and Access Management (IAM) role that grants the function permission to access AWS services and resources. When you invoke your function, Lambda automatically provides your function with temporary credentials by assuming this role. You don't have to call sts:AssumeRole in your function code.

aws iam create-role --role-name LambdaMSKRole --assume-role-policy-document file://lambda-trust-policy.json

aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaMSKExecutionRole --role-name LambdaMSKRole

Before creating the policy, update the msk-consumer-policy.json file to reflect the required details including MSK cluster ARN etc.

aws iam put-role-policy --role-name LambdaMSKRole --policy-name MSKConsumerPolicy --policy-document file://msk-consumer-policy.json

Build and Deploy the Go Function and Create a Zip File

Build and zip the function code:

GOOS=linux go build -o app
zip func.zip app

Deploy to Lambda:

export LAMBDA_ROLE_ARN=<enter the ARN of the LambdaMSKRole created above e.g. arn:aws:iam::<your AWS account ID>:role/LambdaMSKRole>

aws lambda create-function \
    --function-name msk-consumer-function \
    --runtime go1.x \
    --zip-file fileb://func.zip \
    --handler app \
    --role $LAMBDA_ROLE_ARN

Lambda VPC Configuration

Make sure you choose the same VPC and private subnets as the MSK cluster. Also, select the same security group ID as MSK (for convenience). If you select a different one, make sure to update the MSK security group to add an inbound rule (for port 9098), just like you did for the Cloud9 instance in an earlier step.

Configure the MSK Trigger for the Function

When Amazon MSK is used as an event source, Lambda internally polls for new messages from the event source and then synchronously invokes the target Lambda function. Lambda reads the messages in batches and provides these to your function as an event payload. The maximum batch size is configurable (the default is 100 messages). Lambda reads the messages sequentially for each partition. After Lambda processes each batch, it commits the offsets of the messages in that batch. If your function returns an error for any of the messages in a batch, Lambda retries the whole batch of messages until processing succeeds or the messages expire.

Lambda sends the batch of messages in the event parameter when it invokes your function. The event payload contains an array of messages. Each array item contains details of the Amazon MSK topic and partition identifier, together with a timestamp and a base64-encoded message.

Make sure to choose the right MSK Serverless cluster and enter the correct topic name.
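To make the payload description above concrete, a Go handler for this trigger could be shaped roughly like the sketch below. This is a hedged illustration rather than the repository's actual function; it uses the events.KafkaEvent type from the aws-lambda-go library, and the logging and error handling are assumptions:

Go
// Hypothetical sketch of a Lambda handler for an MSK trigger.
package main

import (
	"context"
	"encoding/base64"
	"fmt"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context, event events.KafkaEvent) error {
	// Records are grouped under "topic-partition" keys; each record value is base64-encoded.
	for key, records := range event.Records {
		for _, record := range records {
			payload, err := base64.StdEncoding.DecodeString(record.Value)
			if err != nil {
				return fmt.Errorf("failed to decode record from %s: %w", key, err)
			}
			log.Printf("topic=%s partition=%d offset=%d payload=%s",
				record.Topic, record.Partition, record.Offset, string(payload))
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}

Returning an error from the handler causes Lambda to retry the whole batch, which matches the retry behavior described above.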
Verify the Integration

Go back to the Cloud9 terminal and send more data using the producer application. I used a handy JSON utility called jo (sudo yum install jo).

APP_URL=http://localhost:8080

for i in {1..5}; do jo email=user${i}@foo.com name=user${i} | curl -i -X POST -d @- $APP_URL; done

In the Lambda function logs, you should see the messages that you sent.

Conclusion

You were able to set up, configure and deploy a Go Lambda function and trigger it in response to events sent to a topic in an MSK Serverless cluster!

By Abhishek Gupta CORE
Modernize and Gradually Migrate Your Data Model From SQL to NoSQL

This article was authored by AWS Sr. Specialist SA Alexander Schueren and published with permission.

We all like to build new systems, but there are too many business-critical systems that need to be improved. Thus, constantly evolving system architecture remains a major challenge for engineering teams. Decomposing the monolith is not a new topic. Strategies and techniques like domain-driven design and strangler patterns shaped the industry practice of modernization. NoSQL databases became popular for modernization projects. Better performance, flexible schemas, and cost-effectiveness are key reasons for adoption. They scale better and are more resilient than traditional SQL databases. Using a managed solution and reducing operational overhead is a big plus. But moving data is different: it’s messy and there are many unknowns. How do you design the schema, keep the data consistent, handle failures, or roll back? In this article, we will discuss two strategies that can help transition from SQL to NoSQL more smoothly: change data capture and dual-writes.

Continuous Data Migration

With agile software development, we now ship small batches of features every week instead of having deployment events twice a year, followed by fragile hotfixes and rollbacks. However, with data migrations, there is a tendency to migrate all the data at once. Well, most of the data migrations are homogenous (SQL to SQL), so the data structure remains compatible. Thus, many commercial tools can convert the schema and replicate data. But migrating from SQL to NoSQL is different. It requires an in-depth analysis of the use case and the access pattern to design a new data model. Once we have it, the challenge is to migrate data continuously and catch and recover from failures. What if we can migrate a single customer record, or ten customers from a specific region, or a specific product category? To avoid downtime, we can migrate the data continuously by applying the migration mechanism to a small subset of data. Over time we gain confidence, refine the mechanism, and expand to a larger dataset. This will ensure stability, and we can also capitalize on the better performance or lower cost much earlier.

Change Data Capture

Change data capture (CDC) is a well-established and widely used method. Most relational database management systems (RDBMS) have an internal storage mechanism to collect data changes, often called transaction logs. Whenever you write, update, or delete data, the system captures this information. This is useful if you want to roll back to a previous state, move back in time, or replicate data. We can hook into the transaction log and forward the data changes to another system.

When moving data from SQL to AWS database services, such as Amazon RDS, AWS Database Migration Service (AWS DMS) is a popular choice. In combination with the schema conversion tool, you can move from Microsoft SQL Server or Oracle to an open-source database, such as PostgreSQL or MySQL. But with DMS, we can also move from SQL to NoSQL databases, such as Amazon DynamoDB, S3, Neptune, Kinesis, Kafka, OpenSearch, Redis, and many others.

Here is how it works:

1. Define the source and the target endpoints with the right set of permissions for read and write operations.
2. Create a task definition specifying the CDC migration process.
3. Add a table mapping with the rule type object-mapping to specify the partition key and attributes for your DynamoDB table.
Here is an example of a mapping rule in AWS DMS:

{
  "rules": [
    {
      "rule-type": "object-mapping",
      "rule-id": "1",
      "rule-name": "TransformToDDB",
      "object-locator": {
        "schema-name": "source-schema",
        "table-name": "customer"
      },
      "rule-action": "map-record-to-record",
      "target-table-name": "customer",
      "mapping-parameters": [
        {
          "partition-key-name": "CustomerName",
          "attribute-type": "scalar",
          "attribute-sub-type": "string",
          "value": "${FIRST_NAME},${LAST_NAME}"
        },
        {
          "target-attribute-name": "ContactDetails",
          "attribute-type": "document",
          "attribute-sub-type": "dynamodb-map",
          "value": "..."
        }
      ]
    }
  ]
}

This mapping rule will copy the data from the customer table, combine FIRST_NAME and LAST_NAME into a composite hash key, and add a ContactDetails column with a DynamoDB map structure. For more information, you can see other object-mapping examples in the documentation.

One of the major advantages of using CDC is that it allows for atomic data changes. All the changes made to a database, such as inserts, updates, and deletes, are captured as a single transaction. This ensures that the data replication is consistent, and with a transaction rollback, CDC will propagate these changes to the new system as well. Another advantage of CDC is that it does not require any application code changes. There might be situations when the engineering team can’t change the legacy code easily; for example, with a slow release process or a lack of tests to ensure stability. Many database engines support CDC, including MySQL, Oracle, SQL Server, and more. This means you don’t have to write a custom solution to read the transaction logs. Finally, with AWS DMS, you can scale your replication instances to handle more data volume, again without additional code changes.

AWS DMS and CDC are useful for database replication and migration but have some drawbacks. The major concern is the higher complexity and costs to set up and manage a replication system. You will spend some time fine-tuning the DMS configuration parameters to get the best performance. It also requires a good understanding of the underlying databases, and it’s challenging to troubleshoot errors or performance issues, especially for those who are not familiar with the subtle details of the database engine, replication mechanism, and transaction logs.

Dual Writes

Dual writes is another popular approach to migrate data continuously. The idea is to write the data to both systems in parallel in your application code. Once the data is fully replicated, we switch over to the new system entirely. This ensures that data is available in the new system before the cutover, and it also keeps the door open to fall back to the old system. With dual writes, we operate on the application level, as opposed to the database level with CDC; thus, we use more compute resources and need a robust delivery process to change and release code.

Here is how it works:

1. Applications continue to write data to the existing SQL-based system as they would.
2. A separate process, often called a "dual-writer," gets a copy of the data that has been written to the SQL-based system and writes it to DynamoDB after the transaction.
3. The dual-writer ensures we write the data to both systems in the same format and with the same constraints, such as unique key constraints.
4. Once the dual-write process is complete, we switch over to read from and write to the DynamoDB system.

We can control the data migration and apply dual writes to some data by using feature flags.
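To make the mechanism concrete, here is a minimal, hypothetical dual-writer sketch in Go. The type names, table schema, SQL placeholder syntax, and flag check are assumptions for illustration and are not from the article; the point is simply that the SQL write stays authoritative, the DynamoDB copy happens after it, and failures are recorded for later reconciliation rather than surfaced to the user:

Go
// Hypothetical dual-writer sketch: write to SQL first, then conditionally copy to DynamoDB.
package dualwrite

import (
	"context"
	"database/sql"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

type Customer struct {
	Email string
	Name  string
}

type DualWriter struct {
	db        *sql.DB
	ddb       *dynamodb.Client
	tableName string
	// enabled decides whether a record is also written to DynamoDB,
	// e.g. backed by a feature flag keyed on region or customer segment.
	enabled func(c Customer) bool
}

func (w *DualWriter) SaveCustomer(ctx context.Context, c Customer) error {
	// 1. Write to the existing SQL system as before; it remains the source of truth.
	//    (Placeholder syntax depends on the SQL driver in use.)
	if _, err := w.db.ExecContext(ctx,
		"INSERT INTO customer (email, name) VALUES ($1, $2)", c.Email, c.Name); err != nil {
		return err
	}

	// 2. Optionally copy the record to DynamoDB after the transaction.
	if !w.enabled(c) {
		return nil
	}
	if _, err := w.ddb.PutItem(ctx, &dynamodb.PutItemInput{
		TableName: aws.String(w.tableName),
		Item: map[string]types.AttributeValue{
			"email": &types.AttributeValueMemberS{Value: c.Email},
			"name":  &types.AttributeValueMemberS{Value: c.Name},
		},
	}); err != nil {
		// Do not fail the user request; record the miss so it can be reconciled later.
		log.Printf("dual-write to DynamoDB failed for %s: %v", c.Email, err)
	}
	return nil
}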
For example, we can toggle the data replication or apply it only to a specific subset. This can be a geographical region, customer size, product type, or a single customer. Because dual writes are instrumented on the application level, we don’t run queries against the database directly. We work on the object level in our code. This allows us to have additional transformation, validation, or enrichment of the data.

But there are also downsides: code complexity, consistency, and failure handling. Using feature flags helps to control the flow, but we still need to write code, add tests, deploy changes, and have a feature flag store. If you are already using feature flags, this might be negligible; otherwise, it's a good chance to introduce feature flags to your system. Data consistency and failure handling are the primary beasts to tame. Because we copy data after the database transaction, there might be cases of rollbacks, and with dual writes, you can miss this case. To counter that, you’d need to collect operational and business metrics to keep track of read and write operations to each system, which will increase confidence over time.

Conclusion

Modernization is unavoidable and improving existing systems will become more common in the future. Over the years, we have learned how to decouple monolithic systems, and with many NoSQL database solutions, we can improve products with better performance and lower costs. CDC and dual-writes are solid mechanisms for migrating data models from SQL to NoSQL. While CDC is more database and infrastructure-heavy, with dual-writes we operate on a code level with more control over data segmentation, but with higher code complexity. Thus, it is crucial to understand the use case and the requirements when deciding which mechanism to implement. Moving data continuously between systems should not be difficult, so we need to invest more and learn how to adjust our data model more easily and securely. Chances are high that this is not the last re-architecting initiative you are doing, and building these capabilities will be useful in the future.

Do You Like Video Tutorials?

If you are more of a visual person who likes to follow the steps to build something with a nice recorded tutorial, then I would encourage you to subscribe to the Build On AWS YouTube channel. This is the place you should go to dive deep into code and architecture, with people from the AWS community sharing their expertise in a visual and entertaining manner.

By Rashmi Nambiar

The Latest AWS Cloud Topics

ChatAWS: Deploy AWS Resources Seamlessly With ChatGPT
Explore ChatAWS, a ChatGPT plugin simplifying AWS deployments. Create Lambda functions and websites effortlessly through chat, making AWS more accessible.
Updated April 28, 2023
by Banjo Obayomi
· 8,650 Views · 6 Likes
Architecture for Building a Serverless Data Pipeline Using AWS
This article will walk you through architecture patterns that can be used to process data in batches and in near-real time depending on the use case.
Updated April 14, 2023
by Zaiba Jamadar
· 3,404 Views · 5 Likes
Build a Managed Analytics Platform for an E-Commerce Business on AWS (Part 1)
Learn how to build a complete analytics platform in batch and real-time mode. The real-time analytics pipeline also shows how to detect DDoS and bot attacks.
Updated March 3, 2023
by Suman Debnath
· 1,769 Views · 3 Likes
Modernize and Gradually Migrate Your Data Model From SQL to NoSQL
This article explores two strategies that can help transition from SQL to NoSQL more smoothly: change data capture and dual-writes.
Updated February 28, 2023
by Rashmi Nambiar
· 2,311 Views · 1 Like
Detect and Resolve Biases in Artificial Intelligence [Video]
In this presentation from re:Invent 2022 Las Vegas, learn more about the problem of biases, but also how to detect and fight them with cloud-agnostic solutions.
Updated February 27, 2023
by Rashmi Nambiar
· 1,977 Views · 1 Like
Monitor and Predict Health Data Using AWS AI Services
Explore how to build a serverless personal health solution leveraging AWS AI services to provide insight extraction, monitoring, and forecasting.
Updated February 27, 2023
by Rashmi Nambiar
· 1,808 Views · 1 Like
Architect Data Analytics Engine for Your Business Using AWS and Serverless
This article shows the approach to architectural design that you can use at your organization to start the data journey and quickly get results.
Updated February 21, 2023
by Rashmi Nambiar
· 3,527 Views · 2 Likes
Automate and Manage AWS Outposts Capacity Across Multi-Account AWS Setup [Video]
See a unique solution implemented at Morningstar by the Cloud services team to centrally track and manage Outposts capacity across multiple AWS accounts.
Updated February 21, 2023
by Rashmi Nambiar
· 2,378 Views · 1 Like
Unleash Developer Productivity With Infrastructure From Code [Video]
Learn about a new category of developer tools that is emerging that challenges the primitive-centric approach to building modern cloud applications.
Updated February 21, 2023
by Rashmi Nambiar
· 2,717 Views · 1 Like
Bringing Software Engineering Rigor to Data [Video]
Leverage software engineering practices for data engineering and learn to measure key performance metrics to help build more robust and reliable data pipelines.
Updated February 20, 2023
by Rashmi Nambiar
· 2,622 Views · 1 Like
Competition of the Modern Workloads: Serverless vs Kubernetes on AWS [Video]
In this video session, explore a “battle” between the serverless and the Kubernetes approach examining use cases and insights from each speaker's experience.
Updated February 17, 2023
by Rashmi Nambiar
· 4,596 Views · 1 Like
17 Open Source Projects at AWS Written in Rust
Have you found yourself asking, "Is Rust actually useful for me?" Here, find some examples of where it’s been useful.
Updated February 16, 2023
by Rashmi Nambiar
· 8,280 Views · 2 Likes
Decode User Requirements to Design Well-Architected Applications
It's time to foster a space of collaboration and a shared mission: a space that puts technologists and business people on the same team. Learn how in this post.
Updated February 16, 2023
by Rashmi Nambiar
· 3,977 Views · 1 Like
Lifting and Shifting a Web Application to AWS Serverless
This post provides a guide on how to migrate a MERN (Mongo, Express React, and Node.js) web application to a serverless environment.
Updated February 15, 2023
by Rashmi Nambiar
· 3,191 Views · 1 Like
Improving Performance of Serverless Java Applications on AWS
Learn the basics behind how Lambda execution environments operate and different ways to improve the startup time and performance of Java applications on Lambda.
Updated February 15, 2023
by Rashmi Nambiar
· 3,975 Views · 1 Like
Building Your Own Apache Kafka Connectors
Building connectors for Apache Kafka following a complete code example.
Updated February 11, 2023
by Ricardo Ferreira
· 4,659 Views · 1 Like
How To Train, Evaluate, and Deploy a Hugging Face Model
Learn the end-to-end deployment of Hugging Face models.
Updated January 5, 2023
by Banjo Obayomi
· 4,493 Views · 1 Like
DynamoDB Go SDK: How To Use the Scan and Batch Operations Efficiently
The DynamoDB Scan API accesses every item in a table (or secondary index). In this article, learn how to use Scan API with the DynamoDB Go SDK.
Updated December 16, 2022
by Abhishek Gupta CORE
· 10,015 Views · 2 Likes
Deploying MWAA Using Terraform
How to use HashiCorp's open source Infrastructure as Code tool Terraform to configure and deploy your Managed Workflows for Apache Airflow environments.
September 30, 2022
by Ricardo Sueiras
· 8,574 Views · 2 Likes
Deploying MWAA Using AWS CDK
Learn how to use a Python AWS CDK application to configure and deploy your Apache Airflow environments using MWAA in a repeatable and consistent way.
Updated July 24, 2022
by Ricardo Sueiras
· 6,059 Views · 2 Likes