I recently came across this blog post from Ruud van Asseldonk titled “The YAML Document From Hell.” I’ve always heard that YAML has its pitfalls, but hadn’t looked into the details and thankfully hadn’t been affected, mainly due to my very infrequent and simple use of YAML. If you are in the same boat as me, I recommend reading that article now, as I almost can’t believe I’ve avoided any issues with it. The article digs into the issues in the YAML spec itself and then describes what happens in Python’s PyYAML and Go’s YAML library with an example file, the titular YAML document from hell. I wanted to see how things were in the JavaScript ecosystem. YAML in JavaScript A search for JavaScript YAML parsers on npm brings up YAML (which I have used in my own project) and js-yaml. js-yaml has the most weekly downloads according to npm and the most stars on GitHub; however, YAML seems to be under more active development, having been most recently published (a month ago at the time of writing) compared to js-yaml’s last publish date almost 2 years ago. There is also yamljs, but the project hasn’t received a commit since November 2019 and hasn’t had a release in 6 years, so I am going to disregard it for now. Let’s see what YAML and js-yaml do with the YAML document from hell. The Document Itself To save yourself from going back and forth between van Asseldonk’s article and this one, here is the YAML document. server_config: port_mapping: # Expose only ssh and http to the public internet. - 22:22 - 80:80 - 443:443 serve: - /robots.txt - /favicon.ico - *.html - *.png - !.git # Do not expose our Git repository to the entire world. geoblock_regions: # The legal team has not approved distribution in the Nordics yet. - dk - fi - is - no - se flush_cache: on: [push, memory_pressure] priority: background allow_postgres_versions: - 9.5.25 - 9.6.24 - 10.23 - 12.13 So how do our JavaScript libraries handle this file? 
The Failures Anchors, Aliases, and Tags Let’s start with the failures. As described in the original article under the subhead “Anchors, aliases, and tags,” this section is invalid: serve: - /robots.txt - /favicon.ico - *.html - *.png - !.git # Do not expose our Git repository to the entire world. This causes both of our JavaScript YAML libraries to throw an error, both referencing an undefined alias. This is because the * is a way to reference an anchor created earlier in the document using an &. In our document’s case, that anchor was never created, so this is a parsing error. If you want to learn more about anchors and aliases, they seem to be particularly important in build pipelines: both Bitbucket and GitLab have written about how to use anchors to avoid repeating sections in YAML files. For the purposes of trying to get the file to parse, we can make those aliases strings as they were likely intended. serve: - /robots.txt - /favicon.ico - "*.html" - "*.png" - !.git # Do not expose our Git repository to the entire world. Now we get another parsing error from our libraries; both of them complain about an unknown or unresolved tag. The ! at the start of !.git is the character triggering this behavior. Tags seem to be the most complicated part of YAML to me. They depend on the parser you are using and allow that parser to do something custom with the content that follows the tag. My understanding is that you could use this in JavaScript to, say, tag some content to be parsed into a Map instead of an Object or a Set instead of an Array. Van Asseldonk explains this with an alarming sentence: This means that loading an untrusted YAML document is generally unsafe, as it may lead to arbitrary code execution. PyYAML apparently has a safe_load method that will avoid this, but Go’s yaml package doesn’t. It seems that the JavaScript libraries also lack this feature, so the warning for untrusted YAML documents stands. 
If you do want to take advantage of the tag feature in YAML, you can check out the yaml package’s documentation on custom data types or js-yaml’s supported YAML types and unsafe type extensions. To make the YAML file parse, let’s encase all the weird YAML artifacts in quotes to make them strings: serve: - /robots.txt - /favicon.ico - "*.html" - "*.png" - "!.git" # Do not expose our Git repository to the entire world. With the serve block looking like it does above, the file now parses. So what happens to the rest of the potential YAML gotchas? Accidental Numbers One thing that I am gathering from this investigation so far is that if you need something to be a string, do not be ambiguous about it: surround it in quotes. That counted for the aliases and tags above, and it also counts for accidental numbers. In the following section of the YAML file, you see a list of version numbers: allow_postgres_versions: - 9.5.25 - 9.6.24 - 10.23 - 12.13 Version numbers are strings; numbers can’t have more than one decimal point in them. But when this is parsed by either JavaScript library, the result is as follows: allow_postgres_versions: [ '9.5.25', '9.6.24', 10.23, 12.13 ] Now we have an array of strings and numbers. If a YAML parser thinks something looks like a number, it will parse it as such. And when you come to use those values, they might not act as you expect. Version Numbers in GitHub Actions I have had this issue within GitHub Actions before. It was in a Ruby project, but this applies to anyone trying to use version numbers in a GitHub Actions YAML file. I tried to use a list of Ruby version numbers; this worked fine up until Ruby version 3.1 was released. I had 3.0 in the array. Within GitHub Actions, this was parsed as the integer 3. This might seem fine, except that when you give an integer version to GitHub Actions, it picks the latest minor point for that version. So, once Ruby 3.1 was released, the number 3.0 would select version 3.1. 
I had to make the version number a string, "3.0", and then it was applied correctly. Accidental numbers cause issues. If you need a string, make sure you provide a string. The Successes It’s not all bad in the JavaScript world. After working through the issues above, we might now be in the clear. Let’s take a look now at what parsed correctly from this YAML file. Sexagesimal Numbers Under the port mapping section of the YAML file we see: port_mapping: # Expose only ssh and http to the public internet. - 22:22 - 80:80 - 443:443 That 22:22 is dangerous in YAML version 1.1, and PyYAML parses it as a sexagesimal (base 60) number, giving the result of 1342. Thankfully, both JavaScript libraries have implemented YAML 1.2 and 22:22 is parsed correctly as a string in this case. port_mapping: [ '22:22', '80:80', '443:443' ] The Norway Problem In YAML 1.1 no is parsed as false. This is known as “the Norway problem” because listing countries as two-character identifiers is fairly common, and having this YAML: geoblock_regions: - dk - fi - is - no - se parsed into this JavaScript: geoblock_regions: [ 'dk', 'fi', 'is', false, 'se' ] is just not helpful. The good news is that, unlike Go’s YAML library, both JavaScript libraries have implemented YAML 1.2 and dropped no as an alternative for false. The geoblock_regions section is successfully parsed as follows: geoblock_regions: [ 'dk', 'fi', 'is', 'no', 'se' ] Non-String Keys You might believe that keys in YAML would be parsed as strings, like in JSON. However, they can be any value. Once again, there are values that may trip you up. Much like with the Norway problem in which yes and no can be parsed as true and false, the same goes for on and off. This is manifested in our YAML file in the flush_cache section: flush_cache: on: [push, memory_pressure] priority: background Here the key is on, but in some libraries it is parsed as a boolean. In Python, even more confusingly, the boolean is then stringified and appears as the key "True". 
Thankfully, this is handled by the JavaScript libraries and on becomes the key "on". flush_cache: { on: [ 'push', 'memory_pressure' ], priority: 'background' } This is of particular concern in GitHub Actions again, where on is used to determine what events should trigger an Action. I wonder if GitHub had to work around this when implementing their parsing. Parsing as YAML Version 1.1 Many of the issues that our JavaScript libraries sidestep are problems from YAML 1.1, and both libraries have fully implemented YAML 1.2. If you do wish to throw caution to the wind, or you have to parse a YAML file explicitly with YAML 1.1 settings, the YAML library can do that for you. You can pass a second argument to the parse function to tell it to use version 1.1, like so: import { parse } from "yaml"; const yaml = parse(yamlContents, { version: "1.1" }); console.log(yaml); Now you get a result with all of the fun described above: { server_config: { port_mapping: [ 1342, '80:80', '443:443' ], serve: [ '/robots.txt', '/favicon.ico', '*.html', '*.png', '!.git' ], geoblock_regions: [ 'dk', 'fi', 'is', false, 'se' ], flush_cache: { true: [ 'push', 'memory_pressure' ], priority: 'background' }, allow_postgres_versions: [ '9.5.25', '9.6.24', 10.23, 12.13 ] } } Note that in this case I left the aliases and tags quoted as strings so that the file could be parsed successfully. Stick with version 1.2, the default in both JavaScript YAML libraries, and you’ll get a much more sensible result. Isn’t YAML Fun? In this post, we’ve seen that it’s easy to write malformed YAML if you aren’t aware of aliases or tags. It’s also easy to write mixed arrays of strings and numbers. There are also languages and libraries in which YAML 1.1 is still hanging around, where on, yes, off, and no are booleans and some numbers can be parsed as base 60. My advice, after going through all of this, is to err on the side of caution when writing YAML. 
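That 1342 in the port mapping above comes from sexagesimal resolution: in YAML 1.1, 22:22 is read as base-60 digits, i.e. 22 × 60 + 22 = 1342. A quick sketch of that resolution rule (an illustration of the arithmetic, not code from either library):

```javascript
// How a YAML 1.1 parser resolves a sexagesimal scalar like "22:22":
// split on ":" and fold the parts together in base 60.
function parseSexagesimal(scalar) {
  return scalar
    .split(":")
    .map(Number)
    .reduce((total, part) => total * 60 + part, 0);
}

console.log(parseSexagesimal("22:22")); // 1342
console.log(parseSexagesimal("1:00:00")); // 3600
```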
If you want a key or a value to be a string, surround it in quotes and explicitly make it a string. On the other hand, if you are parsing someone else’s YAML, then you will need to program defensively and try to handle the edge cases, like accidental numbers, that can still cause issues. Finally, if you have the option, choose a different format than YAML. YAML is supposed to be human-friendly, but the surprises and the bugs that it can produce are certainly not developer-friendly, and ultimately that defeats the purpose. The conclusion to the original YAML document from hell post suggests many alternatives to YAML that will work better. I can’t help but think that in the world of JavaScript, something JSON-based, but friendlier to author, should be the solution. There is a package that simply strips comments from JSON, or there’s JSON5, a JSON format that aims to be easier to write and maintain by hand. JSON5 supports comments as well as trailing commas, multiline strings, and various number formats. Either of these is a good start if you want to make authoring JSON easier and parsing hand-authored files more consistent. If you can avoid YAML, I recommend it. If you can’t, good luck.
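To illustrate the comment-stripping idea, here is a naive sketch. This is not the actual strip-json-comments implementation; as written, it would mangle string values that happen to contain // or /*, which a real tokenizing implementation handles correctly:

```javascript
// Naive comment stripping before JSON.parse: removes /* block */ comments
// and full-line // comments. A real library tokenizes the input instead
// of using regexes, so it can skip comment markers inside strings.
function parseJsonWithComments(text) {
  const stripped = text
    .replace(/\/\*[\s\S]*?\*\//g, "") // strip block comments
    .replace(/^\s*\/\/.*$/gm, ""); // strip full-line comments
  return JSON.parse(stripped);
}

const config = parseJsonWithComments(`{
  // Version numbers quoted so they stay strings
  "allow_postgres_versions": ["9.5.25", "9.6.24", "10.23", "12.13"]
}`);

console.log(config.allow_postgres_versions);
// [ '9.5.25', '9.6.24', '10.23', '12.13' ]
```

Note that because every value went through JSON's string rules, 10.23 stays a string here, which is exactly the behavior the accidental-numbers section above was missing.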
Parsing is an age-old technique used to analyze and extract meaning from languages (both natural and programming). A parser is a type of compiler that converts a stream of text into a syntax or parse tree that conforms to some predefined grammar rules. There are various classifications to categorize these techniques, and plenty of content is available to explain them. So, for now, I am focusing on Parsing Expression Grammar, one of the more recent developments in parsing research. I will also try to explain the ways to implement a PEG parser. What Is PEG (Parsing Expression Grammar)? Parsing Expression Grammar (PEG) is a formal grammar that defines a set of recursive rules and belongs to the family of top-down parsing languages. Its parsers generate exactly one parse tree for any input. It is more powerful than regular expressions but might have some performance drawbacks related to memory and time in a few scenarios. Advantages of Using PEG Parsers PEG parsers have some advantages over other types of parsers. Most noticeably, they are unambiguous: alternatives are tried in order, and the first match wins. Also, PEG parsing is scannerless, which means it does not require a separate lexing phase. That makes a PEG parser easier to implement for smaller parsing needs than for an enterprise use case that must handle a wide variety of inputs. Understanding the PEG Structure Let's try to understand the PEG structure with the following basic example that can be used for parsing arithmetic expressions. start = additive additive = left:multiplicative "+" right:additive / multiplicative multiplicative = left:primary "*" right:multiplicative / primary primary = integer / "(" additive:additive ")" integer "integer" = digits:[0-9]+ Here, all the rules are recursive and drill down to literals or character classes with regular expressions. 
As we can see, an `additive` expression is an expansion of a `multiplicative` expression, and a `multiplicative` expression expands, via `primary`, to an integer literal or a parenthesized `additive` expression. The integer literal is one or more occurrences of digits. QuickStart The quickest way to write or generate a PEG parser in JavaScript is to use pegjs. It is the most popular (as per GitHub) library for implementing PEG parsers. You can refer to the official documentation for installation instructions. It supports both CLI and API modes for generating the parser. Command Line pegjs -o arithmetics-parser.js arithmetics.pegjs JavaScript API var peg = require("pegjs"); var parser = peg.generate("start = ('a' / 'b')+"); Online Tool Apart from these modes, there is an online mode available that allows you not only to validate your grammar but also to quickly test it with sample inputs. Once you are done with testing, you can generate your parser on the fly as a speed- or size-optimized version. Using the Parser The generated parser can be used in both Node and browser environments. You can call the `parse` method with test input, and it will either return a parse tree or throw an error (for invalid inputs). parser.parse("abba"); // returns ["a", "b", "b", "a"] parser.parse("abcd"); // throws an exception Sample Implementation There are plenty of tools and services that use PEG parsing in one way or another. The most up-to-date and advanced implementation is node-sql-parser (built by Zhi Tao). This is a pool of parsers for various modern query languages for databases like BigQuery, Hive, and Flink.
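To make those grammar rules concrete without a generator, here is a hand-written, PEG-style recursive-descent evaluator for the same arithmetic grammar. It is a simplified sketch (no whitespace handling, and it computes a number instead of building a parse tree), not pegjs output:

```javascript
// A hand-rolled PEG-style parser for:
//   additive       = multiplicative "+" additive / multiplicative
//   multiplicative = primary "*" multiplicative / primary
//   primary        = integer / "(" additive ")"
//   integer        = [0-9]+
function evaluate(input) {
  let pos = 0;

  // integer = [0-9]+
  function integer() {
    const start = pos;
    while (/[0-9]/.test(input[pos])) pos++;
    if (pos === start) throw new SyntaxError(`Expected integer at ${pos}`);
    return Number(input.slice(start, pos));
  }

  // primary = integer / "(" additive ")"
  function primary() {
    if (input[pos] === "(") {
      pos++; // consume "("
      const value = additive();
      if (input[pos] !== ")") throw new SyntaxError(`Expected ")" at ${pos}`);
      pos++; // consume ")"
      return value;
    }
    return integer();
  }

  // multiplicative = primary "*" multiplicative / primary
  function multiplicative() {
    const left = primary();
    if (input[pos] === "*") {
      pos++;
      return left * multiplicative();
    }
    return left;
  }

  // additive = multiplicative "+" additive / multiplicative
  function additive() {
    const left = multiplicative();
    if (input[pos] === "+") {
      pos++;
      return left + additive();
    }
    return left;
  }

  const result = additive();
  if (pos !== input.length) throw new SyntaxError(`Unexpected "${input[pos]}" at ${pos}`);
  return result;
}

console.log(evaluate("2+3*4")); // 14
console.log(evaluate("(2+3)*4")); // 20
```

Each function mirrors one grammar rule, and the ordered choice of PEG shows up as "try the first alternative, fall back to the second" in the code.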
Node.js has been a favorite among serious programmers for the last five years running. Node.js is a free, open-source JavaScript runtime environment that aims to deliver maximum throughput across several platforms. Because of its event-driven, non-blocking I/O approach, Node.js is small in size and quick to process requests, making it an excellent choice for data-intensive, real-time, and distributed applications. Developers are increasingly turning to Node.js application optimization services, so it's important to streamline the process of designing and releasing cross-platform applications. So, let's get into the context of the article. Suggestions for Containerizing and Optimizing Node Apps Here are seven ways of containerizing your Node.js application, so let's have a look at each of them in brief. 1. Use a Specific Base Image Tag Instead of "Version:Latest" Useful tags that convey version information, intended destination (prod or test, for example), stability, or other relevant information for distributing your application across environments should always be included when creating Docker images. Outside of the development environment, you shouldn't depend on the latest tag that Docker automatically downloads. Using the latest version of an image might result in strange or even harmful effects. Suppose you're constantly updating to the most recent version of an image. In that case, eventually, one of those updates is certain to include a brand-new build or untested code that will cause your app to stop functioning as intended. 
Take this example Dockerfile that targets a specific Node image: # Create image based on the official Node image from dockerhub FROM node:lts-buster # Create app directory WORKDIR /usr/src/app # Copy dependency definitions COPY package.json ./package.json COPY package-lock.json ./package-lock.json # Install dependencies #RUN npm set progress=false \ # && npm config set depth 0 \ # && npm i install RUN npm ci # Get all the code needed to run the app COPY . . # Expose the port the app runs in EXPOSE 3000 # Serve the app CMD ["npm", "start"] Instead of using node:latest, you should use the lts-buster Docker image. Considering that lts-buster is a static image, this method may be preferable. 2. Use a Multi-Stage Build One single Docker base image may be used throughout several stages of a build, including compilation, packaging, and unit testing. However, the actual code that executes the program is stored in a different image. As the finished image won't have any development or debugging tools, it'll be more secure and take up less space. In addition, if you use Docker's multi-stage build process, you can be certain that your builds will be both efficient and repeatable. You can create multiple stages within a Dockerfile to control how you build that image. You can containerize your Node application using a multi-layer approach. Different parts of the application, like code, assets, and even snapshot dependencies, may be located in each of the many layers that make up the program. What if we wish to create an independent image of our application? To see an example Dockerfile of this in action, please check the following: FROM node:lts-buster-slim AS development WORKDIR /usr/src/app COPY package.json ./package.json COPY package-lock.json ./package-lock.json RUN npm ci COPY . . 
EXPOSE 3000 CMD [ "npm", "run", "dev" ] FROM development AS dev-envs RUN <<EOF apt-get update apt-get install -y --no-install-recommends git EOF # install Docker tools (cli, buildx, compose) COPY --from=gloursdocker/docker / / CMD [ "npm", "run", "dev" ] We first add an AS development label to the node:lts-buster-slim statement. This lets us refer to this build stage in other build stages. Next, we add a new development stage labeled dev-envs. We'll use this stage to run our development. Now, let's rebuild our image and run our development. To execute just the development build stage, we'll use the same docker build command as before, but this time with the --target dev-envs parameter. docker build -t node-docker --target dev-envs 3. Fix Security Vulnerabilities in Your Node Image In order to create modern services, programmers often use preexisting third-party software. However, it's important to be cautious when integrating third-party software into your project since it may introduce security holes. Using verified image sources and maintaining vigilant container monitoring are both useful security measures. Docker Desktop will prompt you to run security checks on the newly created node:lts-buster-slim Docker image. Let's have a look at our Node.js app with the help of the Snyk Extension for Docker Desktop. Begin by setting up Docker Desktop 4.8.0+ on your Mac, Windows, or Linux PC. Next, select the Allow Docker Extensions checkbox under Settings > Extensions. After that, you can search for Snyk in the Extensions Marketplace by selecting the "Add Extensions" option on the left sidebar. Install the Snyk extension and point it at the node:lts-buster-slim image: type "Node Docker Official Image" into the "Choose image name" box. In order to begin scanning, you will need to log in to Docker Hub. If you don't have an account, don't fret; making one is easy, quick, and completely free. 
With Docker Desktop, the outcome of a scan looks like this: During this scan, Snyk discovered 70 vulnerabilities of varied severity. Once you've identified them, you can start fixing them. Not just that: running the docker scan command against your Dockerfile will also execute a vulnerability scan: 4. Leverage HEALTHCHECK The HEALTHCHECK directive instructs Docker on how to check the health of a container. For example, this may be used to determine whether or not a web server is in an endless loop and unable to accept new connections, even while the server process is still active. # syntax=docker/dockerfile:1.4 FROM node:lts-buster-slim AS development # Create app directory WORKDIR /usr/src/app COPY package.json ./package.json COPY package-lock.json ./package-lock.json RUN npm ci COPY . . EXPOSE 3000 CMD [ "npm", "run", "dev" ] FROM development as dev-envs RUN <<EOF apt-get update apt-get install -y --no-install-recommends git EOF RUN <<EOF useradd -s /bin/bash -m vscode groupadd docker usermod -aG docker vscode EOF HEALTHCHECK CMD curl --fail http://localhost:3000 || exit 1 # install Docker tools (cli, buildx, compose) COPY --from=gloursdocker/docker / / CMD [ "npm", "run", "dev" ] In production, applications are often managed by an orchestrator such as Kubernetes or a service fabric. HEALTHCHECK allows you to inform the orchestrator about the health of your containers, which may be used for configuration-based management. Here's a case in point: backend: container_name: backend restart: always build: backend volumes: - ./backend:/usr/src/app - /usr/src/app/node_modules depends_on: - mongo networks: - express-mongo - react-express expose: - 3000 healthcheck: test: ["CMD", "curl", "-f", "http://localhost:3000"] interval: 1m30s timeout: 10s retries: 3 start_period: 40s 5. Use .dockerignore We suggest creating a .dockerignore file in the same folder as your Dockerfile to improve build times. 
This guide requires a single line in your .dockerignore file: node_modules The node_modules directory, which contains your installed npm dependencies, is excluded from the Docker build context thanks to this line. There are numerous advantages to having a well-organized .dockerignore file, but for the time being, this simple file will suffice. Next, I'll describe the build context and why it's so important. Docker images may be created using the docker build command by combining a Dockerfile and a "context." The context is the directory structure or URL you provide, and any of the files in it may be used in the build process. As a Node developer, your build context is typically a directory on your Mac, Windows, or Linux machine. Everything required to run the program may be found in this folder, including the source code, settings, libraries, and plugins. If you provide a .dockerignore file, Docker uses it to skip over certain parts of your project while creating your new image: code, configuration files, libraries, plugins, etc. For example, if you want to keep the node_modules directory out of your build, you may do so by adding the following to your .dockerignore file. backend frontend 6. Run as a Non-Root User for Security Purposes It is safer to run apps with limited user permissions, since this helps reduce vulnerabilities. The same goes for Docker containers. Docker containers and their contents automatically get root access to the host system. That's why it's recommended to never run Docker containers as the root user. This may be accomplished by including certain USER directives in your Dockerfile. When executing the image and for any future RUN, CMD, or ENTRYPOINT instructions, the USER command specifies the desired user name (or UID) and, optionally, the user group (or GID): FROM node:lts-buster AS development WORKDIR /usr/src/app COPY package.json ./package.json COPY package-lock.json ./package-lock.json RUN npm ci COPY . . 
EXPOSE 3000 CMD ["npm", "start"] FROM development AS dev-envs RUN <<EOF apt-get update apt-get install -y --no-install-recommends git EOF RUN <<EOF useradd -s /bin/bash -m vscode groupadd docker usermod -aG docker vscode EOF # install Docker tools (cli, buildx, compose) COPY --from=gloursdocker/docker / / CMD [ "npm", "start" ] 7. Explore Graceful Shutdown Options for Node Docker containers are temporary spaces for your Node app: they are easy to create, destroy, and then either replace or repurpose. Containers are killed by sending the process the SIGTERM signal. In order to make the most of the brief window between that signal and the final kill, your app must finish processing incoming requests and free up any associated resources without delay. Node.js, for its part, is crucial for a graceful shutdown of your app since it receives signals like SIGINT and SIGTERM from the OS and passes them on to your code. Because of this, your app may choose how to respond to the signals it receives. If you don't handle them, or use a module that does, your app won't terminate properly. Instead, it will continue to run until Docker or Kubernetes terminates it after a timeout. If you're unable to modify your application's code, you may still use the docker run --init flag or the tini init process inside your Dockerfile. It is recommended, however, that you write code to handle signals properly for graceful shutdowns. Conclusion In this tutorial, we covered a wide range of topics related to Docker image optimization, from constructing a solid Dockerfile to using Snyk to check for vulnerabilities. It's not difficult to make better Node.js applications. If you master some basic skills, you'll be in good condition.
Welcome back to this series about uploading files to the web. If you missed the first post, I recommend you check it out because it’s all about uploading files via HTML. The full series will look like this: Upload files With HTML Upload files With JavaScript Receiving File Uploads With Node.js (Nuxt.js) Optimizing Storage Costs With Object Storage Optimizing Delivery With a CDN Securing File Uploads With Malware Scans In this article, we’ll do the same thing using JavaScript. Previous Article Info We left the project off with the form that looks like this: <form action="/api" method="post" enctype="multipart/form-data"> <label for="file">File</label> <input id="file" name="file" type="file" /> <button>Upload</button> </form> In the previous article, we learned that in order to access a file on the user’s device, we had to use an <input> with the “file” type. To create the HTTP request to upload the file, we had to use a <form> element. When dealing with JavaScript, the first part is still true. We still need the file input to access the files on the device. However, browsers have a Fetch API we can use to make HTTP requests without forms. I still like to include a form because: Progressive enhancement: If JavaScript fails for whatever reason, the HTML form will still work. I’m lazy: The form will actually make my work easier later on, as we’ll see. With that in mind, for JavaScript to submit this form, I’ll set up a “submit” event handler: const form = document.querySelector('form'); form.addEventListener('submit', handleSubmit); /** @param {Event} event */ function handleSubmit(event) { // The rest of the logic will go here. } handleSubmit Function Throughout the rest of this article, we’ll only be looking at the logic within the event handler function, handleSubmit. The first thing I need to do in this submit handler is call the event’s preventDefault method to stop the browser from reloading the page to submit the form. 
I like to put this at the end of the event handler so if there is an exception thrown within the body of this function, preventDefault will not be called, and the browser will fall back to the default behavior: /** @param {Event} event */ function handleSubmit(event) { // Any JS that could fail goes here event.preventDefault(); } Next, we’ll want to construct the HTTP request using the Fetch API. The Fetch API expects the first argument to be a URL, and a second, optional argument as an Object. We can get the URL from the form’s action property. It’s available on any form DOM node, which we can access using the event’s currentTarget property. If the action is not defined in the HTML, it will default to the browser’s current URL: /** @param {Event} event */ function handleSubmit(event) { const form = event.currentTarget; const url = new URL(form.action); fetch(url); event.preventDefault(); } Relying on the HTML to define the URL makes it more declarative, keeps our event handler reusable, and our JavaScript bundles smaller. It also maintains functionality if the JavaScript fails. By default, Fetch sends HTTP requests using the GET method, but to upload a file, we need to use a POST method. We can change the method using fetch’s optional second argument. I’ll create a variable for that object and assign the method property, but once again, I’ll grab the value from the form’s method attribute in the HTML: const url = new URL(form.action); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, }; fetch(url, fetchOptions); Now the only missing piece is including the payload in the body of the request. If you’ve ever created a Fetch request in the past, you may have included the body as a JSON string or a URLSearchParams object. Unfortunately, neither of those will work to send a file, as they don’t have access to the binary file contents. Fortunately, there is the FormData browser API. 
We can use it to construct the request body from the form DOM node. And conveniently, when we do so, it even sets the request’s Content-Type header to multipart/form-data, which is also a necessary step to transmit the binary data: const url = new URL(form.action); const formData = new FormData(form); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, body: formData, }; fetch(url, fetchOptions); Recap That’s really the bare minimum needed to upload files with JavaScript. Let’s do a little recap: Access the file system using a file type input. Construct an HTTP request using the Fetch (or XMLHttpRequest) API. Set the request method to POST. Include the file in the request body. Set the HTTP Content-Type header to multipart/form-data. Today, we looked at a convenient way of doing that, using an HTML form element with a submit event handler, and using a FormData object in the body of the request. The current handleSubmit function should look like this: /** @param {Event} event */ function handleSubmit(event) { const form = event.currentTarget; const url = new URL(form.action); const formData = new FormData(form); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, body: formData, }; fetch(url, fetchOptions); event.preventDefault(); } GET and POST Requests Unfortunately, the current submit handler is not very reusable. Every request will include a body set to a FormData object and a “Content-Type” header set to multipart/form-data. This is too brittle. Bodies are not allowed in GET requests, and we may want to support different content types in other POST requests. We can make our code more robust to handle GET and POST requests, and send the appropriate Content-Type header. We’ll do so by creating a URLSearchParams object in addition to the FormData, and running some logic based on whether the request method should be POST or GET. I’ll try to lay out the logic below: Is the request using a POST method? 
Yes: Is the form’s enctype attribute multipart/form-data? Yes: set the body of the request to the FormData object. The browser will automatically set the “Content-Type” header to multipart/form-data. No: set the body of the request to the URLSearchParams object. The browser will automatically set the “Content-Type” header to application/x-www-form-urlencoded. No: We can assume it’s a GET request. Modify the URL to include the data as query string parameters. The refactored solution looks like: /** @param {Event} event */ function handleSubmit(event) { /** @type {HTMLFormElement} */ const form = event.currentTarget; const url = new URL(form.action); const formData = new FormData(form); const searchParams = new URLSearchParams(formData); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, }; if (form.method.toLowerCase() === 'post') { if (form.enctype === 'multipart/form-data') { fetchOptions.body = formData; } else { fetchOptions.body = searchParams; } } else { url.search = searchParams; } fetch(url, fetchOptions); event.preventDefault(); } I really like this solution for a number of reasons: It can be used for any form. It relies on the underlying HTML as the declarative source of configuration. The HTTP request behaves the same as with an HTML form. This follows the principle of progressive enhancement, so file upload works the same when JavaScript is working properly or when it fails. Conclusion So, that’s it. That’s uploading files with JavaScript. I hope you found this useful and plan to stick around for the whole series. In the next article, we’ll move to the back end to see what we need to do to receive files. Thank you so much for reading. If you liked this article, please share it. It's one of the best ways to support me.
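One thing the submit handler in this article never does is inspect the server's response. Assuming the endpoint behind the form's action replies with JSON (an assumption about your back end, not something the Fetch API guarantees), a small follow-up could look like this:

```javascript
// A sketch of response handling for the upload request.
// Assumes the endpoint replies with a JSON body; adjust to your API's contract.
async function handleResponse(response) {
  if (!response.ok) {
    // 4xx/5xx statuses do NOT reject the fetch promise, so check explicitly.
    throw new Error(`Upload failed with status ${response.status}`);
  }
  return response.json();
}

// Inside handleSubmit, you could then chain:
// fetch(url, fetchOptions).then(handleResponse).then(console.log).catch(console.error);
```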
In JavaScript, the Temporal Dead Zone (TDZ) is a behavior that occurs when trying to access a variable that has been declared but not yet initialized. This behavior can cause unexpected errors in your code if you’re not aware of it, so it’s important to understand how it works. In this blog post, we’ll explore what the Temporal Dead Zone is, why it happens, and how to avoid common pitfalls related to it. What Is the Temporal Dead Zone? The Temporal Dead Zone is a behavior that occurs when trying to access a variable before it has been initialized. When a variable is declared using the let or const keyword, it is hoisted to the top of its scope, but it is not initialized until the line where it was declared is executed. This means that any code that tries to access the variable before that line is executed will result in an error. For example, let’s declare a variable called logicSparkMessage using the let keyword: console.log(logicSparkMessage); // ReferenceError: Cannot access 'logicSparkMessage' before initialization let logicSparkMessage = "Welcome to LogicSpark!"; In this example, we’re trying to log the value of logicSparkMessage before it has been initialized, which results in a ReferenceError. This error occurs because we’re trying to access the variable within its Temporal Dead Zone, which is the time between the variable’s declaration and initialization. Why Does the Temporal Dead Zone Happen? The Temporal Dead Zone happens because of the way variables are hoisted in JavaScript. When a variable is declared using let or const, it is hoisted to the top of its scope, but it is not initialized until the line where it was declared is executed. This means that any code that tries to access the variable before that line is executed will result in an error. 
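The TDZ boundary can be observed safely with a try/catch. A minimal sketch (the function and variable names are illustrative):

```javascript
// Reading a `let` binding before its declaration line throws, even though
// the declaration itself is hoisted to the top of the function body.
function readBeforeInit() {
  let result;
  try {
    // `message` is hoisted but uninitialized here, so this read is
    // inside its Temporal Dead Zone and throws a ReferenceError.
    result = message;
  } catch (err) {
    result = err.name;
  }
  let message = "Welcome to LogicSpark!"; // the TDZ ends at this line
  return result;
}

console.log(readBeforeInit()); // "ReferenceError"
```

Catching the error like this is only for demonstration; in real code the fix is to move the read after the declaration.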
How to Avoid Common Pitfalls Related to the Temporal Dead Zone To avoid common pitfalls related to the Temporal Dead Zone, it’s important to always declare your variables at the top of their scope and initialize them before you access them. It is worth knowing that var behaves differently: a var declaration is hoisted and immediately initialized to undefined, so reading it before the assignment yields undefined instead of throwing. That said, relying on var hoisting is generally discouraged in modern JavaScript, because var also ignores block scope and can mask bugs that let and const would surface. For example, let’s declare a variable called logicSparkGreeting using the var keyword: console.log(logicSparkGreeting); // undefined var logicSparkGreeting = "Hello, from LogicSpark!"; console.log(logicSparkGreeting); // "Hello, from LogicSpark!" In this example, we’re declaring a variable called logicSparkGreeting using the var keyword, which is hoisted to the top of its scope and initialized to undefined. When we try to log the value of logicSparkGreeting, it returns undefined. After we initialize the variable with a value, we can log its value without any errors. Conclusion The Temporal Dead Zone is a behavior that occurs when trying to access a variable before it has been initialized. It happens because of the way variables are hoisted in JavaScript and can cause unexpected errors if you’re not aware of it. To avoid common pitfalls related to it, always declare and initialize your variables before you access them rather than relying on var hoisting. By understanding this behavior and taking the necessary precautions, you can write cleaner and more reliable code in your LogicSpark projects.
As per Stack Overflow insights, JavaScript is the most popular programming language. As the web and mobile grow more powerful by the day, JavaScript and JavaScript frameworks are becoming more popular. It would not be surprising to hear that JavaScript has become a preference for test automation as well. Over the past few years, a lot of development has happened around open-source, JavaScript-based test automation frameworks, and now we have multiple JavaScript testing frameworks that are robust enough to be used professionally. There are scalable frameworks that can be used by web developers and testers to automate even unit test cases and create complete end-to-end automation test suites. Mocha is one JavaScript testing framework that has been well regarded since 2016, as per the State of JS survey. With that said, when we talk about JavaScript automation testing, we can’t afford not to loop Selenium into the discussion. So I thought coming up with a step-by-step Mocha testing tutorial on the framework would be beneficial for you to kickstart your JavaScript automation testing with Mocha and Selenium. We will also be looking into how you can run it on the LambdaTest automation testing platform to get better browser coverage and faster execution times. By the end of this Mocha testing tutorial, you will have a clear understanding of the setup, installation, and execution of your first automation script with Mocha for JavaScript testing. What Will You Learn From This Mocha Testing Tutorial? In this article, we are going to dive deep into Mocha JavaScript testing to perform automated browser testing with Selenium and JavaScript. We will: Start with the installation and prerequisites for the Mocha framework and explore its advantages. Execute our first Selenium JavaScript test through Mocha with examples. Execute group tests. Use the assertion library. Encounter possible issues along with their resolutions.
Execute some Mocha test scripts on the Selenium cloud grid platform with minimal configuration changes and test on various browsers and operating systems. What Makes Mocha Prevalent? Mochajs, or simply Mocha, is a feature-rich JavaScript test framework that runs test cases on Node.js and in the browser, making testing simple and fun. By running tests serially, Mocha allows for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases. Mocha provides a categorical way to write structured code for testing applications by classifying them into test suites and test case modules for execution, and it produces a test report after the run by mapping errors to the corresponding test cases. What Makes Mocha a Better Choice Compared To Other JavaScript Testing Frameworks? Range of installation methods: It can be installed globally or as a development dependency for the project. Also, it can be set up to run test cases directly on the web browser. Various browser support: Can be used to run test cases seamlessly on all major web browsers and provides many browser-specific methods and options. Each revision of Mocha provides upgraded JavaScript and CSS builds for different web browsers. Number of ways to offer test reports: It provides users with a variety of reporting options, like list, progress, and JSON, to choose from, with a default reporter displaying the output based on the hierarchy of test cases. Support for several JavaScript assertion libraries: It helps users cut testing costs and speed up the process by being compatible with a set of JavaScript assertion libraries, including Expect.js, Should.js, and Chai. This multi-library support makes it easier for testers to write long, complex test cases.
Works in TDD and BDD environments: Mocha supports behavior-driven development (BDD) and test-driven development (TDD), allowing developers to write high-quality test cases and enhance test coverage. Support for synchronous and asynchronous testing: Unlike some other JavaScript testing frameworks, Mocha is designed with features that support asynchronous testing, whether through async/await or by invoking a callback once the test is finished. Omitting the callback makes the test synchronous. Setting Up Mocha and Initial Requirements Before we start our endeavor and explore more of Mocha testing, there are some important prerequisites we need to set up to get started with this Mocha testing tutorial for automation testing with Selenium and JavaScript: Node.js and npm: The Mocha module requires Node.js to be installed on the system. If it is not already present on the system, it can be installed with a package manager or by downloading the installer directly from the official Node.js website. Mocha package module: Once we have successfully installed Node.js on the system, we can make use of the node package manager, i.e., npm, to install the required package, which is Mocha. To install the latest version using the npm command line tool, we will first initialize npm using the below command: $ npm init Next, we will install the Mocha module using npm with the below command: $ npm install -g mocha Here, -g installs the module globally; it allows us to access and use the module like a command line tool and does not limit its use to the current project.
The --save-dev flag below will place the Mocha executable in our ./node_modules/.bin folder: $ npm install --save-dev mocha We will now be able to run the commands in our command line using the mocha keyword: Java SDK: Since the Selenium server is built upon Java, we will also install the Java Development Kit (preferably JDK 7.0 or above) on the system and configure the Java environment. Selenium WebDriver: We require the Selenium WebDriver package, which should already be present in our npm node modules. If it is not found in the modules, we can install the latest version of Selenium WebDriver using the below command: $ npm install selenium-webdriver Browser driver: Lastly, we will install the driver of the specific browser we are going to use. This executable also needs to be placed inside the same bin folder: $ npm install -g chromedriver Writing Our First Mocha JavaScript Testing Script We will create a project directory named mocha_test and then we will create a subfolder named scripts with a test script named single_test.js inside it. Finally, we will initialize our project by running the command npm init. This will create a package.json file in an interactive way, which will contain all our required project configurations. It will be required to execute our test script single_test.js.
Finally, we will have a file structure that looks like the below: mocha_test | -- scripts | -- single_test.js | -- package.json { "name": "mocha selenium test sample", "version": "1.0.0", "description": "Getting Started with Our First New Mocha Selenium Test Script and Executing it on a Local Selenium Setup", "scripts": { "test": "npm run single", "single": "./node_modules/.bin/mocha scripts/single_test.js" }, "author": "rohit", "license": "", "homepage": "https://mochajs.org", "keywords": [ "mocha", "bdd", "selenium", "examples", "test", "tdd", "tap", "framework" ], "dependencies": { "bluebird": "^3.7.2", "mocha": "^6.2.2", "selenium-webdriver": "^3.6.0" } } You have successfully configured your project and are ready to execute your first Mocha JavaScript testing script. You can now write your first test script in the file single_test.js that was created earlier: var assert = require('assert'); describe('IndexArray', function() { describe('#checkIndex negative()', function() { it('the function should return -1 when the value is not present', function(){ assert.equal(-1, [4,5,6].indexOf(7)); }); }); }); Code Walkthrough of Our Mocha JavaScript Testing Script We will now walk through the test script and understand what exactly is happening in the script we just wrote. When writing any Mocha test case in JavaScript, there are two basic function calls we should remember that do the job for us under the hood. These functions are: describe() it() We have used both of them in the test script we wrote above. describe(): Is mainly used to define the creation of test groups in Mocha in a simple way. The describe() function takes in two arguments as the input. The first argument is the name of the test group, and the second argument is a callback function. We can also have a nested test group in our test as per the requirement of the test case.
If we look at our test case now, we see that we have a test group named IndexArray, which has a callback function that has inside it a nested test group named #checkIndex negative() and inside of that, is another callback function that contains our actual test. it(): This function is used for writing individual Mocha JavaScript test cases. It should be written in plain language, conveying what the test does. The it() function also takes in two arguments as the input: the first argument is a string explaining what the test should do, and the second argument is a callback function, which contains our actual test. In the above Mocha JavaScript testing script, we see that the first argument of the it() function is written as “the function should return -1 when the value is not present” and the second argument is a callback function that contains our test condition with the assertion. Assertion: Assertion libraries are used to verify whether the condition given to them is true or false. Our script verifies the test results with the assert.equal(actual, expected) method, which makes an equality test between our actual and expected parameters. It makes our testing easier by using the Node.js built-in assert module. In our Mocha JavaScript testing script, we are not using an entire assertion library, as we only require the assert module with one line of code for this Mocha testing tutorial. If the actual parameter equals the expected parameter, the test passes. If it doesn’t, assert throws an AssertionError and the test fails.
It is important to check whether the below section is present in our package.json file as this contains the configurations of our Mocha JavaScript testing script: "scripts": { "test": "npm run single", "single": "./node_modules/.bin/mocha scripts/single_test.js" }, Finally, we can run our test in the command line and execute from the base directory of the project using the below command: $ npm test or $ npm run single The output of the above test is: This indicates we have successfully passed our test and the assert condition is giving us the proper return value of the function based on our test input. Let us extend it further and add one more test case to our test suite and execute the test. Now, our Mocha JavaScript testing script single_test.js will have one more test that will check the positive scenario and give the corresponding output: var assert = require('assert'); describe('IndexArray', function() { describe('#checkIndex negative()', function() { it('the function should return -1 when the value is not present', function(){ assert.equal(-1, [4,5,6].indexOf(7)); }); }); describe('#checkIndex positive()', function() { it('the function should return 0 when the value is present', function(){ assert.equal(0, [8,9,10].indexOf(8)); }); }); }); The output of the above Mocha JavaScript testing script is: You have successfully executed your first Mocha JavaScript testing script on your local machine for Selenium and JavaScript execution. Note: If you have a larger test suite for cross browser testing with Selenium JavaScript, execution on local infrastructure is not your best call. Drawbacks of Local Automated Testing Setup As you expand your web application, you bring in new code changes, overnight hotfixes, and more. With these changes come new testing requirements, so your Selenium automation testing scripts are bound to grow; you may need to test across more browsers, more browser versions, and more operating systems.
This becomes a challenge when you perform JavaScript Selenium testing through the local setup. Some of the major pain points of performing Selenium JavaScript testing on the local setup are: There is a limitation that the testing can only be performed locally, i.e., on the browsers that are installed locally in the system. This is not beneficial when there is a requirement to execute cross browser testing and perform the test on all the major browsers available for successful results. The test team might not be aware of all the new browser versions, so compatibility with them may not be tested properly. There is a need to devise a proper cross browser testing strategy to ensure satisfactory test coverage. There are scenarios where it is required to execute tests on some legacy browsers or browser versions for a specific set of users and operating systems. It might be necessary to test the application on various combinations of browsers and operating systems, and that is not easily available with a local, in-house system setup. Now, you may be wondering about a way to overcome these challenges. Well, don’t stress too much, because an online Selenium Grid comes to your rescue. Executing Mocha Script Using Remote Selenium WebDriver on LambdaTest Selenium Grid Since we know that executing our test script on the cloud grid has great benefits to offer, let us get our hands dirty on the same. The process of executing a script on the LambdaTest Selenium Grid is fairly straightforward and exciting. We can execute our local test script by adding a few lines of code that are required to connect to the LambdaTest platform: It gives us the privilege to execute our test on different browsers seamlessly. It has all the popular operating systems and also provides us the flexibility to make various combinations of the operating system and browsers. We can pass on our environment and config details from within the script itself.
The test scripts can be executed in parallel to save execution time. It provides us with an interactive user interface and dashboard to view and analyze test logs. It also provides us the desired capabilities generator with an interactive user interface, which is used to select the environment specification details with various combinations to choose from. So, in our case, the multiCapabilities class in the single.conf.js and parallel.conf.js configuration files will look similar to the below: multiCapabilities: [ { // Desired Capabilities build: "Mocha Selenium Automation Parallel Test", name: "Mocha Selenium Test Firefox", platform: "Windows 10", browserName: "firefox", version: "71.0", visual: false, tunnel: false, network: false, console: false } ] Next, the most important thing is to generate our access key token, which is basically a secret key to connect to the platform and execute automated tests on LambdaTest. This access key is unique to every user and can be copied and regenerated from the profile section of the user account as shown below. The information regarding the access key, username, and hub can alternatively be fetched from the LambdaTest user profile page Automation dashboard, which looks like the one shown in the screenshot below. Accelerating With Parallel Testing Using LambdaTest Selenium Grid In our demonstration, we will be creating a script that uses the Selenium WebDriver to make a search, open a website, and assert whether the correct website is open. If the assert returns true, it indicates that the test case passed successfully and will show up in the automation logs dashboard. If the assert returns false, the test case fails, and the errors will be displayed in the automation logs. Now, since we are using LambdaTest, we would like to leverage it and execute our tests on different browsers and operating systems.
We will execute our test script as below: Single test: On a single environment (Windows 10) and single browser (Chrome). Parallel test: On a parallel environment, i.e., different operating systems (Windows 10 and Mac OS Catalina) and different browsers (Chrome, Mozilla Firefox, and Safari). Here we will create a new subfolder in our project directory, i.e., conf. This folder will contain the configurations that are required to connect to the LambdaTest platform. We will create single.conf.js and parallel.conf.js, where we need to declare the user configuration, i.e., username and access key, along with the desired capabilities for both our single test and parallel test cases. Now, we will have a file structure that looks like below: LT_USERNAME = process.env.LT_USERNAME || "irohitgoyal"; // Lambda Test User name LT_ACCESS_KEY = process.env.LT_ACCESS_KEY || "1267367484683738"; // Lambda Test Access key //Configurations var config = { commanCapabilities: { build: "Mocha Selenium Automation Parallel Test", // Build Name to be displayed in the test logs tunnel: false // It is required if we need to run the localhost through the tunnel }, multiCapabilities: [ { // Desired Capabilities, this is very important to configure name: "Mocha Selenium Test Firefox", // Test name to distinguish among test cases platform: "Windows 10", // Name of the Operating System browserName: "firefox", // Name of the browser version: "71.0", // browser version to be used visual: false, // whether to take step by step screenshot, we made it false for now network: false, // whether to capture network logs, we made it false for now console: false // whether to capture console logs, we made it false for now }, { name: "Mocha Selenium Test Chrome", // Test name to distinguish among test cases platform: "Windows 10", // Name of the Operating System browserName: "chrome", // Name of the browser version: "79.0", // browser version to be used visual: false, // whether to take step by step
screenshot, we made it false for now network: false, // whether to capture network logs, we made it false for now console: false // whether to capture console logs, we made it false for now }, { name: "Mocha Selenium Test Safari", // Test name to distinguish among test cases platform: "MacOS Catalina", // Name of the Operating System browserName: "safari", // Name of the browser version: "13.0", // browser version to be used visual: false, // whether to take step by step screenshot, we made it false for now network: false, // whether to capture network logs, we made it false for now console: false // whether to capture console logs, we made it false for now } ] }; exports.capabilities = []; // Code to integrate and support common capabilities config.multiCapabilities.forEach(function(caps) { var temp_caps = JSON.parse(JSON.stringify(config.commanCapabilities)); for (var i in caps) temp_caps[i] = caps[i]; exports.capabilities.push(temp_caps); }); var assert = require("assert"), // declaring assert webdriver = require("selenium-webdriver"), // declaring selenium web driver conf_file = process.argv[3] || "conf/single.conf.js"; // passing the configuration file var caps = require("../" + conf_file).capabilities; // Build the web driver that we will be using in Lambda Test var buildDriver = function(caps) { return new webdriver.Builder() .usingServer( "http://" + LT_USERNAME + ":" + LT_ACCESS_KEY + "@hub.lambdatest.com/wd/hub" ) .withCapabilities(caps) .build(); }; // declaring the test group Search Engine Functionality for Single Test Using Mocha in Browser describe("Search Engine Functionality for Single Test Using Mocha in Browser " + caps.browserName, function() { var driver; this.timeout(0); // adding the beforeEach hook that triggers before the test execution beforeEach(function(done) { caps.name = this.currentTest.title; driver = buildDriver(caps); done(); }); // defining the test case to be executed it("should find the required search result
in the browser ", function(done) { driver.get("https://www.mochajs.org").then(function() { driver.getTitle().then(function(title) { setTimeout(function() { console.log(title); assert( title.match( "Mocha | The fun simple flexible JavaScript test framework | JavaScript | Automated Browser Test" ) != null ); done(); }, 10000); }); }); }); // adding the after event that triggers to check if the test passed or failed afterEach(function(done) { if (this.currentTest.isPassed) { driver.executeScript("lambda-status=passed"); } else { driver.executeScript("lambda-status=failed"); } driver.quit().then(function() { done(); }); }); }); var assert = require("assert"), // declaring assert webdriver = require("selenium-webdriver"), // declaring selenium web driver conf_file = process.argv[3] || "conf/parallel.conf.js"; // passing the configuration file var capabilities = require("../" + conf_file).capabilities; // Build the web driver that we will be using in Lambda Test var buildDriver = function(caps) { return new webdriver.Builder() .usingServer( "http://" + LT_USERNAME + ":" + LT_ACCESS_KEY + "@hub.lambdatest.com/wd/hub" ) .withCapabilities(caps) .build(); }; capabilities.forEach(function(caps) { // declaring the test group Search Engine Functionality for Parallel Test Using Mocha in Browser describe("Search Engine Functionality for Parallel Test Using Mocha in Browser " + caps.browserName, function() { var driver; this.timeout(0); // adding the before event that triggers before the rest execution beforeEach(function(done) { caps.name = this.currentTest.title; driver = buildDriver(caps); done(); }); // defining the test case to be executed it("should find the required search result in the browser " + caps.browserName, function(done) { driver.get("https://www.mochajs.org").then(function() { driver.getTitle().then(function(title) { setTimeout(function() { console.log(title); assert( title.match( "Mocha | The fun simple flexible JavaScript test framework | JavaScript | Automated 
Browser Test" ) != null ); done(); }, 10000); }); }); }); // adding the after event that triggers to check if the test passed or failed afterEach(function(done) { if (this.currentTest.isPassed) { driver.executeScript("lambda-status=passed"); } else { driver.executeScript("lambda-status=failed"); } driver.quit().then(function() { done(); }); }); }); }); Finally, we have our package.json that has an additional added configuration for parallel testing and required files: "scripts": { "test": "npm run single && npm run parallel", "single": "./node_modules/.bin/mocha specs/single_test.js conf/single.conf.js", "parallel": "./node_modules/.bin/mocha specs/parallel_test.js conf/parallel.conf.js --timeout=50000" }, { "name": "mocha selenium automation test sample", "version": "1.0.0", "description": " Getting Started with Our First New Mocha Selenium Test Script and Executing it on a Local Selenium Setup", "scripts": { "test": "npm run single && npm run parallel", "single": "./node_modules/.bin/mocha scripts/single_test.js conf/single.conf.js", "parallel": "./node_modules/.bin/mocha scripts/parallel_test.js conf/parallel.conf.js --timeout=50000" }, "author": "rohit", "license": "" , "homepage": "https://mochajs.org", "keywords": [ "mocha", "bdd", "selenium", "examples", "test", "bdd", "tdd", "tap" ], "dependencies": { "bluebird": "^3.7.2", "mocha": "^6.2.2", "selenium-webdriver": "^3.6.0" } } The final thing we should do is execute our tests from the base project directory by using the below command: $ npm test This command will validate the test cases and execute our test suite, i.e., the single test and parallel test cases. Below is the output from the command line. Now, if we open the LambdaTest platform and check the user interface, we will see the test runs on Chrome, Firefox, and Safari browsers on the environment specified, i.e., Windows 10 and Mac OS, and the test is passed successfully with positive results. 
Below, we see a screenshot that shows our Mocha code running on different browsers, i.e., Chrome, Firefox, and Safari, on the LambdaTest Selenium Grid Platform. The results of the test script execution, along with the logs, can be accessed from the LambdaTest Automation dashboard. Alternatively, if we want to execute the single test, we can execute the following command: $ npm run single To execute the test cases in different environments in parallel, run the below command: $ npm run parallel Wrap Up! This concludes our Mocha testing tutorial, and now we have a clear idea about what Mocha is and how to set it up. It allows us to automate the entire test suite and get started quickly with minimal configuration, and the tests are readable and easy to update. We are now able to perform an end-to-end test using group tests and the assertion library. The test case results can be fetched directly from the command line terminal.
If you're new to coding, the term 'slice method' may be daunting. Put simply, the slice method is a powerful JavaScript tool that lets you extract sections of an array or string. It's one of those methods that, once you understand and can use, can make your developer life much easier! To start off with the basics, imagine a JavaScript array as a bunch of books on a shelf. The JavaScript slice method allows you to take out one or more books from the shelf without rearranging the remaining ones. It takes two arguments, a start index and an end index, which determine which part of the array will be returned in the new one. Both these indexes are completely optional, so if you leave them out, they default to 0 (the start of the array) and the array's length (one past the last element). By using this powerful method, you can easily retrieve part of an array or string, create substrings and arrays from existing ones, and even produce a copy of an array with certain elements left out, all without mutating the original. As an example, let's say we have an array called sentenceArray containing a sentence broken down into individual words: const sentenceArray = ['The', 'slice', 'method', 'is', 'super', 'useful'] Using the JavaScript slice method with only one argument--sentenceArray.slice(2)--we can create a new array containing all elements starting from index 2: result = ['method', 'is', 'super', 'useful']. Pretty neat, huh? Stay tuned for more practical examples! Syntax and Parameters for the JavaScript Slice Method The JavaScript slice() method is used to select a portion of an array. It copies the selected elements from the original array and returns them as a new array. This makes it easy to pull out only the data you need without having to iterate through the entire array and select the elements by hand.
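The defaults described above are easy to verify in a console. A quick sketch reusing the sentenceArray example:

```javascript
const sentenceArray = ["The", "slice", "method", "is", "super", "useful"];

// No arguments: start defaults to 0 and end defaults to array.length,
// so slice() returns a shallow copy of the whole array.
const copy = sentenceArray.slice();
console.log(copy); // ["The", "slice", "method", "is", "super", "useful"]
console.log(copy === sentenceArray); // false, a new array, not the same reference

// One argument: everything from that index to the end.
console.log(sentenceArray.slice(2)); // ["method", "is", "super", "useful"]

// Negative indices count back from the end of the array.
console.log(sentenceArray.slice(-2)); // ["super", "useful"]
```

The no-argument form is a common idiom for copying an array before mutating the copy, precisely because the original stays untouched.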
The syntax for using this method looks like this: array.slice(start, end) where start is the index which specifies where to start slicing (default is 0) and end is the index at which to end slicing (defaults to array length). You can also leave out one of either parameter, in which case start or end will default as described above. For example, let's say you have an array of pets, and you want to select only dogs from it. Using the JavaScript slice() method, you could write something like this: const pets = ["dog", "cat", "hamster", "gerbil", "parakeet"]; const dogs = pets.slice(0, 1); // returns ["dog"] Using the JS slice() in this way, you can quickly and easily organize your data however you need it! 3 Practical Use Cases for the Javascript Slice Method Your JavaScript journey isn't complete until you understand the JS slice method. It's a powerful tool that can do lots of amazing things, so let's jump into three practical use cases for the JS slice method. Extracting a Subarray From an Array const arr = [1, 2, 3, 4, 5]; const subArr = arr.slice(2, 4); // [3, 4] In this example, the JS slice() method is used to extract a subarray of arr starting from index 2 and ending at index 4 (exclusive), which gives us the values [3, 4]. Removing Elements From an Array const arr = [1, 2, 3, 4, 5]; const newArr = arr.slice(0, 2).concat(arr.slice(3)); // [1, 2, 4, 5] In this example, the JS slice() method is used to remove the element at index 2 from arr. We first extract a subarray of arr containing the elements before the one we want to remove (that is, [1, 2]) and another subarray containing the elements after the one we want to remove (that is, [4, 5]). We then concatenate these two subarrays using the concat() method to get the new array [1, 2, 4, 5]. 
Extracting a Substring From a String const str = "Hello, world!"; const subStr = str.slice(0, 5); // "Hello" In this example, the JS slice() method is used to extract a substring of str starting from index 0 and ending at index 5 (exclusive), which gives us the string "Hello". Difference Between the slice(), splice() and substring() Methods Have you ever wondered what the difference is between the JS slice(), splice() and substring() methods? If so, you're not alone. Let's look at a quick comparison of the three methods to help you understand how they differ. The JavaScript slice() method extracts a part of an array from the starting index to the end index but does not change the existing array, while splice() changes the original array by adding/removing elements from it, and substring() extracts characters from a string and does not change the original string. slice(): This method takes two arguments: startIndex and endIndex. It returns a shallow copy of an array that starts at startIndex and ends before endIndex. It copies up to but not including endIndex. If startIndex is undefined, this method copies all elements from the beginning up to endIndex; if no arguments are provided, it returns a shallow copy of the entire array. splice(): This method takes two main arguments, startIndex and deleteCount (optional), plus any number of items to insert. It removes deleteCount elements from the array starting at startIndex. It returns an array containing the deleted elements, or an empty array if no elements were deleted. This method changes the original array, as it mutates it by adding/removing the specified elements. substring(): This method takes two arguments: startIndex (optional) and endIndex (optional). It returns the characters in a string starting at startIndex until before endIndex, without altering the original string. If no arguments are provided, this method returns a copy of the entire string.
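The mutation difference is the one that bites in practice, so here is a small side-by-side sketch of the three methods:

```javascript
const letters = ["a", "b", "c", "d"];

// slice: non-mutating extraction from an array.
const sliced = letters.slice(1, 3);
console.log(sliced);  // ["b", "c"]
console.log(letters); // ["a", "b", "c", "d"], unchanged

// splice: mutating removal (and optional insertion) in place.
const removed = letters.splice(1, 2);
console.log(removed); // ["b", "c"], the deleted elements
console.log(letters); // ["a", "d"], the original array was changed

// substring: non-mutating extraction from a string.
const word = "JavaScript";
console.log(word.substring(0, 4)); // "Java"
console.log(word);                 // "JavaScript", unchanged
```

If you only remember one thing, make it this: splice() is the odd one out, because it is the only one of the three that modifies its receiver.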
Best Practices for the slice() Method The JavaScript slice() method is a powerful tool for manipulating arrays, but there are some best practices you should know about if you want to get the most out of it. Let’s take a look at a few: Use positive numbers when referring to the index. If you need to refer to elements in an array, it's usually clearer to use positive indices rather than negative ones (which count from the end of the array), because if the size of your array later changes, negative indices can silently point at different elements. Use splice() for modifying data in an array. If you’re looking to modify data within an array in place, say, deleting the element at index two so that all later elements shift down by one, use splice() rather than slice(). splice() mutates the array directly and gives you more control over what happens to each element. Remember that strings have their own slice() method. If your data is stored as a string, you can call slice() on it directly; there is no need to convert it into an array first. Convert a string into an array (for example, with split()) only when you need array-specific operations on its characters. Conclusion All in all, the JavaScript slice() method is a great way to quickly and efficiently extract data from JavaScript arrays. Not only is it relatively straightforward to use, but it also has some great features, like the ability to work with negative values for the “start” and “end” parameters, making it a very powerful tool. It’s important to remember the differences between “slice” and “splice” and to use the right tool for the right job. With a bit of practice, slice() will become an integral part of your web development toolkit, making it easy to control and manipulate data arrays.
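One point worth demonstrating alongside the advice above: String.prototype.slice exists, so slicing a string does not require converting it to an array first; conversion (e.g., with split('')) is only needed for array-specific operations:

```javascript
const str = "JavaScript";

// Strings can be sliced directly — no conversion needed.
console.log(str.slice(0, 4));  // "Java"
console.log(str.slice(-6));    // "Script"

// Convert only when you need array operations such as reverse().
const reversed = str.split('').reverse().join('');
console.log(reversed);         // "tpircSavaJ"
```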
In this article, I’ll explain how to use database hooks in your Node.js applications to solve specific problems that might arise in your development journey. Many applications require little more than establishing a connection pool between a server, database, and executing queries. However, depending on your application and database deployments, additional configurations might be necessary. For example, multi-region distributed SQL databases can be deployed with different topologies depending on the application use case. Some topologies require setting properties on the database on a per-session basis. Let’s explore some of the hooks made available by some of the most popular database clients and ORMs in the Node.js ecosystem. Laying the Foundation The Node.js community has many drivers to choose from when working with the most popular relational databases. Here, I’m going to focus on PostgreSQL-compatible database clients, which can be used to connect to YugabyteDB or another PostgreSQL database. Sequelize, Prisma, Knex and node-postgres are popular clients with varying feature sets depending on your needs. I encourage you to read through their documentation to determine which best suits your needs. These clients come with hooks for different use cases. For instance: Connection hooks: Execute a function immediately before or after connecting and disconnecting from your database. Logging hooks: Log messages to stdout at various log levels. Lifecycle hooks: Execute a function immediately before or after making calls to the database. In this article, I’ll cover some of the hooks made available by these clients and how you can benefit from using them in your distributed SQL applications. I’ll also demonstrate how to use hooks to hash a user's password before creation and how to set runtime configuration parameters after connecting to a multi-region database with read replicas. 
Sequelize The Sequelize ORM has a number of hooks for managing the entire lifecycle of your database transactions. The beforeCreate lifecycle hook can be used to hash a password before creating a new user: JavaScript User.beforeCreate(async (user, options) => { const hashedPassword = await hashPassword(user.password); user.password = hashedPassword; }); Next, I’m using the afterConnect connection hook to set session parameters. With this YugabyteDB deployment, you can execute reads from followers to reduce latencies, and eliminate the need to read from the primary cluster nodes: JavaScript const config = { host: process.env.DB_HOST, port: 5433, dialect: "postgres", dialectOptions: { ssl: { require: true, rejectUnauthorized: true, ca: [CERTIFICATE], }, }, pool: { max: 5, min: 1, acquire: 30000, idle: 10000, }, hooks: { async afterConnect(connection) { if (process.env.DB_DEPLOYMENT_TYPE === "multi_region_with_read_replicas") { await connection.query("set yb_read_from_followers = true; set session characteristics as transaction read only;"); } }, }, }; const connection = new Sequelize( process.env.DATABASE_NAME, process.env.DATABASE_USER, process.env.DATABASE_PASSWORD, config ); By using this hook, each database session in the connection pool will set these parameters upon establishing a new connection: set yb_read_from_followers = true;: This parameter controls whether or not reading from followers is enabled. set session characteristics as transaction read only;: This parameter applies the read-only setting to all statements and transaction blocks that follow. Prisma Despite being the ORM of choice for many in the Node.js community, at the time of writing, Prisma doesn’t contain many of the built-in hooks found in Sequelize. Currently, the library contains hooks to handle the query lifecycle, logging, and disconnecting, but offers no help before or after establishing connections. 
Here’s how you can use Prisma’s lifecycle middleware to hash a password before creating a user: JavaScript prisma.$use(async (params, next) => { if (params.model == 'User' && params.action == 'create') { params.args.data.password = await hashPassword(params.args.data.password); } return next(params) }) const create = await prisma.user.create({ data: { username: 'bhoyer', password: 'abc123' }, }) To set session parameters to make use of our read replicas, we’ll have to execute a statement before querying our database: JavaScript await prisma.$executeRaw(`set yb_read_from_followers = true; set session characteristics as transaction read only;`); const users = await prisma.user.findMany(); If you need to immediately establish a connection in your connection pool to set a parameter, you can connect explicitly with Prisma to forgo the lazy connection typical of connection pooling. Prisma has the log levels of query, error, info, and warn. Queries can be handled as events using event-based logging: JavaScript const prisma = new PrismaClient({ log: [ { emit: 'event', level: 'query', }, { emit: 'stdout', level: 'error', }, { emit: 'stdout', level: 'info', }, { emit: 'stdout', level: 'warn', }, ], }); prisma.$on('query', (e) => { console.log('Query: ' + e.query); console.log('Params: ' + e.params); console.log('Duration: ' + e.duration + 'ms'); }); This can be helpful in development when working on query tuning in a distributed system. Here’s how you can make use of the beforeExit hook to access the database before exiting: JavaScript const prisma = new PrismaClient(); prisma.$on('beforeExit', async () => { // PrismaClient still available await prisma.issue.create({ data: { message: 'Connection exiting.' }, }) }); Knex Knex is a lightweight query builder, but it does not have the query middleware found in more full-featured ORMs.
To hash a password, you can process this manually using a custom function: JavaScript async function handlePassword(password) { const hashedPassword = await hashPassword(password); return hashedPassword; } const password = await handlePassword(params.password); knex('users').insert({...params, password}); The syntax required to achieve a connection hook in the Knex.js query builder is similar to that of Sequelize. Here’s how we can set our session parameters to read from YugabyteDB’s replica nodes: JavaScript const knex = require('knex')({ client: 'pg', connection: {/*...*/}, pool: { afterCreate: function (connection, done) { connection.query('set yb_read_from_followers = true; set session characteristics as transaction read only;', function (err) { if (err) { // Query failed; hand the error back with the connection done(err, connection); } else { console.log("Reading from replicas."); done(null, connection); } }); } } }); node-postgres The node-postgres library is the most low-level of the libraries discussed. Under the hood, the Node.js EventEmitter is used to emit connection events. A connect event is triggered when a new connection is established in the connection pool. Let’s use it to set our session parameters.
I’ve also added an error hook to catch and log all error messages: JavaScript const config = { user: process.env.DB_USER, host: process.env.DB_HOST, password: process.env.DB_PASSWORD, port: 5433, database: process.env.DB_NAME, min: 1, max: 10, idleTimeoutMillis: 5000, connectionTimeoutMillis: 5000, ssl: { rejectUnauthorized: true, ca: [CERTIFICATE], servername: process.env.DB_HOST, } }; const pool = new Pool(config); pool.on("connect", (c) => { c.query("set yb_read_from_followers = true; set session characteristics as transaction read only;"); }); pool.on("error", (e) => { console.log("Connection error: ", e); }); There aren’t any lifecycle hooks at our disposal with node-postgres, so hashing our password will have to be done manually, like with Prisma: JavaScript async function handlePassword(password) { const hashedPassword = await hashPassword(password); return hashedPassword; } const password = await handlePassword(params.password); const user = await pool.query('INSERT INTO user(username, password) VALUES ($1, $2) RETURNING *', [params.username, password]); Wrapping Up As you can see, hooks can solve a lot of the problems previously addressed by complicated and error-prone application code. Each application has a different set of requirements and brings new challenges. You might go years before you need to utilize a particular hook in your development process, but now, you’ll be ready when that day comes. Look out for more from me on Node.js and distributed application development. Until then, keep on coding!
There are multiple ways you can deploy your Nodejs app, be it on-cloud or on-premises. However, it is not just about deploying your application, but deploying it correctly. Security is also an important aspect that must not be ignored; if it is, the application won’t stand long, meaning there is a high chance of it getting compromised. Hence, here we are to help you with the steps to deploy a Nodejs app to AWS. We will show you exactly how to deploy a Nodejs app to the server using Docker containers, RDS Amazon Aurora, Nginx with HTTPS, and access it using the Domain Name. Tool Stack To Deploy a Node.js App to AWS Nodejs sample app: A sample Nodejs app with three APIs viz, status, insert, and list. These APIs will be used to check the status of the app, insert data in the database, and fetch and display the data from the database. AWS EC2 instance: An Ubuntu 20.04 LTS Amazon Elastic Compute Cloud (Amazon EC2) instance will be used to deploy the containerized Nodejs app. We will install Docker on this instance, on top of which the containers will be created. We will also install a MySQL client on the instance. A MySQL client is required to connect to the Aurora instance to create a required table. AWS RDS Amazon Aurora: Our data will be stored in AWS RDS Amazon Aurora. Simple fields like username, email-id, and age will be stored in the AWS RDS Amazon Aurora instance. Amazon Aurora is a MySQL and PostgreSQL-compatible relational database available on AWS. Docker: Docker is a containerization platform used to build Docker images and deploy them as containers. We will deploy the Nodejs app, Nginx, and Certbot as Docker containers. Docker-Compose: To spin up the Nodejs, Nginx, and Certbot containers, we will use Docker-Compose. Docker-Compose helps reduce container deployment and management time. Nginx: This will be used to enable HTTPS for the sample Nodejs app and redirect all user requests to the Nodejs app.
It will act as a reverse proxy to redirect user requests to the application and help secure the connection by providing the configuration to enable SSL/HTTPS. Certbot: This will enable us to automatically use “Let’s Encrypt” for Domain Validation and issuing SSL certificates. Domain: At the end of the doc, you will be able to access the sample Nodejs Application using your domain name over HTTPS, i.e., your sample Nodejs will be secured over the internet. PostMan: We will use PostMan to test our APIs, i.e., to check status, insert data, and list data from the database. As I said, we will “deploy a Nodejs app to the server using Docker containers, RDS Amazon Aurora, Nginx with HTTPS, and access it using the Domain Name.” Let’s first understand the architecture before we get our hands dirty. Architecture Deploying a Nodejs app to an EC2 instance using Docker will be available on port 3000. This sample Nodejs app fetches data from the RDS Amazon Aurora instance created in the same VPC as that of the EC2 instance. An Amazon Aurora DB instance will be private and, hence, accessible within the same VPC. The Nodejs application deployed on the EC2 instance can be accessed using its public IP on port 3000, but we won’t. Accessing applications on non-standard ports is not recommended, so we will have Nginx that will act as a Reverse Proxy and enable SSL Termination. Users will try to access the application using the Domain Name and these requests will be forwarded to Nginx. Nginx will check the request, and, based on the API, it will redirect that request to the Nodejs app. The application will also be terminated with the SSL. As a result, the communication between the client and server will be secured and protected. 
Here is the architecture diagram that gives the clarity of deploying a Nodejs app to AWS: Prerequisites Before we proceed to deploying a Nodejs app to AWS, it is assumed that you already have the following prerequisites: AWS account PostMan or any other alternative on your machine to test APIs. A registered Domain in your AWS account. Create an Ubuntu 20.04 LTS EC2 Instance on AWS Go to AWS’ management console sign-in page and log into your account. After you log in successfully, go to the search bar and search for “EC2.” Next, click on the result to visit the EC2 dashboard to create an EC2 instance: Here, click on “Launch instances” to configure and create an EC2 instance: Select the “Ubuntu Server 20.04 LTS” AMI: I would recommend you select t3.small only for test purposes. This will have two CPUs and 2GB RAM. You can choose the instance type as per your need and choice: You can keep the default settings and proceed ahead. Here, I have selected the default VPC. If you want, you can select your VPC. Note: Here, I will be creating an instance in the public subnet: It’s better to put a larger disk space at 30GB. The rest can be the default: Assign a “Name” and “Environment” tag to any values of your choice. You may even skip this step: Allow the connection to port 22 only from your IP. If you allow it from 0.0.0.0/0, your instance will allow anyone on port 22: Review the configuration once, and click on “Launch” if everything looks fine to create an instance: Before the instance gets created, it needs a key-pair. You can create a new key-pair or use the existing one. Click on the “Launch instances” button that will initiate the instance creation: To go to the console and check your instance, click on the “View instances” button: Here, you can see that the instance has been created and is in the “Initiating” phase. Within a minute or two, you can see your instance up and running. 
Meanwhile, let’s create an RDS instance: Create an RDS Aurora With a MySQL Instance on AWS Go to the search bar at the top of the page and search for “RDS.” Click on the result to visit the “RDS Dashboard.” On the RDS Dashboard, click on the “Create database” button to configure and create the RDS instance: Choose the “Easy create” method, “Amazon Aurora” engine type, and the “Dev/Test” DB instance size as follows: Scroll down a bit and specify the “DB cluster identifier” as “my-Nodejs-database.” You can specify any name of your choice as it is just a name given to the RDS instance; however, I would suggest using the same name so you do not get confused while following the next steps. Also, specify a master username as “admin,” its password, and then click on “Create database.” This will initiate the RDS Amazon Aurora instance creation. Note: For production or live environments, you must not set simple usernames and passwords: Here, you can see that the instance is in the “Creating” state. In around 5-10 minutes, you should have the instance up and running: Make a few notes here: The RDS Amazon Aurora instance will be private by default, which means the RDS Amazon Aurora instance will not be reachable from the outside world and will only be available within the VPC. The EC2 instance and RDS instance belong to the same VPC. The RDS instance is reachable from the EC2 instance. Install Dependencies on the EC2 Instance Now, you can connect to the instance we created. I will not get into details on how to connect to the instance and I believe that you already know it. MySQL Client We will need a MySQL client to connect to the RDS Amazon Aurora instance and create a database in it. Connect to the EC2 instance and execute the following commands from it: sudo apt update sudo apt install mysql-client Create a Table We will need a table in our RDS Amazon Aurora instance to store our application data. 
To create a table, connect to the Amazon RDS Aurora instance using the MySQL client we installed on the EC2 instance in the previous step. Copy the database endpoint from the Amazon Aurora instance. Execute the following command with the correct values: mysql -u <user-name> -p<password> -h <host-endpoint> Here, my command looks as follows: mysql -u admin -padmin1234 -h (here). Once you are connected to the Amazon RDS Aurora instance, execute the following commands to create a table named “users:” show databases; use main; CREATE TABLE IF NOT EXISTS users(id int NOT NULL AUTO_INCREMENT, username varchar(30), email varchar(255), age int, PRIMARY KEY(id)); select * from users; Refer to the following screenshot to understand the command executions: Create an Application Directory Now, let’s create a directory where we will store all our codebase and configuration files: pwd cd /home/ubuntu/ mkdir Nodejs-docker cd Nodejs-docker Clone the Code Repository on the EC2 Instance Clone my GitHub repository containing all the code. This is an optional step; I have included all the code in this document: pwd cd /home/ubuntu/ git clone cp /home/ubuntu/DevOps/AWS/Nodejs-docker/* /home/ubuntu/Nodejs-docker Note: This is an optional step. If you copy all the files from the repository to the application directory, you do not need to create files in the upcoming steps; however, you will still need to make the necessary changes. Deploying Why Should You Use Docker in Your EC2 Instance? Docker is a containerization tool used to package our software application into an image that can be used to create Docker containers. Docker helps to build, share, and deploy our applications easily.
The first step of Dockerization is installing Docker: Install Docker Check the Linux version: cat /etc/issue Update the apt package index: sudo apt-get update Install packages to allow apt to use a repository over HTTPS: sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release Add Docker’s official GPG key: curl -fsSL (here) | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg Set up the stable repository: echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] (here) $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null Update the apt package index again: sudo apt-get update Install the latest version of Docker Engine and containerd: sudo apt-get install docker-ce docker-ce-cli containerd.io Check the Docker version: docker --version Manage Docker as a non-root user: Create the ‘docker’ group: sudo groupadd docker Add your user to the docker group: sudo usermod -aG docker <your-user-name> Exit: exit Log back into the terminal and verify that you can run Docker commands without sudo: docker run hello-world Upon executing the above run command, you should see the output as follows. Refer to the following screenshot to see the commands that I have executed: Dockerize Your Node.js Application in the EC2 Instance Once you have Docker installed, the next step is to Dockerize the app. Dockerizing a Nodejs app means writing a Dockerfile with a set of instructions to create a Docker image.
Let’s create a Dockerfile and a sample Nodejs app: pwd cd /home/ubuntu/Nodejs-docker Create the Dockerfile and paste the following in it; alternatively, you can copy the content from my GitHub repository here: vim Dockerfile: #Base Image node:12.18.4-alpine FROM node:12.18.4-alpine #Set working directory to /app WORKDIR /app #Set PATH /app/node_modules/.bin ENV PATH /app/node_modules/.bin:$PATH #Copy package.json in the image COPY package.json ./ #Install Packages RUN npm install express --save RUN npm install mysql --save #Copy the app COPY . ./ #Expose application port EXPOSE 3000 #Start the app CMD ["node", "index.js"] Create index.js and paste the following in it; alternatively, you can copy the content from my GitHub repository here. This will be our sample Nodejs app: vim index.js: const express = require('express'); const app = express(); const port = 3000; const mysql = require('mysql'); const con = mysql.createConnection({ host: "my-Nodejs-database.cluster-cxxjkzcl1hwb.eu-west-3.rds.amazonaws.com", user: "admin", password: "admin1234" }); app.get('/status', (req, res) => res.send({status: "I'm up and running"})); app.listen(port, () => console.log(`Dockerized Nodejs Application is listening on port ${port}!`)); app.post('/insert', (req, res) => { if (req.query.username && req.query.email && req.query.age) { console.log('Received an insert call'); con.connect(function(err) { con.query(`INSERT INTO main.users (username, email, age) VALUES ('${req.query.username}', '${req.query.email}', '${req.query.age}')`, function(err, result, fields) { if (err) res.send(err); if (result) res.send({username: req.query.username, email: req.query.email, age: req.query.age}); if (fields) console.log(fields); }); }); } else { console.log('Something went wrong, missing a parameter'); } }); app.get('/list', (req, res) => { console.log('Received a list call'); con.connect(function(err) { con.query(`SELECT * FROM main.users`, function(err, result, fields) { if (err) res.send(err); if (result) res.send(result); }); }); }); In the above file, change the values of the following variables to the ones applicable to your RDS Amazon Aurora instance: host: (here) user: "admin" password: "admin1234" Create package.json and paste the following in it; alternatively, you can copy the content from my GitHub repository here: vim package.json: { "name": "Nodejs-docker", "version": "12.18.4", "description": "Nodejs on ec2 using docker container", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "Rahul Shivalkar", "license": "ISC" } Update the AWS Security Group To access the application, we need to add a rule in the “Security Group” to allow connections on port 3000. As I said earlier, we can access the application on port 3000, but it is not recommended. Keep reading to understand our recommendations: 1. Go to the “EC2 dashboard,” select the instance, switch to the “Security” tab, and then click on the “Security groups” link: 2. Select the “Inbound rules” tab and click on the “Edit inbound rules” button: 3. Add a new rule that will allow external connections from “MyIp” on port “3000”: Deploy the Node.js Server on the EC2 Server (Instance) Let’s build a Docker image from the code that we have: cd /home/ubuntu/Nodejs-docker docker build -t nodejs . 2. Start a container using the image we just built and expose it on port 3000: docker run --name nodejs -d -p 3000:3000 nodejs 3. You can see the container is running: docker ps 4. You can even check the logs of the container: docker logs nodejs Now we have our Nodejs app Docker container running.
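One caveat about the index.js above: it interpolates request parameters straight into the SQL string. The mysql driver supports '?' placeholders, which escape values for you. The toyFormat function below is only an illustration of the idea, not the driver's actual implementation:

```javascript
// Illustration only: why placeholder substitution beats string interpolation.
// With the real mysql driver you would instead write:
// con.query('INSERT INTO main.users (username, email, age) VALUES (?, ?, ?)',
//           [username, email, age], callback);
function toyFormat(sql, params) {
  let i = 0;
  // Replace each '?' with a quoted value whose embedded quotes are doubled,
  // so a hostile value can no longer terminate the SQL string.
  return sql.replace(/\?/g, () => `'${String(params[i++]).replace(/'/g, "''")}'`);
}

const hostile = "bob'); DROP TABLE users; --";
console.log(toyFormat('INSERT INTO users (username) VALUES (?)', [hostile]));
```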
Now, you can access the application from your browser on port 3000: Check the status of the application on the /status API using the browser: You can insert some data into the application via the /insert API using the Postman app with a POST request: You can list the data from your application by using the /list API from the browser: Alternatively, you can use the curl command from within the EC2 instance to check status, insert data, and list data: curl -XGET “here” curl -XPOST “here” Stop and remove the container: docker stop nodejs docker rm nodejs In this section, we tried to access the APIs available for the application directly using the Public IP:Port of the EC2 instance. However, exposing non-standard ports to the external world in the Security Group is not at all recommended. Also, we tried to access the application over the HTTP protocol, which means the communication that took place from the “Browser” to the “Application” was not secure and an attacker could read the network packets. To overcome this scenario, it is recommended to use Nginx. Nginx Setup Let’s create an Nginx conf that will be used within the Nginx container through a Docker volume. Create a file and copy the following content into the file; alternatively, you can copy the content from here as well: cd /home/ubuntu/Nodejs-docker mkdir nginx-conf vim nginx-conf/nginx.conf server { listen 80; listen [::]:80; location ~ /.well-known/acme-challenge { allow all; root /var/www/html; } location / { rewrite ^ https://$host$request_uri?
permanent; } } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name Nodejs.devopslee.com www.Nodejs.devopslee.com; server_tokens off; ssl_certificate /etc/letsencrypt/live/Nodejs.devopslee.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/Nodejs.devopslee.com/privkey.pem; ssl_buffer_size 8k; ssl_dhparam /etc/ssl/certs/dhparam-2048.pem; ssl_protocols TLSv1.2 TLSv1.1 TLSv1; ssl_prefer_server_ciphers on; ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5; ssl_ecdh_curve secp384r1; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; resolver 8.8.8.8; location / { try_files $uri @Nodejs; } location @Nodejs { proxy_pass http://Nodejs:3000; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-XSS-Protection "1; mode=block" always; add_header X-Content-Type-Options "nosniff" always; add_header Referrer-Policy "no-referrer-when-downgrade" always; add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always; } root /var/www/html; index index.html index.htm index.nginx-debian.html; } In the above file, make changes in the three lines mentioned below. Replace my subdomain.domain, i.e., Nodejs.devopslee, with the one that you want and have: server_name: (here) ssl_certificate: /etc/letsencrypt/live/Nodejs.devopslee.com/fullchain.pem; ssl_certificate_key: /etc/letsencrypt/live/Nodejs.devopslee.com/privkey.pem; Why do you need Nginx in front of the node.js service? Our Nodejs application runs on a non-standard port 3000. Nodejs provides a way to use HTTPS; however, configuring the protocol and managing SSL certificates that expire periodically within the application code base, is something we should not be concerned about. To overcome these scenarios, we need to have Nginx in front of it with an SSL termination and forward user requests to Nodejs. Nginx is a special type of web server that can act as a reverse proxy, load balancer, mail proxy, and HTTP cache. 
Here, we will be using Nginx as a reverse proxy to redirect requests to our Nodejs application and have SSL termination. Why not Apache? Apache is also a web server and can act as a reverse proxy. It also supports SSL termination; however, a few things differentiate Nginx from Apache, and for the following reasons Nginx is usually preferred over Apache. Let’s see them in short: Nginx has a single or a low number of processes and is asynchronous and event-based, whereas Apache tries to create new processes and new threads for every request in every connection. Nginx is lightweight, scalable, and easy to configure. On the other hand, Apache is great but has a higher barrier to learning. Docker-Compose Let’s install docker-compose, as we will need it: Download the current stable release of Docker Compose: sudo curl -L “(uname -s)-$(uname -m)” -o /usr/local/bin/docker-compose Apply executable permissions to the docker-compose binary we just downloaded in the above step: sudo chmod +x /usr/local/bin/docker-compose Test to see if the installation was successful by checking the docker-compose version: docker-compose --version Create a docker-compose.yaml file; alternatively, you can copy the content from my GitHub repository here. This will be used to spin up the Docker containers of the application tech stack we have: cd /home/ubuntu/Nodejs-docker vim docker-compose.yml version: '3' services: Nodejs: build: context: .
dockerfile: Dockerfile image: Nodejs container_name: Nodejs restart: unless-stopped networks: - app-network webserver: image: nginx:mainline-alpine container_name: webserver restart: unless-stopped ports: - "80:80" - "443:443" volumes: - web-root:/var/www/html - ./nginx-conf:/etc/nginx/conf.d - certbot-etc:/etc/letsencrypt - certbot-var:/var/lib/letsencrypt - dhparam:/etc/ssl/certs depends_on: - Nodejs networks: - app-network certbot: image: certbot/certbot container_name: certbot volumes: - certbot-etc:/etc/letsencrypt - certbot-var:/var/lib/letsencrypt - web-root:/var/www/html depends_on: - webserver command: certonly --webroot --webroot-path=/var/www/html --email my@email.com --agree-tos --no-eff-email --staging -d Nodejs.devopslee.com -d www.Nodejs.devopslee.com #command: certonly --webroot --webroot-path=/var/www/html --email my@email.com --agree-tos --no-eff-email --force-renewal -d Nodejs.devopslee.com -d www.Nodejs.devopslee.com volumes: certbot-etc: certbot-var: web-root: driver: local driver_opts: type: none device: /home/ubuntu/Nodejs-docker/views/ o: bind dhparam: driver: local driver_opts: type: none device: /home/ubuntu/Nodejs-docker/dhparam/ o: bind networks: app-network: driver: bridge In the above file, make changes in the line mentioned below. Replace my subdomain.domain, i.e., Nodejs.devopslee, with the one you want and have. Change my@email.com to your personal email: --email EMAIL: Email used for registration and recovery contact. command: certonly --webroot --webroot-path=/var/www/html --email my@email.com --agree-tos --no-eff-email --staging -d Nodejs.devopslee.com -d www.Nodejs.devopslee.com Update the AWS Security Groups This time, expose ports 80 and 443 in the security group attached to the EC2 instance.
Also, remove 3000 since it is not necessary because the application works through port 443: Include the DNS change Here, I have created a sub-domain “here” that will be used to access the sample Nodejs application using the domain name rather than accessing using an IP. You can create your sub-domain on AWS if you already have your domain: Create 2 “Type A Recordsets” in the hosted zone with a value as EC2 instances’ public IP. One Recordset will be “subdomain.domain.com” and the other will be “www.subdomain.domain.com.” Here, I have created “Nodejs.devopslee.com” and “www.Nodejs.devopslee.com,” both pointing to the Public IP of the EC2 instance. Note: I have not assigned any Elastic IP to the EC2 instance. It is recommended to assign an Elastic IP and then use it in the Recordset so that when you restart your EC2 instance, you don’t need to update the IP in the Recordset because public IPs change after the EC2 instance is restarted. Now, copy values of the “Type NS Recordset” we will need these in the next steps: Go to the “Hosted zone” of your domain and create a new “Record” with your “subdomain.domain.com” adding the NS values you copied in the previous step: Now, you have a sub-domain that you can use to access your application. In my case, I can use “Nodejs.devopslee.com” to access the Nodejs application. We are not done yet. Now, the next step is to secure our Nodejs web application. Include the SSL Certificate Let’s generate our key that will be used in Nginx: cd /home/ubuntu/Nodejs-docker mkdir views mkdir dhparam sudo openssl dhparam -out /home/ubuntu/Nodejs-docker/dhparam/dhparam-2048.pem 2048 Deploy Nodejs App to EC2 Instance We are all set to start our Nodejs app using docker-compose. This will start our Nodejs app on port 3000, Nginx with SSL on port 80 and 443. Nginx will redirect requests to the Nodejs app when accessed using the domain. It will also have a Certbot client that will enable us to obtain our certificates. 
```
docker-compose up
```

After you run the above command, you will see some output; it should end with a "Successfully received certificates" message.

Note: The above docker-compose command starts the containers and stays attached to the terminal; we have not used the -d option to detach it.

You are all set: hit the URL in the browser and you should have your Nodejs application available on HTTPS. You can also try hitting the application using the curl command:

- List the data from the application: curl (here)
- Insert an entry in the application: curl -XPOST (here)
- Again, list the data to verify that it has been inserted: curl (here)
- Check the status of the application: (here)
- Hit the URL in the browser to get a list of entries in the database: (here)

Auto-Renewal of SSL Certificates

Certificates we generate using Let's Encrypt are valid for 90 days, so we need a way to renew them automatically so that we don't end up with expired certificates. To automate this process, let's create a script that renews the certificates for us and a cronjob to schedule its execution.

1. Create a script with --dry-run to test our script:

```
vim renew-cert.sh
```

```
#!/bin/bash
COMPOSE="/usr/local/bin/docker-compose --no-ansi"
DOCKER="/usr/bin/docker"
cd /home/ubuntu/Nodejs-docker/
$COMPOSE run certbot renew --dry-run && $COMPOSE kill -s SIGHUP webserver
$DOCKER system prune -af
```

2. Change the permissions of the script to make it executable:

```
chmod 774 renew-cert.sh
```

3. Create a cronjob:

```
sudo crontab -e
*/5 * * * * /home/ubuntu/Nodejs-docker/renew-cert.sh >> /var/log/cron.log 2>&1
```

4. List the cronjobs:

```
sudo crontab -l
```

5. Check the logs of the cronjob after five minutes, as we have set the cronjob to execute every fifth minute:

```
tail -f /var/log/cron.log
```

In the log output you will see a "Simulating renewal of an existing certificate..." message. This is because we have specified the --dry-run option in the script.
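Certbot itself only renews certificates that are close to expiry, which is why the cronjob can safely run every five minutes without hammering Let's Encrypt. The decision it makes can be sketched like this; the 30-day threshold mirrors Certbot's default behavior, and the function is an illustration, not Certbot's actual code:

```javascript
// Decide whether a certificate is due for renewal. Let's Encrypt
// certificates are valid for 90 days; Certbot renews when roughly
// 30 days or fewer remain (the threshold here is an assumption).
const dueForRenewal = (notAfter, now = new Date(), thresholdDays = 30) => {
  const daysLeft = (new Date(notAfter) - now) / (1000 * 60 * 60 * 24);
  return daysLeft <= thresholdDays;
};
```

Most invocations of the script therefore end with "Certificates not yet due for renewal," and only the runs near the end of the 90-day window actually renew anything.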
Let's remove the --dry-run option from the script:

```
vim renew-cert.sh
```

```
#!/bin/bash
COMPOSE="/usr/local/bin/docker-compose --no-ansi"
DOCKER="/usr/bin/docker"
cd /home/ubuntu/Nodejs-docker/
$COMPOSE run certbot renew && $COMPOSE kill -s SIGHUP webserver
$DOCKER system prune -af
```

This time you won't see the "Simulating renewal of an existing certificate..." message. Instead, the script will check whether the certificates need to be renewed: if so, it renews them; otherwise it reports "Certificates not yet due for renewal."

What Is Next on How To Deploy the Nodejs App to AWS?

We are done with setting up our Nodejs application using Docker on an AWS EC2 instance; however, there are other things that come into the picture when you want to deploy a highly available application for production and other environments. The next step is to use an orchestrator, like ECS or EKS, to manage our Nodejs application at the production level. Replication, auto-scaling, load balancing, traffic routing, and monitoring container health do not come out of the box with Docker and Docker Compose. For managing containers and a microservices architecture at scale, you need a container orchestration tool like ECS or EKS.

Also, we did not use any Docker repository to store our Nodejs app Docker image. You can use AWS ECR, a fully managed AWS container registry offering high-performance hosting.

Conclusion

Deploying a Nodejs app to AWS does not mean just creating a Nodejs application and deploying it on an AWS EC2 instance with a self-managed database. There are various aspects, like containerizing the Nodejs app, SSL termination, and a domain for the app, that come into the picture when you want to speed up your software development, deployment, security, reliability, and data redundancy. In this article, we saw the steps to dockerize the sample Nodejs application, use AWS RDS Amazon Aurora, and deploy a Nodejs app to an EC2 instance using Docker and Docker Compose.
We enabled SSL termination for our sub-domain, which is used to access the Nodejs application. We saw the steps to automate domain validation and SSL certificate creation using Certbot, along with a way to automate the renewal of certificates that are valid for 90 days. This is enough to get started with a sample Nodejs application; however, when it comes to managing your real-time applications, hundreds of microservices, thousands of containers, volumes, networking, secrets, and egress-ingress, you need a container orchestration tool. There are various tools, like self-hosted Kubernetes, AWS ECS, and AWS EKS, that you can leverage to manage the container life cycle in your real-world applications.
Hi there, my name is Rahul, and I am 18 years old, learning development and designing sometimes. Today, I'd like to share some useful JavaScript code snippets I have saved that I think can help make your life as a developer easier. Let's get started!

Generate a random number between two values:

```javascript
const randomNumber = Math.random() * (max - min) + min;
```

Check if a number is an integer:

```javascript
const isInteger = (num) => num % 1 === 0;
```

Check if a value is null or undefined:

```javascript
const isNil = (value) => value === null || value === undefined;
```

Check if a value is truthy:

```javascript
const isTruthy = (value) => !!value;
```

Check if a value is falsy:

```javascript
const isFalsy = (value) => !value;
```

Check if a value is a valid credit card number:

```javascript
const isCreditCard = (cc) => {
  const regex = /(?:4[0-9]{12}(?:[0-9]{3})?|[25][1-7][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})/;
  return regex.test(cc);
}
```

Check if a value is an object:

```javascript
const isObject = (obj) => obj === Object(obj);
```

Check if a value is a function:

```javascript
const isFunction = (fn) => typeof fn === 'function';
```

Remove duplicates from an array:

```javascript
const removeDuplicates = (arr) => [...new Set(arr)];
```

Check if a value is a promise:

```javascript
const isPromise = (promise) => promise instanceof Promise;
```

Check if a value is a valid email address:

```javascript
const isEmail = (email) => {
  const regex = /(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))/;
  return regex.test(email);
}
```

Check if a string ends with a given suffix:

```javascript
const endsWith = (str, suffix) => str.endsWith(suffix);
```

Check if a string starts with a given prefix:

```javascript
const startsWith = (str, prefix) => str.startsWith(prefix);
```

Check if a value is a valid URL:

```javascript
const isURL = (url) => {
  const regex = /(?:http(s)?:\/\/)?[\w.-]+(?:\.[\w\.-]+)+[\w\-\._~:/?#[\]@!\$&'\(\)\*\+,;=.]+/;
  return regex.test(url);
}
```

Check if a value is a valid hexadecimal color code:

```javascript
const isHexColor = (hex) => {
  const regex = /#?([0-9A-Fa-f]{6}|[0-9A-Fa-f]{3})/;
  return regex.test(hex);
}
```

Check if a value is a valid postal code:

```javascript
const isPostalCode = (postalCode, countryCode) => {
  if (countryCode === 'US') {
    const regex = /[0-9]{5}(?:-[0-9]{4})?/;
    return regex.test(postalCode);
  } else if (countryCode === 'CA') {
    const regex = /[ABCEGHJKLMNPRSTVXY][0-9][ABCEGHJKLMNPRSTVWXYZ] [0-9][ABCEGHJKLMNPRSTVWXYZ][0-9]/;
    return regex.test(postalCode.toUpperCase());
  } else {
    // Add regex for other country codes as needed
    return false;
  }
}
```

Check if a value is a DOM element:

```javascript
const isDOMElement = (value) =>
  typeof value === 'object' &&
  value.nodeType === 1 &&
  typeof value.style === 'object' &&
  typeof value.ownerDocument === 'object';
```

Check if a value is a valid CSS length (e.g. 10px, 1em, 50%):

```javascript
const isCSSLength = (value) => /([-+]?[\d.]+)(%|[a-z]{1,2})/.test(String(value));
```

Check if a value is a valid date string (e.g. 2022-09-01, September 1, 2022, 9/1/2022):

```javascript
const isDateString = (value) => !isNaN(Date.parse(value));
```

Check if a value is a safe integer (an integer that can be accurately represented in JavaScript):

```javascript
const isSafeInteger = (num) => Number.isSafeInteger(num);
```

Check if a value is a valid crypto address:

```javascript
// Ethereum
const isEthereumAddress = (address) => {
  const regex = /0x[a-fA-F0-9]{40}/;
  return regex.test(address);
}

// Bitcoin
const isBitcoinAddress = (address) => {
  const regex = /[13][a-km-zA-HJ-NP-Z0-9]{25,34}/;
  return regex.test(address);
}

// Ripple
const isRippleAddress = (address) => {
  const regex = /r[0-9a-zA-Z]{33}/;
  return regex.test(address);
}
```

Check if a value is a valid RGB color code:

```javascript
const isRGBColor = (rgb) => {
  const regex = /rgb\(\s*([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\s*,\s*([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\s*,\s*([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\s*\)/;
  return regex.test(rgb);
}
```
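One caveat worth knowing about the validators above: their regexes are unanchored, so they match a valid token anywhere inside a longer string. Anchoring with ^ and $ makes a check strict; here is a sketch using the hex-color pattern from above:

```javascript
// Unanchored (as written above): matches a color embedded in a longer string.
const isHexColorLoose = (hex) => /#?([0-9A-Fa-f]{6}|[0-9A-Fa-f]{3})/.test(hex);

// Anchored: the whole string must be a color code.
const isHexColorStrict = (hex) => /^#?([0-9A-Fa-f]{6}|[0-9A-Fa-f]{3})$/.test(hex);

console.log(isHexColorLoose('border: #fff;'));  // true
console.log(isHexColorStrict('border: #fff;')); // false
console.log(isHexColorStrict('#a1b2c3'));       // true
```

The same ^...$ treatment applies to the email, URL, postal-code, and crypto-address checks if you need them to validate the entire input rather than find a match inside it.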
Quickly create an array of characters from a string:

```javascript
const string = "abcdefg";
const array = [...string];
```

Quickly create an object with all the values of another object but with a different key for each property:

```javascript
const original = {a: 1, b: 2, c: 3};
// Builds {A: 1, B: 2, C: 3} by remapping each key to its uppercase form.
const mapped = Object.keys(original).reduce((obj, key) => ({...obj, [key.toUpperCase()]: original[key]}), {});
```

Quickly create an array of numbers from 1 to 10:

```javascript
const array = [...Array(10).keys()].map(i => i + 1);
```

Quickly shuffle an array:

```javascript
// Note: a random sort comparator is a quick trick, not a uniform shuffle.
const shuffle = (array) => array.sort(() => Math.random() - 0.5);
```

Convert an array-like object (such as a NodeList) to an array:

```javascript
const toArray = (arrayLike) => Array.prototype.slice.call(arrayLike);
```

Sort arrays:

```javascript
// Ascending
const sortAscending = (array) => array.sort((a, b) => a - b);

// Descending
const sortDescending = (array) => array.sort((a, b) => b - a);
```

Debounce a function:

```javascript
const debounce = (fn, time) => {
  let timeout;
  return function(...args) {
    clearTimeout(timeout);
    timeout = setTimeout(() => fn.apply(this, args), time);
  };
};
```

Open a new tab with a given URL:

```javascript
const openTab = (url) => {
  window.open(url, "_blank");
};
```

Get the difference between two dates:

```javascript
const dateDiff = (date1, date2) => Math.abs(new Date(date1) - new Date(date2));
```

Generate a random string of a given length:

```javascript
const randomString = (length) => {
  let result = "";
  const characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  for (let i = 0; i < length; i++) {
    result += characters.charAt(Math.floor(Math.random() * characters.length));
  }
  return result;
};
```

Get the value of a cookie:

```javascript
const getCookie = (name) => {
  const value = `; ${document.cookie}`;
  const parts = value.split(`; ${name}=`);
  if (parts.length === 2) return parts.pop().split(";").shift();
};
```

Thank you for reading.
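As a quick usage sketch for the debounce helper above (its definition is repeated here so the snippet runs on its own), several rapid calls collapse into a single invocation after the delay:

```javascript
const debounce = (fn, time) => {
  let timeout;
  return function(...args) {
    clearTimeout(timeout);
    timeout = setTimeout(() => fn.apply(this, args), time);
  };
};

let calls = 0;
const onResize = debounce(() => { calls += 1; }, 50);

// Three rapid calls; only the last one survives the 50 ms window.
onResize(); onResize(); onResize();
setTimeout(() => console.log(calls), 100); // logs 1
```

This is the typical pattern for rate-limiting handlers of chatty events such as resize, scroll, or keystrokes.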
It is important to note that simply copying and pasting code without understanding how it works can lead to problems down the line. It is always a good idea to test the code and ensure that it functions properly in the context of your project. Also, don't be afraid to customize the code to fit your needs. As a helpful tip, consider saving a collection of useful code snippets for quick reference in the future. I am also learning, so if I am going wrong somewhere, let me know.