If you're developing applications for a business, then one of your most important tasks is collecting payment for goods or services. Sure, providing those goods or services is essential to keeping customers happy. But if you don't collect payments, your business won't be around for very long. In the dev world, when we talk about infrastructure, we often consider the resiliency of our servers and APIs. We don't talk about payment processing in the same way. But we should. Payment processing is something companies take for granted as long as it's working smoothly. Once they've put some sort of solution in place, the cash starts flowing. Then, they forget about it until they encounter issues with their payment processor, or need to expand into a new region. With steady cash flows being essential to so many businesses, it's worth thinking about resiliency for this critical piece of your business operations. In this post, we'll look at a few reasons businesses should put time into improving the resiliency of their payment processing, and how to approach this problem from a technical perspective. Why Bother Building Resiliency Into Payments? If your company is like most others, you've probably been using a single payment processor for a while. In that case, you might ask: Why should I build more resiliency than my processor already has in place? After all, that's why you pay them their processing fees. It's up to them to make sure things work properly. Even if you set aside the present resiliency of whatever payment processor you're using, you'll still find many benefits in adding more processing options to your application. Of course, this isn't possible if all of your customer PCI data is stored with a single payment processor, so you'll need technical solutions that allow you to work with multiple payment processors without increasing your PCI compliance scope. Potential Cost Benefits If you have only a single payment processor, you're stuck paying whatever fees they charge you for the transactions you send their way. If you have multiple processors in place, you can route payments to whichever service charges the lowest transaction cost and has the highest authorization rates. Maybe one processor has better pricing on higher volumes of transactions, but a different one has better rates for high-amount transactions. In this situation, you could send the majority of your customers' purchases through the higher-volume processor, but send large transactions through the processor that gives you better rates based on the individual payment amount. It's a great way to boost profits without passing along costs to your customers. Overcoming Geographical or Regional Restrictions Certain payment processors may be constrained by geographical restrictions, letting you process payments only from specific countries. If you're seeking to expand your business into other markets, you'll encounter less friction if you already have several options on hand. This way, you can route customer payments to specific processors based on their location. You may also benefit from different processing costs across regions, finding further savings in those differences.
Greater Control Over the Process Another benefit of adding more payment processing services to your stack is that you gain greater control over the details of how your payments are processed. By controlling how payments are processed in your systems, you can run more analytics to better understand the types of purchases your customers are making. With these insights, you can make even better decisions about which processors should receive your transactions. Greater control also means that you can provide a better customer experience when any one of your payment processors experiences an outage. If you only have a single payment processor and it experiences an outage, then you'll be unable to accept payments, and you'll be scrambling to find a workaround. And for companies that make most of their annual business during a few key days—like many US retailers who rely on Black Friday shopping—such an outage can be disastrous. If your business already has control over your payments stack, then you can design your system to fail over automatically if transaction decline rates increase. Along with protecting your sales, you'll also benefit your customers by ensuring that they have a seamless payments experience, even if you're rerouting payments to a backup processor behind the scenes. With payment processing resiliency, your customers will experience no problems even as you fail over to another payment processor. How Do You Build for Payment Processing Resiliency? By now, you're probably thinking: How is this even possible? Doesn't PCI compliance require passing your customers' payment information straight from your purchase page to your payment processor, bypassing any systems that aren't PCI compliant? At the very least, wouldn't introducing this type of resiliency widen the scope of compliance, causing headaches for any business that has offloaded PCI compliance to a single payment processor? If you don't have the right technology, then yes — you definitely could end up increasing how much of your infrastructure falls under the scope of PCI compliance. Fortunately, an architectural approach built around a data privacy vault provides business flexibility without adding to your PCI compliance scope. Without a good solution for keeping customer financial data safe, your hands are tied: instead of enjoying true payment processing resiliency, you have to hope that your current payment processor is resilient. But there's a better way. A well-designed data privacy vault can unlock all of the benefits described above. A data privacy vault lets you isolate, protect, and govern any type of sensitive information, including PCI data—while still remaining fully PCI compliant and without increasing your PCI compliance scope. Instead of introducing greater risk, data privacy vaults significantly reduce the risk of using PCI data to process payments and help you ensure PCI compliance across your business applications. By enabling you to completely separate sensitive data — not just financial information, but also PHI and PII — from the rest of the transactional data in your systems, a data privacy vault gives your sensitive PCI data an extra layer of protection while easing compliance with data privacy regulations.
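Before looking at a concrete product example, here is a minimal sketch of the routing-with-failover idea described above, written in Python. The processor names, fee values, and decline-rate threshold are illustrative assumptions, and the charge method stands in for a real processor API call that would receive a vault token rather than raw card data.

Python
# Minimal sketch: cost-based routing with automatic failover across
# two hypothetical payment processors. All names, fees, and thresholds
# are illustrative assumptions, not a real processor API.
class ProcessorUnavailable(Exception):
    pass

class Processor:
    def __init__(self, name, fee_bps, decline_threshold=0.25):
        self.name = name
        self.fee_bps = fee_bps            # fee in basis points per transaction
        self.decline_threshold = decline_threshold
        self.attempts = 0
        self.declines = 0

    def healthy(self):
        # Treat a processor as unhealthy once its observed decline rate
        # rises above the configured threshold (after a warm-up period).
        if self.attempts < 20:
            return True
        return (self.declines / self.attempts) <= self.decline_threshold

    def charge(self, card_token, amount_cents):
        # Stand-in for the real network call: a production version would
        # hand the vault token to this processor's API and raise
        # ProcessorUnavailable on outages or declines.
        self.attempts += 1
        return {"processor": self.name,
                "amount_cents": amount_cents,
                "fee_cents": amount_cents * self.fee_bps // 10_000}

def route_payment(processors, card_token, amount_cents):
    # Prefer the cheapest healthy processor; fail over to the next one
    # if a charge attempt fails.
    for processor in sorted(processors, key=lambda p: p.fee_bps):
        if not processor.healthy():
            continue
        try:
            return processor.charge(card_token, amount_cents)
        except ProcessorUnavailable:
            processor.declines += 1
    raise RuntimeError("No healthy payment processor available")

processors = [Processor("processor_a", fee_bps=290),
              Processor("processor_b", fee_bps=250)]
print(route_payment(processors, card_token="tok_abc123", amount_cents=5_000))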
An Example Implementation What does it look like to implement this technology in your systems? We'll outline one example, from Skyflow (described in more detail here), looking at some of their diagrams to illustrate how this works. To start with, let's consider what a single credit card transaction looks like. How PCI Tokenization Works for Card Transactions From the outset, when a merchant seeks to carry out a transaction with a credit card, the credit card data it sends to the processor is already tokenized. This point is significant as it relates to introducing other processors, since it means that you can store tokens in your systems rather than sensitive, plaintext PCI data. With these tokens, you can reference the PCI data that's stored in the vault — credit card details like PANs and expiration dates. Based on the routing logic for your application, you can send the PCI data to the appropriate processor at the appropriate time. High-level Architecture for Using Multiple Payment Gateways with Skyflow By using a data privacy vault to store PCI data, you no longer need to store sensitive information within your own infrastructure or bet your business on a single payment processor storing PCI data on your behalf. One company offering this type of payment resiliency, as well as data residency, is Apaya, a merchant-enabling payment automation platform based in Dubai. When Apaya set out to build for payment processing resiliency, it leaned heavily on data privacy vaults to get the job done. Conclusion As a software developer, you build business applications that depend on payments to keep the faucet running. That means that heavy dependence on one payment processor introduces a single point of failure that could cripple your business if an issue arises. For this reason, many enterprises are building resiliency into their payment processing, leveraging multiple payment processors. Of course, building in this flexibility without adding to your PCI compliance burden is only possible when companies leverage tools like a data privacy vault. Designing your systems to isolate sensitive data and ease compliance with a data privacy vault is good design, and good for business.
As IoT becomes increasingly common in our lives in areas such as healthcare, smart homes, smart cities, and self-driving vehicles, the security of these devices becomes more important. Not only do we need to protect the data that all these billions of devices are sending, but we also need to be concerned with the safety of the individuals using them. An intruder who hacks into an IoT system can potentially cause serious physical harm to humans. Therefore, IoT security has become an unavoidable topic in IoT development. Why Is Security So Important for IoT Systems? We hear in the news about hackers who exploit vulnerabilities in IoT systems such as children's smart toys. In one case, an intruder was able to gain access to a toy's camera, speakers, and microphone and spy on the child. In another instance, attackers were able to hack into a pacemaker, manipulate the heart rate, and drain the battery, potentially causing serious harm to the patient. The reason these IoT systems were compromised was a lack of security. Weak passwords and no encryption made it easier for intruders to compromise these systems. If security measures had been followed, the likelihood of these intrusions occurring would have been reduced. It is easy to see how security can be ignored. The system works just fine without security, so why bother with it? Besides, once the system is designed, tested, and working, you want to get the product to market, right? However, ignoring security in your IoT system can be short-sighted and costly. For your clients, it could mean loss of property, loss of data privacy, and, in the worst case, loss of personal safety or even life. For your company, it could mean the cost of a product recall, possible legal costs, and loss of brand reputation and trust. These are all consequences that could be avoided or mitigated with a few simple up-front security measures. Common Security Risks in IoT Networks Here are some common security risks to keep in mind when building an IoT network. Insufficient authentication and authorization mechanisms: IoT devices with weak or no authentication mechanisms are vulnerable to unauthorized access. It is important not only to control device access but also to control what a device is allowed to do once it is connected to the network. Weak passwords: Some vendors may use the same password for the same device model. Other vendors may use weak passwords that are easy to guess, such as “admin” or “password”. As we will show in a later article, the most sophisticated encryption algorithms cannot overcome a password that is easily guessable, such as a password that is the same as the username. Weak passwords make it easy for attackers to gain access to the device and its data. IoT vendors should enforce strong password policies and require users to change default passwords. Insecure communication protocols: Sending data in plaintext over bare TCP instead of using TLS makes it easy for attackers to intercept IoT communications. In a man-in-the-middle attack, for example, the attacker can eavesdrop on IoT traffic to gather passwords, health information, and other personal data. Lack of user training: IoT vendors may not provide proper security awareness training to their users. This leaves the uneducated user vulnerable to attacks. It is important that IoT vendors provide proper security training to their users.
Denial of Service (DoS) Attacks: The IoT network can be vulnerable to DoS or distributed DoS attacks, where a large number of devices are used to exploit software defects or simply flood a network with malicious traffic. To prevent such attacks, IoT networks need to have robust security measures in place, including firewalls, intrusion detection and prevention systems, and access controls. Additionally, IoT networks should be designed to be resilient, with the ability to automatically detect and mitigate attacks without requiring significant administrative effort. IoT vendors need to prioritize security in every phase of their IoT design, from the initial design phase through deployment and post-sales support, to ensure that their devices are secure and resilient to attacks. What Can We Do in MQTT to Secure Our IoT Systems? There are several aspects of security that we need to consider when building an IoT system. They can be broken down by the protocol layers where they reside: the networking layer, the transport layer, and the application layer. Networking layer: MQTT runs over IP networks, so networking-layer security best practices all apply to MQTT. Namely, the proper use of firewalls, VPNs, and IPsec helps prevent intruders from accessing the data on the IoT network. If you are new to MQTT, you may refer to the following blogs for a quick understanding: What is the MQTT and Why is it the Best Protocol for IoT? The Easiest Guide to Getting Started with MQTT Transport layer: At the transport layer, we do not recommend sending plaintext data directly through protocols such as TCP or WebSocket. Doing so can render application-layer security mechanisms useless: if an intruder can read traffic at the transport layer, sensitive data such as the usernames and passwords used for application-layer authentication are exposed directly. It's better to protect our data in transit with TLS. In addition to turning data into ciphertext that is difficult to crack, TLS provides further protections, such as allowing the client to verify the server's identity. When the client is required to present a certificate, the server can also confirm that the client is legitimate. This effectively prevents man-in-the-middle attacks. Application layer: Although it may seem that the transport layer provides enough protection, not all systems support TLS. The MQTT protocol running at the application layer also supports password authentication and token authentication through the username and password fields, ensuring that only legitimate devices can access the MQTT broker. MQTT 5.0 also introduces an enhanced authentication mechanism to provide two-way identity confirmation. The security mechanism at the application layer is usually the last layer of the security guarantee: in addition to verifying the identity of the accessor, we should also control the operations each client can perform, such as which topics it can publish messages to and from which topics it can consume messages.
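To make the transport-layer and application-layer measures above concrete, here is a minimal sketch using the paho-mqtt 1.x Python client. The broker host, CA certificate path, and credentials are placeholder assumptions, not values from a real deployment.

Python
# Minimal sketch: connecting to an MQTT broker over TLS (transport layer)
# with username/password authentication (application layer).
# Host, CA file, and credentials are placeholders.
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-001")

# Transport layer: encrypt the connection and verify the broker's
# certificate against a trusted CA.
client.tls_set(ca_certs="ca.crt")

# Application layer: authenticate this client to the broker.
client.username_pw_set("device_user", "device_password")

client.connect("broker.example.com", port=8883)  # 8883 = MQTT over TLS
client.publish("sensors/temperature", payload="21.5", qos=1)
client.disconnect()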
Summary All in all, MQTT security is a crucial factor in protecting IoT systems from various attacks and threats. This article has provided a comprehensive overview of MQTT security, covering the importance of MQTT security and some common security challenges that developers and system administrators may encounter. Also, as mentioned in the article, there are many measures we can implement with MQTT to enhance the security of our IoT systems and devices. In the coming article, I'll introduce the password-based authentication method. Please stay tuned.
In today's digital age, organizations rely on a variety of applications and systems to carry out their business operations. However, managing user identities and access across multiple systems can be a complex and time-consuming process. This is where identity federation comes in, offering a solution to simplify authentication and authorization across systems. Identity federation is a mechanism that allows different identity management systems to share authentication and authorization information in a secure and standardized way. It enables users to access multiple applications or systems using a single set of credentials without having to sign in to each individual system separately. This not only simplifies the user experience but also reduces the administrative burden of managing user identities and access across multiple systems. What Is Identity Federation? Identity federation is a system that enables the sharing of authentication and authorization data between different identity management systems. It allows users to authenticate once and then access multiple applications or systems without needing to sign in to each one separately. The process is made possible by a central identity provider (IdP) that manages and stores user identity information, including authentication credentials. When a user attempts to access a federated application or system, the IdP authenticates the user and issues a security token containing information about the user's identity and permissions. The user then presents this token to the application or system to gain access. How Does Identity Federation Work? Identity federation works by establishing trust relationships between different systems. This is achieved by using standard protocols, such as Security Assertion Markup Language (SAML) or OpenID Connect, which define how information is exchanged between systems. The process typically involves the following steps: The user attempts to access a federated application or system. The application or system redirects the user to the IdP for authentication. The IdP authenticates the user and issues a security token containing information about the user's identity and permissions. The user presents the security token to the application or system to gain access. Identity federation can be implemented using various standards, such as SAML, OpenID Connect, and OAuth. SAML is one of the most widely used standards for identity federation, providing a framework for exchanging authentication and authorization data between different systems. OAuth and OpenID Connect are newer standards: OAuth provides delegated authorization, and OpenID Connect builds an identity layer on top of OAuth 2.0, adding features such as user profile information.
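To make the redirect-and-token flow above concrete, here is a minimal sketch of the OpenID Connect authorization code flow in Python. The IdP endpoints, client ID, secret, and redirect URI are hypothetical placeholders, not a real provider's values.

Python
# Minimal sketch of the OpenID Connect authorization code flow.
# All endpoints and credentials are hypothetical placeholders.
import secrets
import urllib.parse

import requests  # third-party HTTP library

IDP_AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"
IDP_TOKEN_URL = "https://idp.example.com/oauth2/token"
CLIENT_ID = "my-federated-app"
CLIENT_SECRET = "app-secret"           # keep out of source control in practice
REDIRECT_URI = "https://app.example.com/callback"

def build_login_redirect():
    """Step 2: the application redirects the user to the IdP."""
    state = secrets.token_urlsafe(16)  # anti-CSRF value, verified on callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile email",
        "state": state,
    }
    return f"{IDP_AUTHORIZE_URL}?{urllib.parse.urlencode(params)}", state

def exchange_code_for_tokens(code: str) -> dict:
    """Steps 3-4: after the IdP authenticates the user, the application
    swaps the one-time authorization code for an ID token (the user's
    identity) and an access token (their permissions)."""
    response = requests.post(IDP_TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    response.raise_for_status()
    return response.json()  # contains id_token and access_token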
Benefits of Identity Federation Identity federation offers numerous benefits to organizations. First, it simplifies the user experience by reducing the number of logins required to access multiple systems. This not only saves time but also enhances user productivity and satisfaction. Second, it improves security by centralizing authentication and authorization, reducing the risk of password-based attacks such as phishing and credential stuffing. Third, it streamlines administration by reducing the need to manage user identities and access across multiple systems. Simplifies User Access Management: Identity federation eliminates the need for users to remember multiple sets of login credentials for different applications, which can significantly reduce password fatigue and helpdesk calls related to password resets. Enhances Security: Identity federation improves security by enabling centralized control over access management. This means that administrators can quickly revoke access permissions when a user leaves the organization or if their role changes. Increases Productivity: Identity federation simplifies the login process, which can improve productivity by reducing the time users spend logging in to different applications and systems. Reduces Administrative Overhead: Identity federation can reduce the administrative overhead of managing user access across multiple systems. This can free up IT resources to focus on other critical tasks. Identity federation is particularly beneficial in multi-cloud environments where organizations use multiple cloud services from different providers. Each cloud service may have its own identity management system, which can make it difficult to manage user identities and access across multiple services. Identity federation enables organizations to use a single identity management system across multiple cloud services, simplifying administration and enhancing security. Implementing identity federation requires careful planning and coordination among different systems and stakeholders. Organizations must ensure that the systems they want to federate support the same standards and protocols. They must also establish trust relationships between the IdP and other systems to ensure the secure exchange of authentication and authorization data. Finally, they must ensure that proper controls and monitoring are in place to detect and prevent unauthorized access. Key Considerations for Identity Federation Compatibility: Organizations must ensure that their applications and systems are compatible with the identity federation protocol they plan to use. This may require upgrading or configuring the systems to support the protocol. Trust Relationships: Establishing trust relationships between different systems requires careful planning and implementation. Organizations must ensure that the trust relationships are secure and that access to sensitive information is appropriately controlled. Governance: Identity federation requires a robust governance framework to ensure that access management policies are consistent across all applications and systems. This includes monitoring and auditing access activity to identify and remediate any security threats. Conclusion In conclusion, identity federation is a powerful mechanism that simplifies authentication and authorization across systems. It offers numerous benefits, including improved user experience, enhanced security, and streamlined administration. As organizations continue to rely on multiple systems to carry out their business operations, identity federation will become increasingly important to ensure the efficient and secure management of user identities and access. Identity federation is a powerful tool that can simplify access management across multiple systems while enhancing security and productivity.
It is a critical capability for organizations that rely on multiple applications and systems and is increasingly essential in a world of cloud computing and remote work. Organizations must carefully consider the benefits and key considerations of identity federation to ensure successful implementation and ongoing management.
Authentication is the process of identifying a user and verifying that they have access to a system or server. It is a security measure that protects the system from unauthorized access and guarantees that only valid users are using the system. Given the expansive nature of the IoT industry, it is crucial to verify the identity of those seeking access to its infrastructure. Unauthorized entry poses significant security threats and must be prevented. That's why IoT developers should possess a comprehensive understanding of the various authentication methods. Today, I'll explain how authentication works in MQTT, what security risks it solves, and introduce the first authentication method: password-based authentication. What Is Authentication in MQTT? Authentication in MQTT refers to the process of verifying the identity of a client or a broker before allowing them to establish a connection or interact with the MQTT network. It is only about the right to connect to the broker and is separate from authorization, which determines which topics a client is allowed to publish and subscribe to. Authorization will be discussed in a separate article in this series. The MQTT broker can authenticate clients mainly in the following ways: Password-based authentication: The broker verifies that the client has the correct connection credentials: username, client ID, and password. The broker can verify either the username or the client ID against the password. Enhanced authentication (SCRAM): This authenticates clients using a back-and-forth challenge-response mechanism known as the Salted Challenge Response Authentication Mechanism. Other methods include token-based authentication (such as JWT), HTTP hooks, and more. In this article, we will focus on password-based authentication. Password-Based Authentication Password-based authentication aims to determine whether the connecting party is legitimate by verifying that it has the correct password credentials. In MQTT, password-based authentication generally refers to using a username and password to authenticate clients, which is the recommended approach. However, in some scenarios, a client may not carry a username, so the client ID can also be used as a unique identifier to represent its identity. When an MQTT client connects to the broker, it sends its username and password in the CONNECT packet. The example below shows a Wireshark capture of the CONNECT packet for a client with the corresponding values of client1, user, and MySecretPassword. After the broker gets the username (or client ID) and password from the CONNECT packet, it looks up the previously stored credentials in the corresponding database according to the username, and then compares them with the password provided by the client. If the username is not found in the database, or the password does not match the credentials in the database, the broker will reject the client's connection request. This diagram shows a broker using PostgreSQL to authenticate the client's username and password. Password-based authentication addresses one key security risk: clients that do not hold the correct credentials (username and password) will not be able to connect to the broker. However, as you can see in the Wireshark capture, a hacker with access to the communication channel can easily sniff the packets and see the connection credentials, because everything is in plaintext. We will see in a later article in this series how to solve this problem using TLS (Transport Layer Security).
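For reference, here is a minimal sketch of a client supplying these credentials in the CONNECT packet, using the paho-mqtt 1.x Python client. The broker address is a placeholder; the client ID, username, and password mirror the capture values mentioned above.

Python
# Minimal sketch: an MQTT client sending username/password credentials
# in the CONNECT packet. Broker address is a placeholder.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # rc == 0: broker accepted the credentials;
    # rc == 5: not authorized (bad username/password).
    print("Connected with result code", rc)

client = mqtt.Client(client_id="client1")
client.username_pw_set("user", "MySecretPassword")
client.on_connect = on_connect

# Port 1883 is plain TCP: these credentials travel in plaintext, which is
# exactly the sniffing risk described above. Production systems should use
# TLS on port 8883 instead.
client.connect("broker.example.com", port=1883)
client.loop_forever()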
Secure Your Passwords With Salt and Hash Storing passwords in plaintext is not considered a secure practice because it leaves passwords vulnerable to attacks. If an attacker gains access to a password database or file, they can easily read and use the passwords to gain unauthorized access to the system. To prevent this from happening, passwords should instead be stored in a hashed and salted format. What is a hash? It is a function that takes some input data, applies a mathematical algorithm to the data, and then generates an output that looks like complete nonsense. The idea is to obfuscate the original input data; the function should also be one-way, meaning there is no way to calculate the input given the output. However, hashes by themselves are not secure and can be vulnerable to dictionary attacks, as shown in the following example. Consider this sha256 hash: 8f0e2f76e22b43e2855189877e7dc1e1e7d98c226c95db247cd1d547928334a9 It looks secure; you cannot tell what the password is by looking at it. However, the problem is that for a given password, the hash always produces the same result. So, it is easy to create a database of common passwords and their hash values. A hacker could look up this hash in an online hash database and learn that the password is passw0rd. "Salting" a password solves this problem. A salt is a random string of characters that is added to the password before hashing. This makes each password hash unique, even if the passwords themselves are the same. The salt value is stored alongside the hashed password in the database. When a user logs in, the salt is added to their password, and the resulting hash is compared to the hash stored in the database. If the hashes match, the user is granted access. For example, with a salt value of az34ty1, sha256(passw0rdaz34ty1) is 6be5b74fa9a7bd0c496867919f3bb93406e21b0f7e1dab64c038769ef578419d This is unlikely to be in a hash database, since that would require a large number of hash entries just for the single plaintext passw0rd value. Best Practices for Password-Based Authentication in MQTT Here are some key takeaways from what we've mentioned in this article, which can serve as best practices for password-based authentication in MQTT: One of the most important aspects of password-based authentication in MQTT is choosing strong and unique passwords. Passwords that are easily guessable or reused across multiple accounts can compromise the security of the entire MQTT network. It is also crucial to securely store and transmit passwords to prevent them from falling into the wrong hands. For instance, passwords should be hashed and salted before storage, and transmitted over secure channels like TLS. In addition, it's a good practice to limit password exposure by avoiding hard-coding passwords in code or configuration files, and instead using environment variables or other secure storage mechanisms.
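Here is a minimal sketch of the salt-and-hash scheme described above in Python. It uses SHA-256 with a random per-user salt to mirror the article's example; production systems would typically prefer a deliberately slow password-hashing function such as bcrypt, scrypt, or Argon2.

Python
# Minimal sketch of salted password hashing as described above.
# Plain SHA-256 is used only to mirror the article's example; prefer a
# slow password hash (bcrypt/scrypt/Argon2) in real systems.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[str, str]:
    salt = secrets.token_hex(8)  # random per-user salt
    digest = hashlib.sha256((password + salt).encode()).hexdigest()
    return salt, digest          # store both alongside the username

def verify_password(password: str, salt: str, stored_digest: str) -> bool:
    candidate = hashlib.sha256((password + salt).encode()).hexdigest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("passw0rd")
assert verify_password("passw0rd", salt, digest)
assert not verify_password("password", salt, digest)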
Summary In conclusion, password-based authentication plays a critical role in securing MQTT connections and protecting the integrity of IoT systems. By following best practices for password selection, storage, and transmission, and being aware of common issues like brute-force attacks, IoT developers can help ensure the security of their MQTT networks. However, it's important to note that password-based authentication is just one of many authentication methods available in MQTT, and may not always be the best fit for every use case. For instance, more advanced methods like digital certificates or OAuth 2.0 may provide stronger security in certain scenarios. Therefore, it's important for IoT developers to stay up-to-date with the latest authentication methods and choose the one that best meets the needs of their particular application. Next, I'll introduce another authentication method: SCRAM. Stay tuned for it!
There are many new L2s emerging in the web3 ecosystem with the goal of improving Ethereum's scalability. These L2s use a variety of solutions to create layers on top of Ethereum that are faster and cheaper, yet still benefit from the base Ethereum blockchain layer to secure transactions. Among this new set of layer two solutions, ZK-rollups (or zero-knowledge rollups) have risen to the top. In this article, we'll explore one of those solutions: Linea, launched by ConsenSys. Then, we'll walk through a tutorial on how to build a dApp on the Linea testnet. Finally, we'll create our own cryptocurrency on Linea using Solidity, MetaMask, and Truffle. Let's get started. What Is Linea? ZK-rollups are a layer 2 solution that greatly reduces the amount of data that needs to be stored and processed on a blockchain. ZK-rollups work by conducting computations off-chain (where it's cheaper and faster) and creating zero-knowledge proofs to validate these transactions, which are then recorded on-chain on the main Ethereum network. Some of the current projects using zk-proofs include Starknet (ZK-STARKs), Loopring (ZK-SNARKs), Immutable X, and zkSync. But among the ZK-rollups, the zkEVM is arguably the most exciting development in the world of blockchain. A zkEVM combines ZK-rollups with the EVM (Ethereum Virtual Machine). zkEVMs boast incredibly high throughput and super low transaction costs thanks to their rollup scaling solution. At the same time, they are EVM-compatible, which makes it possible for Ethereum developers to use ZK-rollups with the knowledge and tools they already have. Although zkEVMs are a nascent technology, ConsenSys recently launched Linea, a public testnet for its zkEVM chain. Linea is a developer-first ZK-rollup, focused not only on delivering a zkEVM, but on doing it in a way that supports developers with native integrations to existing tools. Create Your Own Currency on the Linea ZK-rollup Let's jump in and see how it all works by deploying a token contract on Linea. Along the way, we'll see how cheap it is to deploy our contract and how we can take advantage of Linea's EVM-equivalence with the knowledge and tools Ethereum developers are already familiar with. Step 1: Install MetaMask The first thing we're going to do is set up a MetaMask wallet and add the Linea test network to it. MetaMask is the world's most popular, secure, and easy-to-use self-custodial wallet. You can download the MetaMask extension for your browser here. After you install the extension, MetaMask will set up the wallet for you. In the process, you will be given a secret phrase. Keep it safe, and under no circumstances should you make it public. Once you've set up MetaMask, click on the Network tab in the top-right corner of your screen. You will see an option to show/hide test networks. MetaMask comes automatically configured with the Linea network. Once you turn the test networks on, you should be able to see the Linea Goerli test network in the dropdown. Step 2: Get Some goerliETH In order to deploy our smart contract and interact with it, we will require some free test ETH. The first step of this process is to acquire some goerliETH on the main Goerli test network. You can obtain this for free from the list of faucets available here. Once you fund your wallet, switch back to the Goerli test network on MetaMask. You should now see a non-zero balance. Step 3: Bridge goerliETH to Linea Now that we have funds on Goerli, let's bridge them over to Linea using the Hop protocol.
Visit the Hop exchange here and connect your MetaMask wallet (using the Connect Wallet button on the upper right). Once your wallet is connected, select the From network as Goerli and the To network as Linea. For this tutorial, around 0.2 ETH should be sufficient. Once you click Send, the bridging should take a few minutes. Once it is done, switch to the Linea network on MetaMask. You should see a non-zero balance. Step 4: Install npm and Node Like all Ethereum dApps, we will build our project using node and npm. In case you don't have these installed on your local machine, you can do so here. To ensure everything is working correctly, run the following command: Shell $ node -v If all goes well, you should see a version number for node. Step 5: Sign Up for an Infura Account In order to deploy our contract to the Linea network, we will require an Infura account. Infura gives us access to RPC endpoints that allow for fast, reliable, and easy access to the blockchain of our choice. Sign up for a free Infura account. Once you've created your account, navigate to the dashboard and select Create New Key. For the network, choose Web3 API and name it Linea. Once you click on Create, Infura will generate an API key for you, and give you RPC endpoints to Ethereum, Linea, other L2s, and non-EVM L1s (and their corresponding testnets) automatically. For this tutorial, we are only interested in the Linea RPC endpoint. This URL is of the form <>. Step 6: Create a Node Project and Install the Necessary Packages Let's set up an empty project repository by running the following commands: Shell $ mkdir sunshine-coin && cd sunshine-coin $ npm init -y We will be using Truffle, a world-class development environment and testing framework for EVM smart contracts, to build and deploy our cryptocurrency smart contract. Install Truffle by running: Shell $ npm install --save-dev truffle We can now create a barebones Truffle project by running the following command: Shell $ npx truffle init To check if everything works properly, run: Shell $ npx truffle test We now have Truffle successfully configured. Now, let's install the OpenZeppelin contracts package. This package will give us access to the ERC-20 base implementation (the standard for fungible tokens) as well as a few helpful additional functionalities. Shell $ npm install @openzeppelin/contracts To allow Truffle to use our MetaMask wallet, sign transactions, and pay for gas on our behalf, we will need another package called HDWalletProvider. Install it by using the following command: Shell $ npm install --save-dev @truffle/hdwallet-provider Finally, in order to keep our sensitive wallet information safe, we will use the dotenv package. Shell $ npm install dotenv Step 7: Create the “Sunshine” Coin Contract Open the project repository in your favorite code editor (e.g., VS Code). In the contracts folder, create a new file called SunshineCoin.sol. We're going to write an ERC-20 contract that inherits default features offered by OpenZeppelin and mints 1,000,000 coins to the deployer (or owner) of the contract. We'll call it “sunshine coin” just for fun! We're also going to implement functionality that allows a wallet to mint 100 coins for free, on a one-time basis. Add the following code to SunshineCoin.sol.
Solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.9; import "@openzeppelin/contracts/token/ERC20/ERC20.sol"; import "@openzeppelin/contracts/access/Ownable.sol"; contract SunshineCoin is Ownable, ERC20 { // Mapping to check if a wallet has claimed its free coins mapping(address => bool) public hasClaimed; constructor() ERC20("Sunshine Coin", "SC") { _mint(msg.sender, 1000000 * 10 ** ERC20.decimals()); } // Let owner mint tokens freely function mintTokens(uint _amount) public onlyOwner { _mint(msg.sender, _amount * 10 ** ERC20.decimals()); } // Let a wallet claim 100 tokens for free function claimTokens() public { require(hasClaimed[msg.sender] == false); _mint(msg.sender, 100 * 10 ** ERC20.decimals()); hasClaimed[msg.sender] = true; } } Make sure the contract is compiling correctly by running: Shell npx truffle compile Step 8: Update Truffle Config and Create a .env File Create a new file in the project's root directory called .env and add the following contents: Properties files MNEMONIC = "<Your-MetaMask-Secret-Recovery-Phrase>" Next, let's add information about our wallet, the Infura RPC endpoint, and the Linea network to our Truffle config file. Replace the contents of truffle-config.js with the following: JavaScript require('dotenv').config(); const HDWalletProvider = require('@truffle/hdwallet-provider'); const { MNEMONIC } = process.env; module.exports = { networks: { development: { host: "127.0.0.1", port: 8545, network_id: "*" }, linea: { provider: () => new HDWalletProvider(MNEMONIC, `https://rpc.goerli.linea.build/`), network_id: '59140', } } }; Step 9: Deploy the Contract Let us now write a script to deploy our contract to the Linea zkEVM blockchain. In the migrations folder, create a new file called 1_deploy_contract.js and add the following code: JavaScript // Get instance of the Sunshine Coin contract const lineaContract = artifacts.require("SunshineCoin"); module.exports = function (deployer) { // Deploy the contract deployer.deploy(lineaContract); }; We're all set! Deploy the contract by running the following command: Shell npx truffle migrate --network linea If all goes well, you should see an output (containing the contract address) that looks something like this: Shell Compiling your contracts... =========================== > Everything is up to date, there is nothing to compile. Starting migrations... ====================== > Network name: 'linea' > Network id: 59140 > Block gas limit: 30000000 (0x1c9c380) 1_deploy_contract.js ==================== Deploying 'SunshineCoin' ---------------------- > transaction hash: 0x865db376d1c8de21f4a882b9c0678e419708481eda4234a8f98c4f4975ee6373 > Blocks: 2 Seconds: 18 > contract address: 0x64ccE52898F5d61380D2Ec8C02F2EF16F28436de > block number: 414030 > block timestamp: 1680726601 > account: 0xc361Fc33b99F88612257ac8cC2d852A5CEe0E217 > balance: 0.185605297028804606 > gas used: 1704607 (0x1a029f) > gas price: 2.500000007 gwei > value sent: 0 ETH > total cost: 0.004261517511932249 ETH > Saving artifacts ------------------------------------- > Total cost: 0.004261517511932249 ETH Summary ======= > Total deployments: 1 > Final cost: 0.004261517511932249 ETH Notice how incredibly cheap the deployment was! Transaction fees are minimal. Also, notice that the steps we followed were almost identical to what we would've done if we were deploying on the Ethereum mainnet. Step 10: Add Your Token to MetaMask As a last step, let's add our token to MetaMask so we are able to send, receive, and view the balance.
Open MetaMask, and in the assets section at the bottom, click on Import Tokens. Here you will be asked to add the contract address and symbol of your token. Once you do that, you should see your token's correct balance in the Assets tab. Keep Building With Linea Congratulations! You've successfully deployed a smart contract to the Linea testnet. Because Linea is EVM-equivalent, we were able to leverage the tools already available for blockchain development. We didn't need to learn a whole new stack to take advantage of a ZK-rollup solution or to develop for Linea. More importantly, Linea as an L2 gave us much-improved speed and low gas fees over the main chain, a massive step toward making blockchains and dApps more accessible and usable by the masses. To learn more about Linea and to start building apps, refer to the Linea documentation. There are lots of use cases you can explore, including NFTs, DeFi, decentralized exchanges, and more. Have fun!
It's well-known that Ethereum needs support in order to scale. A variety of L2s (layer twos) have launched or are in development to improve Ethereum's scalability. Among the most popular L2s are zero-knowledge-based rollups (also known as zk-rollups). Zk-rollups offer a solution that has both high scalability and minimal costs. In this article, we'll define what zk-rollups are and review the latest in the market, the new ConsenSys zkEVM. This new zk-rollup, a fully EVM-equivalent L2 by ConsenSys, makes building with zero-knowledge proofs easier than ever. ConsenSys achieves this by allowing developers to port smart contracts easily, stay with the same toolset they already use, and bring users along with them smoothly—all while staying highly performant and cost-effective. If you don't know a lot about zk-rollups, you'll find how they work fascinating. They're at the cutting edge of computer science. And if you do already know about zk-rollups, and you're a Solidity developer, you'll be interested in how the new ConsenSys zkEVM makes your dApp development a whole lot easier. It's zk-rollup time! So let's jump in. The Power of Zero-Knowledge Proofs Zk-rollups depend on zero-knowledge proofs. But what is a zero-knowledge proof? A zero-knowledge proof allows you to prove a statement is true—without sharing what the actual statement is, or how the truth was discovered. At its most basic, a prover passes secret information to an algorithm to compute the zero-knowledge proof. Then a verifier uses this proof with another algorithm to check that the prover actually knows the secret information. All this happens without revealing the actual information. There are a lot of details behind the statement above. Check out this article if you want to understand the cryptographic magic behind how it all works. But for our purpose, what's important are the use cases of zero-knowledge proofs. A few examples: Anonymous payments—Traditional digital payments are not private, and even most crypto payments are on public blockchains. Zero-knowledge proofs offer a way to make truly private transactions. You can prove you paid for something … without revealing any details of the transaction. Identity protection—With zero-knowledge proofs, you can prove details of your personal identity while still keeping them private. For example, you can prove citizenship … without revealing your passport. And the most important use case for our purposes: Verifiable computation. What Is Verifiable Computation? Verifiable computation means you can have some other entity process computations for you and trust that the results are true … without knowing any of the details of the transaction. That means a layer 2 blockchain, such as the ConsenSys zkEVM, can become the outsourced computation layer for Ethereum. It can process a batch of transactions (much faster than Ethereum), create the proof for the validity of the transactions, and submit just the results and the proof to Ethereum. Ethereum, since it has the proof, doesn't need the details—nor does it need a way to prove that the results are true. So instead of processing every transaction, Ethereum offloads the work to a separate chain. All Ethereum has to do is apply the results to its state. This vastly improves the speed and scalability of Ethereum. Exploring the New ConsenSys zkEVM and Why It's Important Several zk-rollup L2s for Ethereum have already been released or are in progress. But the ConsenSys zkEVM could be the king.
Let's look at why: Type 2 ZK-EVM For one thing, it's a Type 2 ZK-EVM—an evolution of zk-rollups. It's faster and easier to use than Type 1 zk solutions. It offers better scalability and performance while still being fully EVM-equivalent. Traditionally with zk-proofs, it's computationally expensive and slow for the prover to create proofs, which limits the capabilities and usefulness of the rollup. However, the ConsenSys zkEVM uses a recursion-friendly, lattice-based zkSNARK prover—which means faster finality and seamless withdrawals, all while retaining the security of Ethereum settlements. And it delivers ultra-low gas fees. Solves the Problems of Traditional L2s Second, the ConsenSys zkEVM solves many of the practical problems of other L2s: Zero switching costs - It's super easy to port smart contracts to the zkEVM. The zkEVM is EVM-equivalent down to the bytecode. So no rewriting code or smart contracts. You already know what you need to know to get started, and your current smart contracts already work. Easy to move your dApp users to the L2 - The zkEVM is supported by MetaMask, the leading web3 wallet. So most of your users are probably already able to access the zkEVM. Easy for devs - The zkEVM supports most popular tools out of the box. You can build, test, debug, and deploy your smart contracts with Hardhat, Infura, Truffle, etc. All the tools you use now, you can keep using. And there is already a bridge to move tokens onto and off the network. It uses ETH for gas - There's no native token to the zkEVM, so you don't need to worry about new tokens, third-party transpilers, or custom middleware. It's all open source! How To Get Started Using the ConsenSys zkEVM The zkEVM private testnet was released in December 2022 and is moving to public testnet on March 28th, 2023. It has already processed 774,000 transactions (and growing). There are lots of dApps already: Uniswap, The Graph, Hop, and others. You can read the documentation for the zkEVM and deploy your own smart contract. Conclusion It's definitely time for zk-rollups to shine. They are evolving quickly and leading the way in helping Ethereum to scale. It's a great time to jump in and learn how they work—and building with the ConsenSys zkEVM is a great place to start! Have a really great day!
In a microservices architecture, it’s common to have multiple services that need access to sensitive information, such as API keys, passwords, or certificates. Storing this sensitive information in code or configuration files is not secure because it’s easy for attackers to gain access to this information if they can access your source code or configuration files. To protect sensitive information, microservices often use a secrets management system, such as Amazon Secrets Manager, to securely store and manage this information. Secrets management systems provide a secure and centralized way to store and manage secrets, and they typically provide features such as encryption, access control, and auditing. Amazon Secrets Manager is a fully managed service that makes it easy to store and retrieve secrets, such as database credentials, API keys, and other sensitive information. It provides a secure and scalable way to store secrets, and integrates with other AWS services to enable secure access to these secrets from your applications and services. Some benefits of using Amazon Secrets Manager in your microservices include: Centralized management: You can store all your secrets in a central location, which makes it easier to manage and rotate them. Fine-grained access control: You can control who has access to your secrets, and use AWS Identity and Access Management (IAM) policies to grant or revoke access as needed. Automatic rotation: You can configure Amazon Secrets Manager to automatically rotate your secrets on a schedule, which reduces the risk of compromised secrets. Integration with other AWS services: You can use Amazon Secrets Manager to securely access secrets from other AWS services, such as Amazon RDS or AWS Lambda. Overall, using a secrets management system, like Amazon Secrets Manager, can help improve the security of your microservices by reducing the risk of sensitive information being exposed or compromised. In this article, we will discuss how you can define a secret in Amazon Secrets Manager and later pull it using the Spring Boot microservice. Creating the Secret To create a new secret in Amazon Secrets Manager, you can follow these steps: Open the Amazon Secrets Manager console by navigating to the “AWS Management Console,” selecting “Secrets Manager” from the list of services, and then clicking “Create secret” on the main page. Choose the type of secret you want to create: You can choose between “Credentials for RDS database” or “Other type of secrets.” If you select “Other type of secrets,” you will need to enter a custom name for your secret. Enter the secret details: The information you need to enter will depend on the type of secret you are creating. For example, if you are creating a database credential, you will need to enter the username and password for the database. Configure the encryption settings: By default, Amazon Secrets Manager uses AWS KMS to encrypt your secrets. You can choose to use the default KMS key or select a custom key. Define the secret permissions: You can define who can access the secret by adding one or more AWS Identity and Access Management (IAM) policies. Review and create the secret: Once you have entered all the required information, review your settings and click “Create secret” to create the secret. Alternatively, you can also create secrets programmatically using AWS SDK or CLI. 
Here's an example of how you can create a new secret using the AWS CLI: Shell aws secretsmanager create-secret --name my-secret --secret-string '{"username": "myuser", "password": "mypassword"}' This command creates a new secret called “my-secret” with a JSON-formatted secret string containing a username and password. You can replace the secret string with any other JSON-formatted data you want to store as a secret. You can also create these secrets from your microservice: Add the AWS SDK for Java dependency to your project: You can do this by adding the following dependency to your pom.xml file: XML <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-secretsmanager</artifactId> <version>1.12.83</version> </dependency> Initialize the AWS Secrets Manager client: You can do this by adding the following code to your Spring Boot application's configuration class: Java @Configuration public class AwsConfig { @Value("${aws.region}") private String awsRegion; @Bean public AWSSecretsManager awsSecretsManager() { return AWSSecretsManagerClientBuilder.standard() .withRegion(awsRegion) .build(); } } This code creates a new bean for the AWS Secrets Manager client and injects the AWS region from the application.properties file. Create a new secret: You can do this by adding the following code to your Spring Boot service class: Java @Autowired private AWSSecretsManager awsSecretsManager; public void createSecret(String secretName, String secretValue) { CreateSecretRequest request = new CreateSecretRequest() .withName(secretName) .withSecretString(secretValue); CreateSecretResult result = awsSecretsManager.createSecret(request); String arn = result.getARN(); System.out.println("Created secret with ARN: " + arn); } This code creates a new secret with the specified name and value. It uses the CreateSecretRequest class to specify the name and value of the secret and then calls the createSecret method of the AWS Secrets Manager client to create the secret. The method returns a CreateSecretResult object, which contains the ARN (Amazon Resource Name) of the newly created secret. These are just some basic steps to create secrets in Amazon Secrets Manager. Depending on your use case and requirements, there may be additional configuration or setup needed.
Pulling the Secret Using Microservices Here are the complete steps for pulling a secret from the Amazon Secrets Manager using Spring Boot: First, you need to add the following dependencies to your Spring Boot project: XML <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-secretsmanager</artifactId> <version>1.12.37</version> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-core</artifactId> <version>1.12.37</version> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-aws</artifactId> <version>2.3.2.RELEASE</version> </dependency> Next, you need to configure the AWS credentials and region in your application.yml file: YAML aws: accessKey: <your-access-key> secretKey: <your-secret-key> region: <your-region> Create a configuration class for pulling the secret: Java import org.springframework.beans.factory.annotation.Autowired; import org.springframework.cloud.aws.secretsmanager.AwsSecretsManagerPropertySource; import org.springframework.context.annotation.Configuration; import com.amazonaws.services.secretsmanager.AWSSecretsManager; import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder; import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest; import com.amazonaws.services.secretsmanager.model.GetSecretValueResult; import com.fasterxml.jackson.databind.ObjectMapper; @Configuration public class SecretsManagerPullConfig { @Autowired private AwsSecretsManagerPropertySource awsSecretsManagerPropertySource; public <T> T getSecret(String secretName, Class<T> valueType) throws Exception { AWSSecretsManager client = AWSSecretsManagerClientBuilder.defaultClient(); String secretId = awsSecretsManagerPropertySource.getProperty(secretName); GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest() .withSecretId(secretId); GetSecretValueResult getSecretValueResult = client.getSecretValue(getSecretValueRequest); String secretString = getSecretValueResult.getSecretString(); ObjectMapper objectMapper = new ObjectMapper(); return objectMapper.readValue(secretString, valueType); } } In your Spring Boot service, you can inject the SecretsManagerPullConfig class and call the getSecret method to retrieve the secret: Java import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; @Service public class MyService { @Autowired private SecretsManagerPullConfig secretsManagerPullConfig; public void myMethod() throws Exception { MySecrets mySecrets = secretsManagerPullConfig.getSecret("mySecrets", MySecrets.class); System.out.println(mySecrets.getUsername()); System.out.println(mySecrets.getPassword()); } } In the above example, MySecrets is a Java class that represents the structure of the secret in the Amazon Secrets Manager. The getSecret method returns an instance of MySecrets that contains the values of the secret. Note: The above code assumes the Spring Boot application is running on an EC2 instance with an IAM role that has permission to read the secret from the Amazon Secrets Manager. If you are running the application locally or on a different environment, you will need to provide AWS credentials with the necessary permissions to read the secret. Conclusion Amazon Secrets Manager is a secure and convenient way to store and manage secrets such as API keys, database credentials, and other sensitive information in the cloud. 
Conclusion

Amazon Secrets Manager is a secure and convenient way to store and manage secrets such as API keys, database credentials, and other sensitive information in the cloud. By using it, you can avoid hardcoding secrets in your Spring Boot application and, instead, retrieve them securely at runtime. This reduces the risk of exposing sensitive data in your code and makes it easier to manage secrets across different environments. Integrating Amazon Secrets Manager with Spring Boot is a straightforward process thanks to the AWS SDK for Java. With just a few lines of code, you can create and retrieve secrets from Amazon Secrets Manager in your Spring Boot application, allowing you to build more secure and scalable applications that can be easily deployed to the cloud. Overall, Amazon Secrets Manager is a powerful tool that can help you manage your application secrets in a more secure and efficient way. By integrating it with Spring Boot, you can take advantage of its features and benefits without compromising the performance or functionality of your application.
One of the biggest concerns when using Kubernetes is whether we are complying with the security posture and taking into account all possible threats. For this reason, OWASP has created the OWASP Kubernetes Top 10, which helps identify the most likely risks. OWASP's Top 10 projects are useful awareness and guidance resources for security practitioners and engineers. They can also map to other security frameworks that help incident response engineers understand Kubernetes threats. For example, MITRE ATT&CK techniques are commonly used to register attackers' techniques and help blue teams understand the best ways to protect an environment. In addition, we can check the Kubernetes threat model to understand all the attack surfaces and main attack vectors.

The OWASP Kubernetes Top 10 puts all possible risks in an order of overall commonality or probability. In this research, we modify the order slightly: we group some risks within the same category, such as misconfigurations, monitoring, or vulnerabilities, and we recommend tools and techniques to audit your configuration and ensure your security posture is the most appropriate.

What Is OWASP Kubernetes?

The Open Web Application Security Project (OWASP) is a nonprofit foundation that works to improve the security of software. OWASP is focused on web application security (thus its name), but over time, it has broadened its scope because of the nature of modern systems design. As application development moves from monolithic architectures running traditionally on VMs hidden behind firewalls to modern-day microservice workloads running on cloud infrastructure, it's important to update the security requirements for each application environment. That's why the OWASP Foundation has created the OWASP Kubernetes Top 10 – a list of the ten most common attack vectors specifically for the Kubernetes environment. Each of these risks can be mapped to a component of a generalized Kubernetes threat model, spotlighting which part of the cluster is impacted, to aid in understanding. This analysis also dives into each OWASP risk, providing technical details on why the threat is prominent and common mitigations. It's also helpful to group the risks into three categories in order of likelihood:

Misconfigurations
- K01:2022 Insecure Workload Configurations
- K09:2022 Misconfigured Cluster Components
- K03:2022 Overly Permissive RBAC Configurations
- K07:2022 Missing Network Segmentation Controls

Lack of visibility
- K05:2022 Inadequate Logging and Monitoring
- K04:2022 Lack of Centralized Policy Enforcement
- K08:2022 Secrets Management Failures

Vulnerability management
- K02:2022 Supply Chain Vulnerabilities
- K06:2022 Broken Authentication Mechanisms
- K10:2022 Outdated and Vulnerable Kubernetes Components

Misconfigurations

Insecure Workload Configurations

Security is at the forefront of all cloud provider offerings. Cloud service providers such as AWS, GCP, and Azure implement an array of sandboxing features, virtual firewall features, and automatic updates to underlying services to ensure your business stays secure whenever and wherever possible. These measures also alleviate some of the traditional security burdens of on-premises environments. However, cloud environments apply what is known as a shared security model, which means part of the responsibility is on the cloud service consumer to implement these security guardrails in their respective environment.
Responsibilities also vary based on the cloud consumption model and type of offering. Tenant administrators ultimately have to ensure workloads use safe images, run on a patched and updated operating system (OS), and ensure infrastructure configurations are audited and remediated continuously. Misconfigurations in cloud-native workloads are one of the most common ways for adversaries to gain access to your environment.

Operating System

The nice thing about containerized workloads is that the images you choose often come preloaded with the dependencies necessary for your application, built on a base image for a particular OS. These images pre-package general system libraries and other third-party components that are not strictly required for the workload. In some cases, such as within a microservices architecture (MSA), a given container image may be too bloated to facilitate a performant container that operates the microservice. We recommend running minimal, streamlined images in your containerized workloads, such as Alpine Linux images, which are much smaller in file size. These lightweight images are ideal in most cases: since fewer components are packaged into them, there are fewer possibilities for compromise. If you need additional packages or libraries, consider starting with the base Alpine image and gradually adding packages and libraries where needed to maintain the expected behavior and performance.

Audit Workloads

The CIS Benchmark for Kubernetes can be used as a starting point for discovering misconfigurations. The open-source project kube-bench, for instance, can check your cluster against the CIS Kubernetes Benchmark using YAML files to set up the tests.

Example CIS Benchmark control – minimize the admission of root containers (5.2.6): Linux container workloads can run as any Linux user. However, containers that run as the root user increase the possibility of container escape (privilege escalation and then lateral movement on the Linux host). The CIS benchmark recommends that all containers run as a defined non-UID 0 user. One example of a Kubernetes auditing tool that can help minimize the admission of root containers is kube-admission-webhook, a Kubernetes admission controller webhook that allows you to validate and mutate incoming Kubernetes API requests. You can use it to enforce security policies, such as prohibiting the creation of root containers in your cluster.

How to Prevent Workload Misconfigurations With OPA

Tools such as Open Policy Agent (OPA) can be used as a policy engine to detect these common misconfigurations. The OPA admission controller gives you a high-level declarative language to author and enforce policies across your stack. Let's say you want to build an admission controller for the previously mentioned Alpine image, but one of the users of Kubernetes wants to set the securityContext to privileged=true:

YAML
apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: default
spec:
  containers:
  - image: alpine:3.2
    command:
    - /bin/sh
    - "-c"
    - "sleep 60m"
    imagePullPolicy: IfNotPresent
    name: alpine
    securityContext:
      privileged: true
  restartPolicy: Always

This is an example of a privileged pod in Kubernetes. Running a pod in privileged mode means that the pod can access the host's resources and kernel capabilities.
To prevent privileged pods, the .rego file for the OPA Gatekeeper admission controller should look something like this:

Rego
package kubernetes.admission

deny[msg] {
    c := input_containers[_]
    c.securityContext.privileged
    msg := sprintf("Privileged container is not allowed: %v, securityContext: %v", [c.name, c.securityContext])
}

In this case, the output should look something like the below:

Error from server (Privileged container is not allowed: alpine, securityContext: {"privileged": true}): error when creating "STDIN": admission webhook "validating-webhook.openpolicyagent.org"

Misconfigured Cluster Components

Misconfigurations in core Kubernetes components are much more common than expected. To prevent this, continuous and automatic auditing of IaC and Kubernetes (YAML) manifests, instead of checking them manually, will reduce configuration errors. One of the riskiest misconfigurations is the anonymous authentication setting in the kubelet, which allows non-authenticated requests to the kubelet. It's strongly recommended to check your kubelet configuration and ensure the flag described below is set to false.

When auditing workloads, it's important to keep in mind that there are different ways to deploy an application. With the configuration file of the various cluster components, you can authorize specific read/write permissions on those components. In the case of the kubelet, by default, all requests to the kubelet's HTTPS endpoint that are not rejected by other configured authentication methods are treated as anonymous requests and given a username of system:anonymous and a group of system:unauthenticated. To disable this anonymous access for unauthenticated requests, simply start the kubelet with the flag --anonymous-auth=false. When auditing cluster components like the kubelet, we can see that the kubelet authorizes API requests using the same request-attributes approach as the API server. As a result, we can define permissions such as:

- POST
- GET
- PUT
- PATCH
- DELETE

However, there are many other cluster components to focus on, not just the kubelet. For instance, kubectl plugins run with the same privileges as the kubectl command itself, so if a plugin is compromised, it could potentially be used to escalate privileges and gain access to sensitive resources in your cluster. Based on the CIS Benchmark report for Kubernetes, we would recommend enabling the following settings for all cluster components.
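For the kubelet specifically, here is a minimal sketch of a configuration file that disables anonymous access – the config-file equivalent of --anonymous-auth=false, with webhook authentication and authorization left on:

YAML
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false   # reject unauthenticated (system:anonymous) requests
  webhook:
    enabled: true    # delegate authentication to the API server
authorization:
  mode: Webhook      # authorize kubelet API requests via SubjectAccessReviews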
Etcd

The etcd database offers a highly available key/value store that Kubernetes uses to centrally house all cluster data. It is important to keep etcd safe, as it stores config data as well as Kubernetes Secrets. We strongly recommend regularly backing up etcd data to avoid data loss. Thankfully, etcd supports a built-in snapshot feature. The snapshot can be taken from an active cluster member with the etcdctl snapshot save command, and taking the snapshot has no performance impact. Below is an example of taking a snapshot of the keyspace served by $ENDPOINT to the file snapshotdb:

Shell
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb

Kube-apiserver

The Kubernetes API server validates and configures data for the API objects, which include pods, services, ReplicationControllers, and others. The API server services REST operations and provides the front end to the cluster's shared state through which all other components interact. Its criticality to cluster operation, and its value as an attack target, can't be understated. From a security standpoint, all connections to the API server, communication made inside the control plane, and communication between the control plane and kubelet components should only be reachable over TLS connections. By default, TLS is unconfigured for the kube-apiserver. If this is flagged in the kube-bench results, simply enable TLS with the flags --tls-cert-file=[file] and --tls-private-key-file=[file] on the kube-apiserver. Since Kubernetes clusters tend to scale up and down regularly, we recommend using the TLS bootstrapping feature of Kubernetes, which allows automatic certificate signing and TLS configuration inside a Kubernetes cluster rather than following the manual workflow above. It is also important to regularly rotate these certificates, especially for long-lived Kubernetes clusters; fortunately, there is automation to help rotate these certificates in Kubernetes v1.8 or higher. API server requests should also be authenticated, which we cover later in the section Broken Authentication Mechanisms.

CoreDNS

CoreDNS is a DNS server technology that can serve as the Kubernetes cluster DNS and is hosted by the CNCF. CoreDNS superseded kube-dns as of Kubernetes v1.11. Name resolution within a cluster is critical for locating the orchestrated and ephemeral workloads and services inherent in Kubernetes. CoreDNS addressed a number of security vulnerabilities found in kube-dns, specifically in dnsmasq (the DNS resolver), which was responsible for caching responses from SkyDNS, the component performing the eventual DNS resolution. Aside from addressing security vulnerabilities in kube-dns's dnsmasq component, CoreDNS also addressed performance issues in SkyDNS. Kube-dns additionally required a sidecar proxy to monitor health and handle metrics reporting for the DNS service. CoreDNS resolves many of these security and performance issues by providing all the functions of kube-dns within a single container. However, it can still be compromised, so it's important to again use kube-bench for compliance checks on CoreDNS.

Overly-Permissive RBAC Configurations

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. An RBAC misconfiguration could allow an attacker to elevate privileges and gain full control of the entire cluster. Creating RBAC rules is rather straightforward. For instance, to create a policy that allows read-only actions (i.e., get, watch, list) for pods in the Kubernetes cluster's 'default' namespace, while preventing create, update, or delete actions against those pods, the policy would look something like this:

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Issues arise when managing these RBAC rules in the long run. Admins will likely need to manage ClusterRole resources to avoid building individual roles in each namespace, as seen above. ClusterRoles allow us to build cluster-scoped rules to grant access to those workloads. RoleBindings can then be used to bind the above-mentioned roles to users.
Similar to other Identity and Access Management (IAM) practices, we need to ensure each user has the correct access to resources within Kubernetes without granting excessive permissions to individual resources. The below manifest shows how we recommend binding a role to a Service Account or user in Kubernetes:

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: nigeldouglas
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

By scanning for RBAC misconfigurations, we can proactively bolster the security posture of our cluster and simultaneously streamline the process of granting permissions. One of the major reasons cloud-native teams grant excessive permissions is the complexity of managing individual RBAC policies in production. In other words, there may be too many users and roles within a cluster to manage by manually reviewing manifest code. That's why there are tools specifically built to handle the management, auditing, and compliance checks of your RBAC.

Audit RBAC

RBAC Audit is a tool created by the team at CyberArk. It is designed to scan the Kubernetes cluster for risky roles within RBAC and requires Python 3. The tool can be run via a single command:

Shell
ExtensiveRoleCheck.py --clusterRole clusterroles.json --role Roles.json --rolebindings rolebindings.json --cluseterolebindings clusterrolebindings.json

KubiScan

KubiScan is another tool built by the team at CyberArk. Unlike RBAC Audit, this tool is designed to scan Kubernetes clusters for risky permissions in the Kubernetes RBAC authorization model – not the RBAC roles. Again, Python 3.6 or higher is required for this tool to work. To see all the examples, run python3 KubiScan.py -e or, within the container, run kubiscan -e.

Krane

Krane is a static analysis tool for Kubernetes RBAC. Similar to KubiScan, it identifies potential security risks in Kubernetes RBAC design and makes suggestions on how to mitigate them. The major difference between these tools is that Krane provides a dashboard of the cluster's current RBAC security posture and lets you navigate through its definition. If you'd like to run an RBAC report against a running cluster, you must provide a kubectl context, as seen below:

Shell
krane report -k <kubectl-context>

If you'd like to view your RBAC design as a tree, with a network topology graph and the latest report findings, you need to start the dashboard server via the following command:

Shell
krane dashboard -c nigel-eks-cluster

The -c flag points to a cluster name in your environment. If you would like a dashboard of all clusters, simply drop the -c reference from the above command.

Missing Network Segmentation Controls

Kubernetes, by default, defines what is known as a "flat network" design. This allows workloads to freely communicate with each other without any prior configuration – and without any restrictions. If an attacker were able to exploit a running workload, they would essentially have access to perform data exfiltration against all other pods in the cluster. Cluster operators focused on zero trust architecture in their organization will want to take a closer look at Kubernetes Network Policy to ensure services are properly restricted.
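As a starting point, here is a minimal sketch of a namespace-wide default-deny policy using the native NetworkPolicy API (the namespace is illustrative, and the policy is only enforced if your CNI supports network policies):

YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}    # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress         # with no allow rules, all ingress and egress is denied

From this baseline, traffic can then be selectively re-enabled per workload with more specific allow policies.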
Kubernetes offers solutions to address the right configuration of network segmentation controls. Here, we show you two of them.

Service Mesh With Istio

Istio provides a service mesh solution. This allows security and network teams to manage traffic flow across microservices, enforce policies, and aggregate telemetry data in order to enforce microsegmentation on the network traffic going in and out of our microservices. At the time of writing, the service relies on implementing a set of sidecar proxies for each microservice in your cluster, although the Istio project is looking to move to a sidecar-less approach. The sidecar technology is called Envoy. We rely on Envoy to handle ingress/egress traffic between services in the cluster and from a service to external services in the service mesh architecture. The clear advantage of using proxies is that they provide a secure microservice mesh, offering functions like traffic mirroring, discovery, rich layer-7 traffic routing, circuit breakers, policy enforcement, telemetry recording/reporting functions, and – most importantly – automatic mTLS for all communication with automatic certificate rotation!

YAML
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: default
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["prod"]
    to:
    - operation:
        methods: ["POST"]

The above Istio AuthorizationPolicy sets the action to "DENY" on all requests from the "prod" production namespace to the "POST" method on all workloads in the "default" namespace. This policy is incredibly useful: unlike Calico network policies, which can only drop traffic based on IP address and port at L3/L4 (the network layer), the authorization policy denies traffic based on HTTP/S verbs such as POST/GET at L7 (the application layer). This is important when implementing a Web Application Firewall (WAF).

CNI

It's worth noting that although there are huge advantages to a service mesh, such as encryption of traffic between workloads via mutual TLS (mTLS) as well as HTTP/S traffic controls, there are also complexities to managing one. The sidecars beside each workload add overhead in your cluster, as well as unwanted troubleshooting work when those sidecars experience issues in production. Many organizations opt to only implement the Container Network Interface (CNI) by default. The CNI, as the name suggests, is the networking interface for the cluster. CNIs like Project Calico and Cilium come with their own policy enforcement. Whereas Istio enforces traffic controls on L7 traffic, the CNI tends to be focused more on network-layer traffic (L3/L4). The following CiliumNetworkPolicy, as an example, limits all endpoints with the label app=frontend to only be able to emit packets using TCP on port 80 to any layer 3 destination:

YAML
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l4-rule"
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP

We mentioned using the Istio AuthorizationPolicy to provide WAF-like capabilities at the L7/application layer. However, a Distributed Denial-of-Service (DDoS) attack can still happen at the network layer if an adversary floods pods or endpoints with excessive TCP/UDP traffic, which is where CNI-level policy helps. Similarly, it can be used to prevent compromised workloads from speaking to known malicious C2 servers based on fixed IPs and ports.
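For instance, here is a sketch of a Calico GlobalNetworkPolicy that blocks egress to a known-bad address; the IP is taken from a documentation range and is purely illustrative:

YAML
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-known-c2
spec:
  selector: all()        # apply to every workload endpoint
  types:
    - Egress
  egress:
    - action: Deny       # drop traffic to the listed destination first
      destination:
        nets:
          - 198.51.100.7/32
    - action: Allow      # then allow all remaining egress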
Lack of Visibility

Inadequate Logging and Monitoring

Kubernetes provides an audit logging feature by default. Audit logging shows a variety of security-related events in chronological order. These activities can be generated by users, by applications that use the Kubernetes API, or by the control plane itself. However, there are other log sources to focus on beyond Kubernetes audit logs. They include host-specific operating system logs, network activity logs (such as DNS, which you can monitor via the Kubernetes add-on CoreDNS), and logs from the cloud providers that serve as the foundation for your Kubernetes clusters. Without a centralized tool for storing all of these scattered log sources, we would have a hard time using them in the case of a breach. That's where tools like Prometheus, Grafana, and Falco are useful.

Prometheus

Prometheus is an open-source, community-driven project for monitoring modern cloud-native applications and Kubernetes. It is a graduated member of the CNCF and has an active developer and user community.

Grafana

Like Prometheus, Grafana is open-source tooling with a large community backing. Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. Users can create, explore, and share dashboards with their teams.

Falco (Runtime Detection)

Falco, the cloud-native runtime security project, is the de facto standard for Kubernetes threat detection. Falco detects threats at runtime by observing the behavior of your applications and containers, and extends threat detection across cloud environments with Falco Plugins. It was the first runtime security project to join the CNCF as an incubation-level project. Falco acts as a security camera, detecting unexpected behavior, intrusions, and data theft in real time in all Kubernetes environments. Falco v0.13 added Kubernetes audit events to the list of supported event sources, in addition to the existing support for system call events. An improved implementation of audit events was introduced in Kubernetes v1.11, providing a log of requests and responses to the kube-apiserver. Because almost all cluster management tasks are performed through the API server, the audit log can effectively track the changes made to your cluster. Examples of this include:

- Creating and destroying pods, services, deployments, daemonsets, etc.
- Creating, updating, and removing ConfigMaps or Secrets
- Subscribing to the changes introduced to any endpoint

Lack of Centralized Policy Enforcement

Enforcing security policies becomes a difficult task when we need to enforce rules across multi-cluster and multi-cloud environments. By default, security teams would need to manage risk across each of these heterogeneous environments separately. Since there is no default way to detect, remediate, and prevent misconfigurations from a centralized location, clusters could potentially be left open to compromise.

Admission Controller

An admission controller intercepts requests to the Kubernetes API server prior to persistence. The request must first be authenticated and authorized, after which a decision is made on whether to allow the request to be performed.
For example, you can create the following admission controller configuration:

YAML
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: <path-to-kubeconfig-file>
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: true

The ImagePolicyWebhook configuration references a kubeconfig-formatted file that sets up the connection to the backend; the point of this admission controller is to ensure the backend communicates over TLS. The allowTTL: 50 sets the amount of time in seconds to cache the approval, and similarly, denyTTL: 50 sets the amount of time in seconds to cache the denial. Admission controllers can be used to limit requests to create, delete, or modify objects, or to connect to proxies. Unfortunately, the AdmissionConfiguration resource still needs to be managed individually on each cluster: if we forget to apply this file on one of our clusters, it will lose this policy condition. Thankfully, projects like Open Policy Agent's (OPA) Kube-Mgmt tool help manage the policies and data of OPA instances within Kubernetes, instead of managing admission controllers individually. The kube-mgmt tool automatically discovers policies and JSON data stored in ConfigMaps in Kubernetes and loads them into OPA. Policies can easily be disabled using the flag --enable-policy=false, or you can similarly disable data via a single flag: --enable-data=false. Admission control is an important element of the container security strategy: it enforces policies that need Kubernetes context and creates a last line of defense for your cluster. We touch on image scanning later in this research, but know that image scanning can also be enforced via a Kubernetes admission controller.

Runtime Detection

We need to standardize the deployment of security policy configurations across all clusters if they mirror the same configuration. Radically different cluster configurations might require uniquely designed security policies. In either instance, how do we know which security policies are deployed in each cluster environment? That's where Falco comes into play. Let's assume the cluster is not using kube-mgmt and there's no centralized way of managing these admission controllers. A user accidentally creates a ConfigMap with private credentials exposed within the ConfigMap manifest, and no admission controller was configured in the newly created cluster to prevent this behavior. With a single rule, Falco can alert administrators when this very behavior occurs:

YAML
- rule: Create/Modify Configmap With Private Credentials
  desc: >
    Detect creating/modifying a configmap containing a private credential
  condition: kevt and configmap and kmodify and contains_private_credentials
  output: >-
    K8s configmap with private credential (user=%ka.user.name verb=%ka.verb
    configmap=%ka.req.configmap.name namespace=%ka.target.namespace)
  priority: warning
  source: k8s_audit
  append: false
  exceptions:
  - name: configmaps
    fields:
    - ka.target.namespace
    - ka.req.configmap.name

In the above Falco rule, we source the Kubernetes audit logs to find private credentials that might be exposed in ConfigMaps in any namespace.
The private credentials are defined as any of the below conditions:

YAML
condition: (ka.req.configmap.obj contains "aws_access_key_id" or
            ka.req.configmap.obj contains "aws-access-key-id" or
            ka.req.configmap.obj contains "aws_s3_access_key_id" or
            ka.req.configmap.obj contains "aws-s3-access-key-id" or
            ka.req.configmap.obj contains "password" or
            ka.req.configmap.obj contains "passphrase")

Secrets Management Failures

In Kubernetes, a Secret is an object designed to hold sensitive data, like passwords or tokens. To avoid putting this type of sensitive data in your application code, we can simply reference the Kubernetes Secret within the pod specification. This enables engineers to avoid hardcoding credentials and sensitive data directly in the pod manifest or container image. Regardless of this design, Kubernetes Secrets can still be compromised. The native Kubernetes Secrets mechanism is essentially an abstraction – the data still gets stored in the aforementioned etcd database, and it's turtles all the way down. As such, it's important for businesses to assess how credentials and keys are stored and accessed within Kubernetes Secrets as part of a broader secrets management strategy. Kubernetes provides other security controls, which include data-at-rest encryption, access control, and logging.

Encrypt Secrets at Rest

One major weakness of the etcd database used by Kubernetes is that it contains all data accessible via the Kubernetes API and, therefore, can allow an attacker extended visibility into secrets. That's why it's incredibly important to encrypt secrets at rest. As of v1.7, Kubernetes supports encryption at rest. This option encrypts Secret resources in etcd, preventing parties that gain access to your etcd backups from viewing the content of those secrets. While this feature is currently in beta and not enabled by default, it offers an additional level of defense when backups are not encrypted or an attacker gains read access to etcd. Here's an example of creating the EncryptionConfiguration resource:

YAML
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
    - secrets
    providers:
    - aescbc:
        keys:
        - name: key1
          secret: <BASE 64 ENCODED SECRET>
    - identity: {}

Address Security Misconfigurations

Aside from ensuring secrets are encrypted at rest, we need to prevent secrets from getting into the wrong hands. We discussed how vulnerability management, image scanning, and network policy enforcement are used to protect applications from compromise. However, to prevent secrets (sensitive credentials) from being leaked, we should lock down RBAC wherever possible. Keep all Service Accounts and user access to least privilege. There should be no scenario where users are "credential sharing" – essentially using a shared Service Account like "admin" or "default." Each user should have a clearly defined Service Account name such as 'Nigel,' 'William,' or 'Douglas.' In that scenario, if a Service Account is doing something it shouldn't be, we can easily audit the account activity, and we can audit the RBAC configuration of third-party plugins and software installed in the cluster to ensure access to Kubernetes Secrets is not granted unnecessarily to a user like 'Nigel' who does not require elevated administrative privileges. In the following scenario, we will create a role that grants read access to secrets only in the 'test' namespace.
In this case, the user assigned to this role will have no access to secrets outside of this specific namespace. Because the access is scoped to a single namespace, a namespaced Role (rather than a cluster-scoped ClusterRole, which does not take a namespace) is the right fit:

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: test
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

Ensure Logging and Auditing Is in Place

Application logs help developers and security teams better understand what is happening inside an application. The primary use case for developers is to assist with debugging problems that affect application performance. In many cases, logs are shipped to a monitoring solution, like Grafana or Prometheus, to improve the time to respond to cluster events such as availability or performance issues. Most modern applications, including container engines, have some kind of logging mechanism supported by default. The easiest and most widely adopted logging method for containerized applications is writing to the standard output (stdout) and standard error streams. In the below example for Falco, a line is printed for each alert:

YAML
stdout_output:
  enabled: true

To identify potential security issues arising from events, Kubernetes admins can simply stream event data like cloud audit logs or general host syscalls to the Falco threat detection engine. By streaming the standard output (stdout) from the Falco security engine to Fluentd or Logstash, additional teams such as platform engineering or security operations can easily capture event data from cloud and container environments. Organizations can store more useful security signals, as opposed to just raw event data, in Elasticsearch or other SIEM solutions. Dashboards can also be created to visualize security events and alert incident response teams:

10:20:22.408091526: File created below /dev by untrusted program (user=nigel.douglas command=%proc.cmdline file=%fd.name)

Vulnerability Management

Supply Chain Vulnerabilities

After the four risks arising from misconfigurations, we will now detail those related to vulnerabilities. Supply chain attacks are on the rise, as seen with the SolarWinds breach, in which the SolarWinds software solution 'Orion' was compromised by the Russian threat group APT29 (commonly known as Cozy Bear). This was a long-running zero-day attack, meaning SolarWinds customers who had Orion running in their environments were unaware of the compromise, and APT29 adversaries would potentially have access to non-air-gapped Orion instances via this exploit. SolarWinds is just one example of a compromised solution within the enterprise security stack. In the case of Kubernetes, a single containerized workload alone can rely on hundreds of third-party components and dependencies, making trust of origin at each phase extremely difficult. These challenges include, but are not limited to, image integrity, image composition, and known software vulnerabilities. Let's dig deeper into each of these.

Images

A container image represents binary data that encapsulates an application and all of its software dependencies. Container images are executable software bundles that can run standalone (once instantiated into a running container) and that make very well-defined assumptions about their runtime environment. The Sysdig Threat Research Team performed an analysis of over 250,000 Linux images in order to understand what kind of malicious payloads are hiding in the container images on Docker Hub.
The Sysdig TRT collected malicious images and classified them into several categories. The analysis focused on two main ones: malicious IP addresses or domains, and secrets. Both represent threats to people downloading and deploying images that are available in public registries, such as Docker Hub, exposing their environments to high risk. Additional guidance on image scanning can be found in the research on 12 image scanning best practices. This advice is useful whether you're just starting to run containers and Kubernetes in production or you want to embed more security into your current DevOps workflows.

Dependencies

When you have a large number of resources in your cluster, you can easily lose track of all the relationships between them. Even "small" clusters can have far more services than anticipated by virtue of containerization and orchestration. Keeping track of all services, resources, and dependencies is even more challenging when you're managing distributed teams over multi-cluster or multi-cloud environments. Kubernetes doesn't provide a mechanism by default to visualize the dependencies between your Deployments, Services, Persistent Volume Claims (PVCs), etc. KubeView is a great open-source tool to view and audit intra-cluster dependencies. It maps out the API objects and how they are interconnected, with data fetched in real time from the Kubernetes API. The status of some objects (Pods, ReplicaSets, Deployments) is color-coded red/green to represent their status and health.

Registry

The registry is a stateless, scalable server-side application that stores and lets you distribute container images. Kubernetes resources that use images (such as pods, deployments, etc.) rely on imagePullSecrets to hold the credentials necessary to authenticate to the various image registries. As with many of the problems discussed in this section, there's no inherent way to scan images for vulnerabilities in standard Kubernetes deployments: even on a private, dedicated image registry, you should scan images for vulnerabilities, but Kubernetes doesn't provide a default, integrated way to do this out of the box. You should scan your images in the CI/CD pipelines used to build them as part of a shift-left security approach; see the research Shift-Left: Developer-Driven Security for more details. Sysdig has authored detailed technical guidance, with examples of how to do this for common CI/CD services, providing another layer of security to prevent vulnerabilities in your pipelines:

- GitHub Actions
- GitLab pipelines
- Azure Pipelines
- Jenkins

Another layer of security we can add is a process of signing and verifying the images we send to our registries or repositories. This reduces supply chain attacks by ensuring authenticity and integrity. It protects our Kubernetes development and deployments and provides better control of the inventory of containers we are running at any given time.

Broken Authentication Mechanisms

How to securely access your Kubernetes cluster should be a priority, and proper authentication in Kubernetes is key to avoiding most threats in the initial attack phase. Kubernetes administrators may interact with a cluster directly through the Kubernetes APIs or via the Kubernetes dashboard. Technically speaking, the dashboard, in turn, communicates with those APIs, such as the API server or kubelet APIs. Enforcing authentication universally is a critical security best practice.
As seen with the Tesla crypto mining incident in 2018, attackers infiltrated a Kubernetes dashboard that was not protected by a password. Since Kubernetes is highly configurable, many components end up not being enabled, or use basic authentication, so that they can work in a number of different environments. This presents challenges when it comes to cluster and cloud security posture. If it's a person who wants to authenticate against our cluster, a main area of concern is credentials management. The most likely case is that credentials will be exposed through an accidental error, such as leaking in one of the configuration files like .kubeconfig. Inside your Kubernetes cluster, the authentication between services and machines is based on Service Accounts. It's important to avoid using certificates for end-user authentication, or Service Account tokens from outside of the cluster, because doing so increases the risk. Therefore, it is recommended to continuously scan for secrets or certificates that may be exposed by mistake.

OWASP recommends that, no matter what authentication mechanism is chosen, we should force humans to provide a second method of authentication. If you use a cloud IAM capability and 2FA is not enabled, for instance, we should be able to detect it at runtime in your cloud or Kubernetes environment to speed up detection and response. For this purpose, we can use Falco, an open-source threat detection engine that triggers alerts at runtime according to a set of YAML-formatted rules:

YAML
- rule: Console Login Without Multi Factor Authentication
  desc: Detects a console login without using MFA.
  condition: >-
    aws.eventName="ConsoleLogin" and not aws.errorCode exists and
    jevt.value[/userIdentity/type]!="AssumedRole" and
    jevt.value[/responseElements/ConsoleLogin]="Success" and
    jevt.value[/additionalEventData/MFAUsed]="No"
  output: >-
    Detected a console login without MFA (requesting user=%aws.user,
    requesting IP=%aws.sourceIP, AWS region=%aws.region)
  priority: critical
  source: aws_cloudtrail
  append: false
  exceptions: []

Falco helps us identify where insecure logins exist – in this case, a login to the AWS console without MFA. If an adversary were able to access the cloud console without additional authorization required, they would likely be able to access Amazon's Elastic Kubernetes Service (EKS) via CloudShell. That's why it's important to have MFA for cluster access, as well as for the managed services powering the cluster – GKE, EKS, AKS, IKS, etc. But it is not only important to protect access to Kubernetes: if we use other tools on top of Kubernetes to, for example, monitor events, we must protect those as well. As we explained at KubeCon 2022, an attacker could exploit an exposed Prometheus instance and compromise your Kubernetes cluster.

Outdated and Vulnerable Kubernetes Components

Effective vulnerability management in Kubernetes is difficult, but there is a set of best practices to follow. Kubernetes admins must follow up-to-date CVE databases, monitor vulnerability disclosures, and apply relevant patches where applicable. If not, Kubernetes clusters may be exposed to known vulnerabilities that make it easier for an attacker to take full control of your infrastructure and potentially pivot to the cloud tenant where you've deployed your clusters. The large number of open-source components in Kubernetes, as well as the project's release cadence, makes CVE management particularly difficult.
In version 1.25 of Kubernetes, a new security feed was released in alpha that groups and updates the list of CVEs affecting Kubernetes components. Here is a list of the most famous ones:

- CVE-2021-25735 – Kubernetes validating admission webhook bypass
- CVE-2020-8554 – Unpatched man-in-the-middle (MITM) attack in Kubernetes
- CVE-2019-11246 – High-severity vulnerability affecting the kubectl tool; if exploited, it could lead to a directory traversal
- CVE-2018-18264 – Privilege escalation through the Kubernetes dashboard

To detect these vulnerable components, you should use tools that check or scan your Kubernetes cluster, such as kubescape or kubeclarity – or look to a commercial platform. Today, newly released vulnerabilities more often target the Linux kernel directly, affecting the containers running on our clusters rather than the Kubernetes components themselves. Even so, we must keep an eye on each new vulnerability discovered and have a plan to mitigate the risk as soon as possible.

Conclusion

The OWASP Kubernetes Top 10 is aimed at helping security practitioners, system administrators, and software developers prioritize risks around the Kubernetes ecosystem. The Top 10 is a prioritized list of common risks backed by data collected from organizations varying in maturity and complexity. We covered a large number of open-source projects that can help address the gaps outlined in the OWASP Kubernetes Top 10. However, deploying and operating these disparate tools requires significant manpower and an extensive skill set to manage effectively.
Developers and DevOps teams have embraced the use of containers for application development and deployment. They offer a lightweight and scalable solution to package software applications. The popularity of containerization is due to its apparent benefits, but it has also created a new attack surface for cybercriminals, which must be protected against. Industry statistics demonstrate the wide adoption of this technology. For example, a 2020 study from Forrester mentioned, "container security spending is set to reach $1.3 billion by 2024." In another report, Gartner stated, "by 2025, over 85% of organizations worldwide will be running containerized applications in production, a significant increase from less than 35% in 2019."

On the flip side, various statistics indicate that the popularity of containers has also made them a target for cybercriminals who have been successful in exploiting them. In a 2019 report, Aqua Security published that 94% of US organizations use containers for production applications, up from 68% in 2018. The same survey reported that 65% of organizations had experienced at least one container-related security incident, a steep increase from 60% in the previous year. A more recent study conducted by StackRox in 2021 found that 94% of surveyed organizations had experienced a security incident in their container environment in the past 12 months. Finally, in a survey by Red Hat, 60% of respondents cited security as the top concern when adopting containerization. These data points emphasize the significance of container security, making it a critical and pressing topic for organizations that are currently using or planning to adopt containerized applications. To comprehend the security implications of a containerized environment, it is crucial to understand the fundamental elements of a container deployment network.

Container Deployment Network

A standard container deployment using Kubernetes is built from a handful of core components. Before designing a strong security framework for this system, it is crucial to understand these basic components and how they interact with each other.

Load balancers are the entry point for ingress traffic. They help distribute incoming traffic to nodes that reside within a cluster. In general, their purpose is to maintain a balanced flow of requests inside the container environment.

The Kubernetes cluster consists of a master node that manages the cluster, multiple worker nodes that run containerized applications, and a Kubernetes control plane used to manage all nodes. The cluster's primary job is managing, scaling, and orchestrating the containerized environment.

Nodes are physical or virtual machines that use a container runtime to manage containers. The nodes work in close coordination with the control plane using the kubelet (an agent used to schedule and manage pods via the control plane), kube-proxy (a network proxy used to route traffic to the right pod), and cAdvisor (a container monitoring tool used to send performance metrics from containers to the control plane).

Pods contain one or more containers. They are the smallest deployment units in Kubernetes and run on worker nodes.

A container is an executable software package that provides everything needed to run an application or a service: code, libraries, system tools, and settings. Containers are built from container images, which are read-only templates used to run applications.
Containers are isolated from other containers and from the host operating system. At a high level, traffic flows through a container environment as follows: the load balancer receives ingress traffic and distributes it across the nodes; based on the service or requested application, the cluster directs the traffic to the appropriate container; once the container processes the request and generates a response, it is sent back to the requesting entity through the same route. For egress traffic, the container sends information to the cluster, which directs it to the load balancer; the balancer then transmits the request to the required entity.

Containers do provide some built-in security controls. Each containerized environment is isolated, and traffic does not travel within the host network. This prevents lateral movement of data, which helps improve overall security. These environments can be further segregated using network segmentation to control traffic flow within the container environment. However, this architecture may also introduce many security risks if adequate measures are not taken during its implementation. We should comprehend, use, and comply with the technical and design security requirements below to ensure the security of containers.

Host Security

The host is one of the most crucial components from a security perspective. Although containers are kept isolated from one another, they are built on top of a host operating system (OS). Hence, the host OS needs to be free from vulnerabilities, which reduces the likelihood of unauthorized access. Here are some measures to ensure a secure connection between the container and the host:

- Periodically scan the host OS for security vulnerabilities and patch the system regularly.
- Disable unused or unnecessary services, protocols, or functionality, and replace insecure protocols like Telnet with widely adopted alternatives like SSH.
- Review access to the host OS annually (or more frequently, depending on the risk level of the applications running on it) and limit it to authorized personnel only.
- Enable MFA (multi-factor authentication) and RBAC (role-based access control).
- Use container isolation technologies like namespaces and cgroups to ensure that containers are isolated from each other and the host.
- Install host-based firewalls and virtual private networks (VPNs) for container network security.
- Log container activity using monitoring tools like auditd, Sysdig, Falco, and Prometheus. They will help you track anomalous user behavior and detect and address known threats.
- Create backups for data recovery in case of failures, and perform business impact analysis (BIA) testing at regular intervals to measure the backups' effectiveness.

Image Hardening

Containers are built using software, configuration settings, and libraries. These are collectively referred to as container images and are stored in the form of read-only templates. Since these images are the source of truth, it is important to harden them, i.e., keep them free from malware and other vulnerabilities. Below are some ways to do it:

- Remove packages that are unused or unnecessary.
- Use only secure images (that come from a trusted source) to build a new container, and configure them using secure defaults.
- Implement access controls for container images; limit user access to containers and use secure credential storage for container authentication.
- The trusted repository used within the organization should only allow the storage of hardened images. One method to implement this measure is to ensure that any images being uploaded to the secure repository have been signed and verified beforehand. Tools like Docker Content Trust or Docker Notary can be used for this, as sketched below.
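For instance, here is a minimal sketch using Docker Content Trust; the registry and image name are illustrative, and signing keys are assumed to be already set up:

Shell
# Enable Docker Content Trust for this shell session:
# pushes are signed, and pulls only succeed for signed tags
export DOCKER_CONTENT_TRUST=1
docker push registry.example.com/myapp:1.0
docker pull registry.example.com/myapp:1.0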
- Use and implement secure container image management, distribution, caching, tagging, and layering.
- Use tools like Clair or Trivy to perform vulnerability scanning in the container environment.

Container Security Configuration

Another important component is the configuration of the container where the application is running. Here are some settings that can be configured to reduce exposure:

- Run the containers with the least privileged access to all system resources, including memory, CPU, and network (see the manifest sketch after this list).
- Use tools like SELinux and AppArmor for container runtime security. These can prevent unauthorized access and protect container resources.
- Manage secure deployment of containers using orchestration tools like Kubernetes and Docker Swarm.
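As a minimal sketch of the least-privilege settings above (the pod name, image, and resource limits are illustrative):

YAML
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-app              # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0  # illustrative image
    securityContext:
      runAsNonRoot: true                 # refuse to start as UID 0
      runAsUser: 10001
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                    # drop all Linux capabilities
    resources:
      limits:
        cpu: "500m"                      # cap CPU and memory usage
        memory: "256Mi"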
Network Security

The network is a critical component for all systems. Therefore, it is important to restrict network access and ensure that data at rest and in transit is always encrypted. A few specific network security requirements for containers are:

- Limit the attack surface by implementing network segmentation for container clusters.
- To limit access to a containerized application, employ a container firewall and a HIDS on the container host, while also setting resource limits for the container.
- Periodically scan the containers for vulnerabilities and conduct security testing.
- Monitor container network traffic, enable secure container logging, and generate alerts if any suspicious activity is detected.
- Use tools like Calico and Weave Net for securing network environments.

Container and Network Security Policy and Protocols

Policies are guidelines and rules for securing containerized applications and their associated networks. A policy may include protocols for deploying, monitoring, and managing containers to ensure that they operate securely and do not pose a threat to the host system or network.

- Store container images in a secure registry.
- Implement container backups and encrypt sensitive data to protect against data loss.
- Use only trusted certificates for container communication, and enable secure boot for containers and secure DNS resolution.
- Implement secure container network drivers, entry points, networking policies, network plugins, bridges, overlay networks, DNS configuration, and network load balancers.
- Implement secure container network virtual switches, routing policies, firewalls, routing protocols, security groups, access control lists, load balancing algorithms, service recovery, and service meshes.
- Use container configuration management and orchestration tools to enforce these policies.

Application and Platform Security

Container applications use several application programming interfaces (APIs) to connect to and gather information from various systems. There are some basic container application security requirements that should be tested and validated in a timely fashion:

- Third-party libraries used in code should be secure and free from vulnerabilities.
- Developers should be trained to follow secure coding practices and secure container development practices.
- Use container orchestration tools alongside secure application deployment and management processes.
- Implement container host and image hardening, scanning, signing, verification, and management.

Compliance With Security and Regulatory Standards

Containers host multiple applications; hence, they must comply with regulatory requirements and security standards like PCI DSS and HIPAA. Some common requirements for container security to meet various compliance standards:

- Conduct periodic security risk assessments of the applications and the container environment.
- Set up incident response and change management procedures for container security.
- Implement backup and restore procedures along with disaster recovery plans.
- Establish secure container lifecycle management policies and procedures, and conduct regular audits to test their effectiveness.
- Mandate user training to create awareness about secure container practices.
- Utilize resources such as Open Policy Agent and Kyverno to verify compliance with relevant regulations and recommended security protocols.

It is crucial for organizations to implement security measures to mitigate the potential risks posed by security breaches in containerization. This involves ensuring that both applications and container environments are thoroughly checked for vulnerabilities. In addition, adopting technical measures such as restricting access, implementing access controls, conducting regular risk assessments, and continuously monitoring container environments has proved very effective in minimizing potential security threats. This article outlines a proactive and strategic container security approach aimed at aligning all stakeholders, including developers, operations, and security teams. By implementing these requirements, organizations can ensure that their container security is well-coordinated and effectively managed.
The benefits of adopting cloud-native practices have been talked about by industry professionals ad nauseam, with everyone extolling the ability to lower costs, easily scale, and fuel innovation like never before. Easier said than done. Companies want to go cloud-native, but they're still trying to solve the metaphorical "security puzzle" that's at the heart of Kubernetes management. They start with people and processes, shift corporate culture to match cloud-native security best practices, and then sort of hit a wall. They understand they need a way to embed security into the standard developer workflows and cluster deployments, but creating continuous and secure GitOps is – in a word – hard.

What Makes Kubernetes Security So Difficult?

For starters, Kubernetes is usually managed by developers. They can spin up clusters quickly, but they may not be responsible for managing or securing them. Enter the site reliability engineers (SREs), who feel the need to slow down developer velocity to better manage clusters and make sure they know what's going into production. Though not ideal, slowing things down is often seen as a reasonable tradeoff for eliminating unmanaged clusters and the kinds of risks that make IT security professionals shudder.

Another challenge of Kubernetes security is that its default configurations aren't very secure. Kubernetes is built for speed, performance, and scalability. Though each piece of the Kubernetes puzzle has security capabilities built in, they're often not turned on by default. That usually means developers forgo security features to move faster. As if that weren't challenging enough, Kubernetes has complex privilege-management features that can easily become everyone's greatest pain point. Kubernetes comes with its own role-based access controls (RBAC), but they add yet another layer of complexity, and any changes made to one piece of the Kubernetes puzzle need to be reflected in the others.

Four Tenets of Kubernetes Security

Although all these challenges roll up to something pretty daunting, all hope is not lost. Enter the four tenets of Kubernetes security: easy-to-digest best practices you can implement – today – that will make your Kubernetes and cloud-native infrastructure more secure and help you manage your cyber exposure.

Tenet #1: Manage Kubernetes Misconfigurations With Solid Policies

Applying policies and associated industry benchmarks consistently throughout the development lifecycle is an essential first step in Kubernetes security. For example, you might have a policy that says you're not allowed to run any containers with root privileges in your clusters, or that your Kubernetes API server cannot be accessible to the public. If your policies are violated, you have misconfigurations that will lead to unacceptable security risks. Leveraging policy frameworks, like OPA, and industry benchmarks, like CIS, can help harden Kubernetes environments and prevent misconfigurations from going into production. Set your policies first, and don't just park them on a shelf: revisit them on a regular schedule to ensure they continue to suit your needs.
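As a sketch of what such a policy can look like in practice – assuming OPA Gatekeeper is installed along with the K8sPSPPrivilegedContainer constraint template from the Gatekeeper policy library – a constraint blocking privileged containers might be:

YAML
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer    # template from the Gatekeeper library
metadata:
  name: deny-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]             # evaluate every pod admitted to the cluster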
Tenet #2: Implement Security Guardrails

Kubernetes security starts in the development process. Most teams developing for cloud-native platforms are using Infrastructure as Code (IaC) to provision and configure their systems. They should expand that to policy as code, meaning you and your team can apply the same security policies across the software development lifecycle. For example, developers can scan their code for security problems on their workstations, in CI/CD pipelines, in container image registries, and in the Kubernetes environment itself. Leveraging developer-friendly tools can help seamlessly integrate security into the development process. Open-source IaC static code scanners, like Terrascan, can help ensure only secure code enters your environment.

Tenet #3: Understand and Remediate Container Image Vulnerabilities

Remediating container image vulnerabilities can be a challenge because it can be hard to see what's actually running in your environment. Kubernetes dashboards don't tell you much; you need to know what your images contain and how they're built. Many developers use common base images and call it good. Unfortunately, unscanned base images can leave you vulnerable, and the risks are compounded at the registry and host levels. Developers usually skip this sort of image scanning because it slows them down. The fact is, if they're not in the habit of being on the lookout for outdated OS images, misconfigured settings, permission issues, embedded credentials and secrets, deprecated or unverified packages, unnecessary services, and exposed ports, they're just handing off the problem to someone else. Yes, their work may go more quickly, but their practices are slowing the entire software delivery process. Some common security risks:

- In images: outdated packages, embedded secrets, and the use of untrusted images
- In registries: stale images, lax authentication, and lack of testing
- In hosts: lax network policies, weak access controls, unencrypted data volumes and storage, and insecure host infrastructure

Tenet #4: Exposure Management

Once you've implemented the first three tenets, it's time to start looking at your infrastructure holistically. Not all policies apply in all cases, so how are you applying your exclusions? Not every vulnerability is critical, so how do you prioritize fixes and automate remediation? These questions can guide you as you work toward becoming less reactive and more proactive. At the end of the day, visibility is central to managing security in Kubernetes and cloud-native environments. You need to be able to recognize when your configurations have drifted from your secure baselines and readily identify failing policies and misconfigurations. Only then can you get the full picture of your attack surface.

Where Do You Go From Here? The Journey to Comprehensive Cloud Security

You might be feeling a little overwhelmed by the sheer number of attack vectors in cloud environments. But with a solid game plan and the best practices described above taken to heart, cloud-native security isn't just about the destination (your runtimes) but the journey: the entire development lifecycle. Shifting left to catch vulnerabilities using policy as code is a great first step toward ensuring the reliability and security of your cloud assets – and wrangling all the pieces of the Kubernetes puzzle.
Apostolos Giannakidis
Product Security,
Microsoft
Samir Behara
Senior Cloud Infrastructure Architect,
AWS
Boris Zaikin
Senior Software Cloud Architect,
Nordcloud GmbH
Anca Sailer
Distinguished Engineer,
IBM