What's so bad about sidecars, anyway?

Published by Hrittik Roy on May 12, 2024

Sidecars are a design pattern in which an auxiliary container is deployed alongside the main container, extending the capabilities of an individual deployment to handle a specific task or function.

In this article, you'll explore the benefits and limitations of using sidecars and the specific use cases where they are most appropriate. You’ll learn how to determine whether a sidecar is a suitable choice for a particular scenario as well as how to implement sidecars to maximize their benefits.

What are sidecars?

Sidecars extend the functionality of a primary container by running as add-on containers alongside it. In orchestration systems like Kubernetes, a pod is the smallest deployable unit and can contain multiple containers. If you decide to use sidecars, the most common practice is to have a single primary container and one or more sidecar containers in a pod.

The primary container is responsible for the main functionality of the pod, while sidecar containers provide auxiliary or complementary functions. The primary benefit of using sidecars is that they allow developers to add additional functionality to an existing service without having to modify the code of that service, making it possible for each to be updated and managed separately.

This can be helpful in situations where modifying the primary service is not possible or would be complex or time-consuming, as with service discovery, retries, circuit breakers, logging, tracing, and security. With this architecture, consistency is maintained across the fleet, and there's no need to modify or redeploy the primary service when these concerns change.
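To make the pattern concrete, here's a minimal sketch of a Kubernetes pod spec in which a primary web container and a logging sidecar share a volume. The image names and paths are placeholders, not a specific product's configuration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    # Primary container: owns the core business logic.
    - name: web
      image: example.com/web-app:1.0       # placeholder image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app          # the app writes its logs here
    # Sidecar container: reads the shared log directory and ships it elsewhere.
    - name: log-shipper
      image: example.com/log-shipper:1.0   # placeholder image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    # Shared scratch volume that both containers can see.
    - name: app-logs
      emptyDir: {}
```

The primary container knows nothing about the sidecar; it simply writes logs to a directory, and the sidecar handles everything downstream.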

How can sidecars help?

Understanding the benefits of sidecars is helpful if you're interested in adopting them. Sidecars are particularly useful in the ways listed below.

Isolation

One of the main benefits of using sidecars is the isolation they provide between the primary service and the additional functionality. This is useful when you want to keep the primary service focused on its core functionality and avoid cluttering it with unrelated code, which keeps your codebase easier to maintain.

Sidecar containers also play a vital role in enhancing security by isolating the application from the network. By using a battle-tested and secure sidecar maintained by security professionals, the attack surface can be significantly reduced. Additionally, updating the sidecar can sometimes mitigate certain attack vectors by fixing vulnerabilities, which is often a simpler solution compared to fixing the entire application.

Quick deployment

Another important benefit of sidecars is that they can be deployed and updated quickly, without any changes or redeployment of the primary service. This capability is especially useful when you need to rapidly add or update functionality.

However, it's important to note that modifying a pod's specification, even just its sidecar container, will trigger a redeployment of all the containers in that pod.

Using sidecars allows you to modularly add functionality to (or remove functionality from) the pod without the need to make changes to the core application container. This can be particularly useful for scenarios where there’s a need to add or remove supporting processes or services, such as monitoring or logging, without disrupting the main application as you're scaling your primary services.

For example, log collection across all your containers can be handled by a Fluentd sidecar with only a small amount of configuration (see the log collection example later in this article).

Scalability

Scalability, or the ability to handle changing workload demands without performance issues, is essential for modern businesses. Without sidecars, a central service would be required to manage critical functions such as logging or service discovery, and it would need to be scaled separately to handle increased loads. However, this central service could create a single point of failure in the system.

In contrast, using sidecar containers provides a more flexible solution. When requirements change, you add replicas, and each new pod carries its own sidecar to support the additional load, so there's no separate fleet of sidecar replicas to scale. This approach helps eliminate the single point of failure and enables smooth scalability, as the Deployment sketch below illustrates.
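A rough sketch of that behavior: because the sidecar is declared in the Deployment's pod template, every replica gets its own copy, and scaling the Deployment scales the sidecars with it (image names are placeholders).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # scaling replicas scales the sidecars too
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web-app:1.0   # placeholder primary container
        - name: proxy-sidecar
          image: example.com/proxy:1.0     # placeholder sidecar
```

Each of the three pods runs its own proxy-sidecar instance, so there is no central proxy to scale or to fail.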

Where do sidecars fall behind?

Despite these advantages, using a sidecar is not always the best option. Common disadvantages include management complexity, resource usage, and update compatibility.

Management complexity

While using sidecars, it’s important to carefully architect their use and their underlying task to ensure that they are effective and efficient in meeting the desired goals. This is especially important if you have multiple sidecars handling different tasks.

You may need to manage and monitor each of these sidecars separately, so make sure there's a clear requirement before adopting them.

Resource consumption

Increased usage of sidecars can quickly add up to high CPU, memory, or network bandwidth usage. This can be particularly problematic if you have a large number of sidecars or if they are handling high volumes of data or requests.

There may also be cases where the cluster can't automatically scale down because a sidecar mounts a local volume, keeping resources occupied and increasing your infrastructure costs.

Additionally, sometimes having the logic inside the primary container or at the node level is more resource-efficient than in sidecars. This is especially true for workloads with low resource requirements or when the overhead of managing multiple containers outweighs the benefits of modularity.

For example, consider a scenario where a simple monitoring process needs to be added to a containerized application. Rather than adding a separate sidecar container for the monitoring logic, it may be more efficient to simply include the necessary code and dependencies in the primary container. This can reduce the overall resource usage and complexity of the system while still providing the desired monitoring functionality.

Update compatibility

Another potential issue with sidecars is that they may not always be compatible with updates to the primary service. If the primary service is updated in a way that is not compatible with the sidecar, it could cause issues with the overall operation of the application.

However, this can be avoided by testing new deployments for compatibility in an environment similar to production, keeping your services operational without interruption.

When should you not use sidecars?

As mentioned above, sidecars can be quite resource-hungry. Each sidecar container requires its own copy of routing information and executables, which can take up a significant amount of memory if the routing information is large and there are many endpoints to connect to. Additionally, having an instance of the network proxy in every single pod can also be resource-intensive. This makes sidecars a poor fit for several situations.

Service meshes, which use sidecar containers to facilitate communication between services, have become an important part of modern infrastructure. However, some teams are now adopting a sidecar-less approach to service meshes due to the resource intensiveness and added complexity of sidecar containers. A sidecar-less service mesh architecture can still provide benefits such as improved observability and control over service communication, but without the added overhead of sidecars.

For applications with a small number of services, sidecar design patterns may not be justified. In these cases, alternative approaches that do not rely on sidecars—like a centralized solution—may be more suitable. Similarly, monolithic architectures with a single, large codebase aren't ideal for a sidecar design pattern.

When are sidecars the perfect fit?

Sidecars are a great choice in situations where adding additional functionality to a primary service would otherwise be complex or time-consuming. Several use cases for implementing a sidecar pattern are relevant here, including authentication, authorization, logging, and metrics collection.

Authentication

Sidecars offer a way to extend the functionality of a primary service by adding supporting services to the same pod. In the context of authentication, an OAuth proxy sidecar can handle the authentication process for the main service, allowing the main service to focus on its core responsibilities.

To ensure security and efficiency, the sidecar uses a service account to gain access to the resources of the main service. Additionally, the OAuth proxy sidecar can be configured to validate access through the use of bearer tokens or Kubernetes client certificates. In this way, the sidecar helps add robust authentication functionality to the primary service.
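As a sketch, the widely used oauth2-proxy project can fill this role. The snippet below assumes the main service listens on port 8080 inside the pod and that the OAuth client credentials are supplied via a Secret; names, versions, and provider settings are illustrative.

```yaml
# Containers section of a pod spec (sketch); provider and credential details are assumptions.
containers:
  - name: app
    image: example.com/internal-app:1.0        # placeholder main service
    ports:
      - containerPort: 8080
  - name: oauth-proxy
    image: quay.io/oauth2-proxy/oauth2-proxy:v7.6.0   # example version
    args:
      - --provider=oidc
      - --upstream=http://127.0.0.1:8080       # forward authenticated traffic to the app
      - --http-address=0.0.0.0:4180            # port exposed to callers
      - "--email-domain=*"
    envFrom:
      - secretRef:
          name: oauth-proxy-credentials        # holds the client ID, client secret, and cookie secret
    ports:
      - containerPort: 4180
```

Traffic is routed to port 4180, where the proxy authenticates the request before passing it to the application on localhost.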

Log collection

Fluentd is an open source data collector that can be used in conjunction with the sidecar approach to facilitate the collection and forwarding of log data. In a microservices architecture, each container generates its own log data, which can be difficult to manage and analyze if it is not properly collected and centralized.

By using a Fluentd sidecar, it is possible to stream log data from each service's pods in real time and collect it in a central location for easy analysis. This is particularly useful when multiple containers run in a single pod, each generating its own log data.

Once the sidecar reads the logs from the shared volume, it forwards them to a centralized logging system, such as Elasticsearch or Splunk, for analysis and storage.
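Here's a minimal sketch of that setup, assuming the application writes plain-text logs to a shared emptyDir volume and that a Fluentd configuration (a tail source plus an Elasticsearch output) is mounted from a ConfigMap; image tags and names are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-fluentd
spec:
  containers:
    - name: app
      image: example.com/app:1.0        # placeholder; writes logs to /var/log/app
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: fluentd
      image: fluent/fluentd:v1.16-1     # example tag of the official Fluentd image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
        - name: fluentd-config
          mountPath: /fluentd/etc       # Fluentd reads fluent.conf from here
  volumes:
    - name: logs
      emptyDir: {}
    - name: fluentd-config
      configMap:
        name: fluentd-config            # contains a fluent.conf with a tail source
                                        # and an Elasticsearch output (not shown)
```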

Metrics

Another important use case for sidecars is monitoring metrics with Prometheus and using the Thanos sidecar for long-term storage and querying of metrics data.

Thanos is an open source project that helps improve the storage capacity and scalability of Prometheus, a monitoring system for collecting and storing metrics. It does this through the use of a sidecar component, which collects and stores metrics in an object storage system, and a Store API that allows for efficient querying.

Overall, Thanos utilizes the sidecar component to provide distributed, highly available, long-term storage for your metrics.
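Here's a rough sketch of the container layout, assuming Prometheus keeps its TSDB on a volume shared with the Thanos sidecar and that the object storage bucket configuration is mounted separately; versions and paths are illustrative.

```yaml
# Containers section of a Prometheus pod or StatefulSet (sketch).
containers:
  - name: prometheus
    image: prom/prometheus:v2.51.0            # example version
    args:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
    volumeMounts:
      - name: prometheus-data
        mountPath: /prometheus
  - name: thanos-sidecar
    image: quay.io/thanos/thanos:v0.34.0      # example version
    args:
      - sidecar
      - --tsdb.path=/prometheus               # same TSDB directory Prometheus writes to
      - --prometheus.url=http://127.0.0.1:9090
      - --objstore.config-file=/etc/thanos/objstore.yml   # bucket config mounted elsewhere
    volumeMounts:
      - name: prometheus-data
        mountPath: /prometheus
```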

Authorization

Finally, Cerbos is an open source authorization tool that can be self-hosted and added to applications to handle authorization requests through a sidecar design alongside your main application.

Designed to be efficient and lightweight with a low-latency gRPC API, Cerbos is stateless and focuses on speed and performance, making it suitable for handling large volumes of requests without performance bottlenecks.
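As a minimal sketch, Cerbos can run as a sidecar next to the application, which then calls it over localhost. The example below assumes policies and server configuration are mounted from ConfigMaps; the CERBOS_ADDRESS environment variable is a hypothetical convention for the application, and while the ports reflect Cerbos defaults, treat the details as illustrative.

```yaml
# Containers section of a pod spec (sketch).
containers:
  - name: app
    image: example.com/app:1.0                 # placeholder main application
    env:
      - name: CERBOS_ADDRESS                   # hypothetical env var the app reads
        value: "127.0.0.1:3593"                # Cerbos gRPC endpoint on localhost
  - name: cerbos
    image: ghcr.io/cerbos/cerbos:latest
    args: ["server", "--config=/config/config.yaml"]
    ports:
      - containerPort: 3592                    # HTTP API
      - containerPort: 3593                    # gRPC API
    volumeMounts:
      - name: cerbos-config
        mountPath: /config                     # server configuration
      - name: cerbos-policies
        mountPath: /policies                   # policy files (e.g., from a ConfigMap)
```

Because the authorization checks happen over localhost, the application avoids an extra network hop to a central policy service.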

Making the most out of your sidecars

Implementing sidecars is generally simple for these kinds of use cases, but here are a few best practices to follow in production or at scale to get the most out of them.

Don't use sidecars without a clear reason

First, sidecars are best used to extend the functionality of the primary service. Because they can add complexity and overhead to an application, using sidecars for their own sake without a clear benefit can detract from the overall simplicity and efficiency of the system. By carefully considering whether a sidecar is necessary and whether it is the most appropriate solution for a given problem, you can ensure that your application is well-designed and efficient.

For example, in some cases, it may be more efficient to add a small amount of functionality directly to the primary service rather than using a sidecar to extend its functionality.

However, suppose you need to add logging or metrics to a service. In that case, a sidecar could be a good solution, since using one means you won't have to add logic throughout the application. A sidecar that collects logs via the shared volume and sends them to a centralized data store is a simple and efficient solution that is widely used.

Keep them as small as possible

Next, sidecars should be as small as possible to minimize resource usage and maximize efficiency. This means minimizing the number of libraries and dependencies, as well as reducing the overall size of the container image.

This reiterates the previous point: if you only need a small amount of functionality, it may be better to add it directly to the primary service than to use a sidecar, avoiding unnecessary resource usage and inefficiencies that affect your primary service. When used effectively, however, small, focused sidecars can provide powerful and flexible solutions that enhance the capabilities and resilience of your primary service without sacrificing performance or efficiency.

Set up proper resource limits

Finally, monitoring sidecars is important to ensure that they are running efficiently and are not impacting the performance of the primary service. Some key metrics to monitor include CPU and memory usage, as well as the number of connections or requests being handled by the sidecar. By setting appropriate limits on these metrics, you can help ensure that the sidecar is able to function effectively without impacting the performance of the primary service.

It’s also important to regularly review and adjust these limits as needed to ensure that they are still appropriate for the workload being handled by the sidecar. This can help prevent issues such as resource contention or capacity constraints that could negatively impact the primary service's performance.
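As a sketch, setting those limits on a sidecar container might look like the following; the values are placeholders to be tuned against the usage you observe.

```yaml
# Resource requests and limits on a sidecar container (values are illustrative).
- name: log-shipper
  image: example.com/log-shipper:1.0
  resources:
    requests:
      cpu: 50m          # guaranteed baseline for the sidecar
      memory: 64Mi
    limits:
      cpu: 200m         # hard ceiling so the sidecar can't starve the primary container
      memory: 128Mi
```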

Thankfully, monitoring can be achieved fairly easily. As you’ve seen, when sidecars are properly implemented and monitored, they can become an indispensable part of modern service architectures with benefits like security, authentication, logging, and more.

Final thoughts

In this article, you learned about sidecars and their many benefits, including the ability to decouple primary services from auxiliary functions, improve modularity, and enhance security.

While sidecars are useful in certain situations, they have downsides like high resource usage at scale and slower performance due to the network hops between containers. Modern alternatives like the extended Berkeley Packet Filter (eBPF) can run sandboxed programs inside the kernel, offering improved performance and reduced resource usage for instrumentation and other use cases, along with better visibility and control.

If you are looking for a solution to add authorization capabilities to your applications, consider Cerbos, an open source, self-hosted authorization layer that separates your authorization logic from your core application code.
