Boosting Application Resilience and Performance with the Sidecar Pattern
How to use the sidecar pattern to compose a more resilient and better-performing system.
What is the Sidecar Pattern in Kubernetes?
The sidecar pattern enables developers to compose a system from reusable plug-and-play components: additional containers that enhance the core functionality provided by the containerised application without affecting it. For example, it lets you add caching for performance or circuit breakers for resilience.
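As a minimal sketch of what that looks like, the following pod spec runs the application next to a caching sidecar. The names, images and ports are placeholders for illustration, not a prescription:

```yaml
# Minimal sketch of the sidecar pattern: one pod, two containers.
# Image names and ports are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app                      # the core application
      image: example.com/my-app:1.0
      ports:
        - containerPort: 8080
    - name: cache-sidecar            # plug-and-play helper running next to it
      image: hazelcast/hazelcast:5.3
      ports:
        - containerPort: 5701        # Hazelcast's default member port
```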
Use cases
To avoid being too theoretical, let me give you some examples:
- Push logs to a repository for analysis and better observability (Filebeat ships logs to the ELK stack; see the sketch after this list)
- Provide a reverse proxy that you send all your app's requests through. This allows you to plug in features like network topology analysis or circuit breakers (Envoy to enable a service mesh like Istio)
- Add caching functionality (Hazelcast for distributed caching), which adds some overhead but is runtime agnostic.
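To make the logging use case concrete, here is a sketch of a log-shipping sidecar: the app writes its log files into a shared emptyDir volume, and a Filebeat container reads them from the same path. The images, paths and the referenced ConfigMap are assumptions for illustration.

```yaml
# Sketch of a log-shipping sidecar: app and Filebeat share a volume.
# Images, paths and the ConfigMap name are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  containers:
    - name: app
      image: example.com/my-app:1.0
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app        # the app writes its log files here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:8.13.4
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true                 # the sidecar only needs to read the logs
        - name: filebeat-config
          mountPath: /usr/share/filebeat/filebeat.yml
          subPath: filebeat.yml
  volumes:
    - name: app-logs
      emptyDir: {}                       # shared scratch space that lives as long as the pod
    - name: filebeat-config
      configMap:
        name: filebeat-config            # hypothetical ConfigMap with the Filebeat inputs/outputs
```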
“Make it more visual!”
Okay, okay. In my opinion, the Envoy proxy is a great example to illustrate this. FAANG and all the other big names use it to build their service meshes. Below you see a pod that has two containers: one that runs the app and one that runs the proxy. Any incoming (1, 2) or outgoing (3, 4) request goes through the proxy container and gets forwarded to its destination. The Envoy container can intercept the request at any time.
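With a service mesh like Istio you usually do not add the Envoy container to the manifest yourself; an injector does it for you. As a sketch, and assuming Istio's automatic sidecar injection, labelling the namespace is enough to get the proxy added to every new pod (the namespace name is a placeholder):

```yaml
# Sketch: opting a namespace into Istio's automatic Envoy sidecar injection.
# The namespace name is a placeholder; the app manifests stay unchanged.
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    istio-injection: enabled   # tells Istio to inject the Envoy proxy into new pods
```

After redeploying, listing the pod's containers (for example with `kubectl get pod <pod> -o jsonpath='{.spec.containers[*].name}'`) should show the app container alongside the injected `istio-proxy`.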

This way one can not only observe all the traffic flow but also control it. When it comes to observability, the data that is collected can be insightful. For example, you not only have a UML diagram that states how systems should be connected, but also a real-time view of how systems are actually communicating, based on the outgoing and incoming requests.

When we talk about controlling the traffic, there are numerous mechanisms that help you make your service more resilient and ensure its performance, for example the circuit breaker. A circuit breaker lets you detect a malfunctioning component and stop sending traffic to it. You can find a good in-depth explanation on the NGINX website.
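To give a sketch of what such a circuit breaker can look like when Envoy runs as a sidecar via Istio, the DestinationRule below ejects backends that keep failing. The host name and every threshold are assumptions, not recommendations:

```yaml
# Sketch of a circuit breaker for an Istio/Envoy sidecar setup.
# Host name and all thresholds are illustrative assumptions.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders-circuit-breaker
spec:
  host: orders.shop.svc.cluster.local    # hypothetical upstream service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100              # cap concurrent connections to the service
      http:
        http1MaxPendingRequests: 10      # queue limit before new requests are rejected
    outlierDetection:
      consecutive5xxErrors: 5            # trip after five consecutive server errors
      interval: 10s                      # how often hosts are evaluated
      baseEjectionTime: 30s              # how long a tripped host is taken out of the pool
      maxEjectionPercent: 50             # never eject more than half of the hosts
```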
Advantages of Using Sidecar Containers in Kubernetes
By now, you should have an idea of how the sidecar pattern supports your containerised application. Let us list some benefits of that approach:
- Stick to the single responsibility principle. Your application fulfils its function, while the attached sidecar containers fulfil theirs.
- Provide modularity through minimal interfaces. Usually, the sidecars and the core container do not know about each other. The interface is kept minimal and logic stays in its respective scope.
- Cover cross-cutting concerns without changing the workload itself. The operations team can drive adoption of new functionality by changing the deployment configuration, without touching the app itself.
- Functionality can be plug-and-play. Just drop in your sidecar, add one or two lines of config, and you might be good to go. It is not always that simple, but it often is.
Namespaces – and why you should know about them
You might wonder what resources and runtime permissions the sidecar container has. Well, this can be configured, but in general the containers in a pod can share many resources. The concept working behind the scenes here is called namespace sharing.
For example, you can use the Pod workload API to set shareProcessNamespace to true (link). This allows a container in the pod to see the other containers' processes (note the implications).
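A minimal sketch of what that looks like in a pod spec (the images are placeholders):

```yaml
# Sketch: sharing the process namespace between the containers of a pod.
# shareProcessNamespace is the real PodSpec field; the images are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true      # all containers see each other's processes
  containers:
    - name: app
      image: example.com/my-app:1.0
    - name: debug-sidecar
      image: busybox:1.36
      command: ["sleep", "3600"]   # run `ps aux` in this container to see the app's processes
```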
Note that sharing namespaces might make some work easier, but it can also introduce the risk of exposing information that you do not want to expose, for example configuration files that you did not intend to share.