The Art of Scaling Containers: Discovery

Lori MacVittie
Published December 28, 2017

Scaling containers is more than just slapping a proxy in front of a service and walking away. There’s more to scale than just distribution, and in the fast-paced world of containers there are five distinct capabilities required to ensure scale: retries, circuit breakers, discovery, distribution, and monitoring.

In this post on the art of scaling containers, we’ll dig into discovery.

Discovery

Traditional models of scale are relatively fixed in configuration. Load balancing proxies use an algorithm to select a resource from a fairly static pool of resources. While selection of the resource (distribution) is a real-time event and may include consideration of just-in-time variables like load and response times, the overall environment is not. Operators add and remove resources based on enterprise-defined processes that govern the increase or decrease of capacity.
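To make that contrast concrete, here is a minimal sketch of the traditional model (in Python, with illustrative names that don't come from any particular product): a pool fixed at configuration time, with only the selection algorithm running in real time.

import itertools

# Traditional model: the pool is fixed at configuration time; only the
# selection (distribution) decision happens in real time.
class StaticPoolProxy:
    def __init__(self, pool):
        self.pool = list(pool)                 # static, operator-managed
        self._rr = itertools.cycle(self.pool)  # round-robin algorithm

    def select(self):
        """Pick the next resource; the pool itself never changes here."""
        return next(self._rr)

proxy = StaticPoolProxy(["10.0.0.1:8080", "10.0.0.2:8080"])
print(proxy.select())  # 10.0.0.1:8080

Any change to the pool itself happens outside this code path, through those enterprise-defined operator processes.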

Modern models of scale are volatile and dynamic by nature. The premise of scale in modern systems is based on “auto” everything. Auto-scale, a concept made concrete by cloud, forces the notion of expanding and contracting pools of resources based on demand. But this is only a half-step from traditional models, as the sketch below illustrates. There still exists the notion of a “core” pool of resources from which a load balancing algorithm makes a selection. That pool may be expanded or contracted dynamically, but it is still constrained by the fairly traditional model in which it operates.
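A rough sketch of that half-step, with hypothetical helpers standing in for cloud provisioning APIs: the pool grows and shrinks around demand thresholds, but selection still runs against whatever the core pool holds at that moment.

import random

def provision_instance():
    """Stand-in for a cloud API call that launches a new instance."""
    return f"10.0.1.{random.randint(2, 250)}:8080"

def decommission(resource):
    """Stand-in for a cloud API call that terminates an instance."""
    print(f"terminating {resource}")

class AutoScaledPool:
    def __init__(self, resources, min_size=2, max_size=10):
        self.resources = list(resources)
        self.min_size, self.max_size = min_size, max_size

    def rebalance(self, load_per_resource, target=0.7):
        """Expand under load, contract when idle -- still a 'core' pool."""
        if load_per_resource > target and len(self.resources) < self.max_size:
            self.resources.append(provision_instance())
        elif load_per_resource < target / 2 and len(self.resources) > self.min_size:
            decommission(self.resources.pop())

pool = AutoScaledPool(["10.0.0.1:8080", "10.0.0.2:8080"])
pool.rebalance(load_per_resource=0.9)  # demand spike: the pool grows to three
print(pool.resources)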

Containers – and in particular container orchestration environments – break out of that mold completely. The concept of an endpoint (the ingress) is one of the few ‘fixed’ elements in container-based scale, with everything else from routing rules to the resources that make up its supporting pool considered highly variable. Theoretically, containers may live only seconds. In practical application, however, they generally tend to live for days. Compared to traditional models where resources lived for months (and sometimes years), this lifespan puts pressure on operators that cannot be borne without automation.

Discovery is therefore a critical component for scaling containers. Load balancing proxies must be able to select a resource from a pool of resources that may or may not be the same as it was a minute ago. Most container orchestration environments, then, include a ‘registry’ of resources that aids proxies in the real-time discovery of resources to ensure scale.
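In code, that shift looks something like the sketch below (again, illustrative names only, not any specific registry's API): the proxy rebuilds its pool from the registry at selection time, so a container that deregistered a moment ago is never chosen.

class Registry:
    """Stand-in for an orchestrator's service registry; containers (or the
    orchestrator on their behalf) register and deregister endpoints as
    they come and go."""
    def __init__(self):
        self._services = {}

    def register(self, service, endpoint):
        self._services.setdefault(service, set()).add(endpoint)

    def deregister(self, service, endpoint):
        self._services.get(service, set()).discard(endpoint)

    def lookup(self, service):
        return sorted(self._services.get(service, set()))

class DiscoveringProxy:
    """The pool is re-read from the registry rather than held in a static
    configuration."""
    def __init__(self, registry, service):
        self.registry, self.service = registry, service
        self._index = 0

    def select(self):
        pool = self.registry.lookup(self.service)  # real-time discovery
        if not pool:
            raise RuntimeError(f"no live endpoints for {self.service}")
        endpoint = pool[self._index % len(pool)]
        self._index += 1
        return endpoint

registry = Registry()
registry.register("checkout", "10.2.0.5:8080")
registry.register("checkout", "10.2.0.6:8080")
proxy = DiscoveringProxy(registry, "checkout")
print(proxy.select())  # 10.2.0.5:8080
registry.deregister("checkout", "10.2.0.5:8080")
print(proxy.select())  # 10.2.0.6:8080 -- the departed endpoint is never picked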

This process requires proxies not only to be integrated into the container orchestration ecosystem, but to speak the lingua franca of these environments: declarative configuration files (resource files) and metadata are used within container orchestration environments to allow proxies to identify the resources attached to the services they are scaling. This allows proxies to adjust their “pools” of resources automatically and in real time, and to avoid the embarrassment of selecting a resource that has, regrettably, already been shut down.
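To illustrate the metadata side (with hypothetical records, not any specific orchestrator's schema): a label selector in a declarative configuration is what binds resources to a service's pool, so membership is derived from metadata rather than hand-edited.

# Hypothetical container records carrying orchestrator-assigned metadata.
containers = [
    {"ip": "10.2.0.5:8080", "labels": {"app": "checkout", "track": "stable"}},
    {"ip": "10.2.0.6:8080", "labels": {"app": "checkout", "track": "canary"}},
    {"ip": "10.2.0.7:8080", "labels": {"app": "catalog",  "track": "stable"}},
]

def matches(labels, selector):
    """A resource belongs to a pool when its labels satisfy the selector
    declared for the service."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"app": "checkout", "track": "stable"}
pool = [c["ip"] for c in containers if matches(c["labels"], selector)]
print(pool)  # ['10.2.0.5:8080']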

Without discovery, there is no efficient way for a load balancing proxy to scale applications and services in a container environment. No operator could keep pace with the changes in such an environment using manual methods of configuration. Merely keeping track of which resources are active – and attached to any given service – would consume most of their time, leaving little time to actually make the changes required.

Discovery provides the means by which components in a container orchestration environment can operate in real-time, and enables load balancing proxies to scale apps and services by ensuring they have the information they need to select resources on-demand.