Testing out IPVS mode with minikube
Starting from Kubernetes version 1.9 there is a promising new IPVS mode in kube-proxy. One of its advantages is the ability to pick a load-balancing method: round-robin, least-connected, source/destination hashing, shortest expected delay, plus some variations of those. What I was interested in were the LC (least-connected) methods, as they’re somewhat better at handling inbound traffic under load than the iptables mode’s per-connection random balancing.
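For reference, these methods map onto the kernel’s IPVS scheduler names, which is also what kube-proxy’s --ipvs-scheduler flag expects:

# IPVS scheduler names, as accepted by kube-proxy's --ipvs-scheduler flag:
#   rr / wrr  - (weighted) round-robin
#   lc / wlc  - (weighted) least-connection
#   sh / dh   - source / destination hashing
#   sed       - shortest expected delay
#   nq        - never queue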
Trying it out
So, to use IPVS one should either pass --proxy-mode ipvs to kube-proxy, or set the equivalent option in its configuration file.
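The config-file route looks roughly like this; a minimal KubeProxyConfiguration sketch, where the ipvs.scheduler field picks the balancing method (lc here):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # picks the load-balancing method; "lc" = least-connected
  scheduler: "lc"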
As it turned out, minikube doesn’t provide a way to pass extra config to kube-proxy…
Adjusting privileged network options for Docker containers
<docker has changed the devops landscape, blah-blah-blah…>
Now, let’s get to the practical aspects of running containers in production.
The challenge
It’s a popular opinion that the default value of net.core.somaxconn (traditionally 128) is too low for heavily loaded web servers (I’m not going to benchmark it, just take it as an example). No probs, let’s set it to a higher value:
sysctl -w net.core.somaxconn=1024
Well, it’s not going to work.
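To see why, try running it inside a stock container: Docker mounts /proc/sys read-only in unprivileged containers, so the write is rejected. The error text below is illustrative; the exact wording varies by image:

$ docker run --rm alpine sysctl -w net.core.somaxconn=1024
sysctl: error setting key 'net.core.somaxconn': Read-only file system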