Archive for the ‘Kubernetes’ Category
Kubernetes e2e tests and feature gates
Today I had to remind myself how the Kubernetes test-infra interacts with features. Unlike with the unit tests, feature gates for the e2e tests are frequently set externally by the CI test definitions rather than by the tests themselves. Tests that rely on features not enabled by default are tagged with [Feature:$name] and excluded from the default presubmit tests.
In my case I was adding a test for an alpha feature to the e2e node tests. SIG Node maintains test configuration that runs tests tagged [NodeAlphaFeature:$name] with --feature-gates=AllAlpha=true, so all I had to do was tag my new tests and remember to set TEST_ARGS="--feature-gates=$name=true" when running locally.
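To make the tagging concrete, here's a minimal sketch of what a feature-tagged node e2e test might look like. MyFeature and the test body are hypothetical, and I'm assuming the ginkgo v2 import path that current Kubernetes uses; the important part is that the tag is just a substring of the test's name, which the CI job definitions match against when deciding what to run.

package e2enode

import (
	"github.com/onsi/ginkgo/v2"
)

// The [NodeAlphaFeature:MyFeature] tag is only part of the test's name.
// Enabling the gate itself happens outside the test, via the CI job's
// --feature-gates flag or TEST_ARGS when running locally.
var _ = ginkgo.Describe("[NodeAlphaFeature:MyFeature] MyFeature", func() {
	ginkgo.It("should work with the feature gate enabled", func() {
		// Test body runs only when the suite was invoked with
		// --feature-gates=MyFeature=true (e.g. via AllAlpha=true in CI).
	})
})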
Ephemeral Containers and Kubernetes 1.22
Today we changed the API for Ephemeral Containers in Kubernetes. It’s a setback for those who were hoping for an Ephemeral Containers beta to get the feature enabled in production clusters, but I’m glad we took the time to change it while the feature is still in alpha. The new API uses the simpler, well-known pattern that the kubelet uses to update Pod status through a separate subresource. It was quick to implement since it’s actually the same as a prior prototype.
SIG Auth requested the change during the 1.21 release cycle to make it easier for Admission Controllers to gate changes to pods, but my favorite part is that the API reference docs will be simpler since we got rid of the EphemeralContainers Kind that was used only for interacting with the ephemeralcontainers subresource.
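For a sense of what the revised pattern looks like from a client, here's a hedged client-go sketch, assuming the post-change signature where UpdateEphemeralContainers on the ephemeralcontainers subresource accepts the Pod itself rather than the old EphemeralContainers Kind. The namespace, pod name, and debugger image are placeholders.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	pods := client.CoreV1().Pods("default")
	pod, err := pods.Get(ctx, "target-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// With the revised API the subresource takes the Pod itself: append
	// the new ephemeral container to the pod's spec and send it back.
	pod.Spec.EphemeralContainers = append(pod.Spec.EphemeralContainers,
		corev1.EphemeralContainer{
			EphemeralContainerCommon: corev1.EphemeralContainerCommon{
				Name:  "debugger",
				Image: "busybox",
			},
		})
	if _, err := pods.UpdateEphemeralContainers(ctx, pod.Name, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}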
It’s a large change, though, so the right thing is to hold the revised API in alpha for at least a release to gather feedback. That means the earliest we’d see an Ephemeral Containers beta is 1.23: pretty far from the 1.7 cycle when we started, and from 1.16, when the feature first landed in alpha. I wonder if that’s a record.
In the meantime, let’s implement all of the feature requests and have nothing left to do in 1.23. Next up: configurable security context.
Sharing Process Namespace in Kubernetes
Kubernetes pods allow cooperation between containers, which can be powerful, but they have always used isolated process namespaces because that’s all Docker supported when Kubernetes was created. This prevented, for example, a logging sidecar from signalling the main process.
I’ve been working with SIG Node to change this, though, and Process Namespace Sharing has been released as an Alpha feature in Kubernetes 1.10. Compatibility within an API version (e.g. v1.Pod) is very important to the Kubernetes community, so we didn’t change the default behavior. Instead we introduced a new field in v1.Pod named ShareProcessNamespace. Try it for yourself!
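If you want to experiment, here's a minimal sketch in Go of a pod spec that opts in, built with client-go's API types. The pod and image names are arbitrary; the only part that matters is the ShareProcessNamespace field, a *bool on the pod spec. Remember that in 1.10 your cluster also needs the corresponding alpha feature gate enabled for the field to take effect.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	share := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-pid-demo"},
		Spec: corev1.PodSpec{
			// Opt in to a shared PID namespace: every container in the
			// pod can now see, and signal, the others' processes.
			ShareProcessNamespace: &share,
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"},
				// With a shared namespace this sidecar could, say, send
				// SIGHUP to the app's process to trigger a reload.
				{Name: "sidecar", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	fmt.Println("shareProcessNamespace:", *pod.Spec.ShareProcessNamespace)
}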
Pods exist to share resources, so it makes sense to share processes as well. I wouldn’t be surprised if process namespace sharing became the default in v2.Pod.
I’d love to hear what you think and whether this feature helps you. Let me know via Kubernetes feature tracking or in the comments below.