After a normal user authenticates, they have access to everything.
To limit what they can do, you need to configure authorization.
There are multiple authorization modes:
AlwaysAllow / AlwaysDeny
ABAC (Attribute-Based Access Control)
RBAC (Role Based Access Control)
Webhook (authorization by a remote service)
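The mode is selected via a flag when the API server is started. A minimal sketch (the flag is shown in isolation; a real kube-apiserver invocation needs more flags):
# enable RBAC authorization (several modes can be combined, comma-separated)
$ kube-apiserver --authorization-mode=RBAC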
Service Users
Service Users use Service Account Tokens.
They are stored as credentials using Secrets.
Those Secrets are also mounted in pods to allow communication between the services.
Service Users are specific to a namespace.
They are created automatically by the API or manually using objects (see the example below).
Any API call that is not authenticated is treated as coming from an anonymous user.
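A sketch of creating a Service Account manually (the account name is an assumption):
# create a Service Account in the current namespace
$ kubectl create serviceaccount myserviceaccount
# depending on the cluster version, a token Secret may be generated automatically
$ kubectl get secrets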
Other notes on User Management
Auth is still a work in progress. The demo itself shows the creation of an asymmetric key and updating Minikube to accept it as a user.
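As a rough sketch of that flow (the user name, file names, and Minikube CA paths below are assumptions based on a default Minikube install):
# generate a private key and a certificate signing request
$ openssl genrsa -out alice.key 2048
$ openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=developers"
# sign the CSR with the Minikube CA
$ openssl x509 -req -in alice.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial -out alice.crt -days 365
# register the credentials and a context with kubectl
$ kubectl config set-credentials alice --client-certificate=alice.crt --client-key=alice.key
$ kubectl config set-context alice-minikube --cluster=minikube --user=alice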
4.4 RBAC (Role Based Access Control)
Regulates access using roles.
Allows admins to dynamically configure permission policies.
This is what I'll use in the demo.
You add RBAC resources with kubectl from a YAML file (see the example below).
First, define a role, then assign users/groups to that role.
You can create roles limited to a namespace or roles that apply across all namespaces.
Role (single namespace) and ClusterRole (cluster-wide).
RoleBinding (single namespace) and ClusterRoleBinding (cluster-wide).
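A minimal sketch of a namespaced Role and its RoleBinding (the names, namespace, and user are assumptions):
# rbac.yaml: allow reading pods in the default namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

# apply with kubectl
$ kubectl apply -f rbac.yaml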
4.5 Networking
Communication topics already covered:
Container to container: communication within a pod.
Through localhost and the port number.
Pod-To-Service comms
Using NodePort and DNS.
External-To-Service
Using LoadBalancer, NodePort.
Pods
The pod should always be routable.
Kubernetes assumes that pods should be able to communicate with other pods, regardless of which node they are running on.
Every pod has its own IP address.
Pods on different nodes need to be able to communicate with each other using those IP addresses.
This is implemented differently depending on your networking setup.
On AWS: kubenet networking (kops default).
Kubenet Networking
Every pod can get an IP that is routable using the AWS Virtual Private Cloud (VPC).
The Kubernetes master allocates a /24 subnet to each node (254 IP addresses).
The subnet is added to the VPC's route table.
There is a limit of 50 entries in the route table, which means you can't have more than 50 nodes in a single AWS cluster.
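A sketch of creating such a cluster with kops (the cluster name, state store, and zone are assumptions; kubenet is the default networking mode):
# create a cluster on AWS using kubenet networking
$ kops create cluster --name=kubernetes.example.com --state=s3://my-kops-state --zones=eu-west-1a --networking=kubenet
$ kops update cluster kubernetes.example.com --state=s3://my-kops-state --yes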
VPC Alternatives
Not every cloud provider has VPC-tech (GCE and Azure do).
The alternatives, for example for on-premises setups, are:
Container Network Interface (CNI)
Software that provides libraries/plugins for network interfaces within containers.
Popular solutions are Calico, Weave (standalone or with CNI).
An Overlay Network
Flannel is an easy and popular way.
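As a sketch, an overlay like Flannel is usually installed by applying its manifest (the URL below is an assumption and may have changed):
# install the Flannel overlay network
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml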
4.6 Node Maintenance
The Node Controller is responsible for managing the Node objects.
It assigns IP Space to the node when a new node is launched.
It keeps the node list up to date with the available machines.
The node controller also monitors the health of the node.
If a node is unhealthy, it gets deleted.
Pods running on the unhealthy node will then get rescheduled.
Adding a new node
When adding a new node, the kubelet will attempt to register itself. This is called self-registration and is the default behaviour.
It allows you to easily add more nodes to the cluster without making API changes yourself.
A new node object is automatically created with:
The metadata (with a name: IP or hostname).
Labels (e.g. cloud region / availability zone / instance size).
Node conditions (e.g. Ready, OutOfDisk).
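Once a node has registered, you can inspect those fields (the node name is an assumption):
# list nodes with their labels
$ kubectl get nodes --show-labels
# show metadata, labels and conditions of a single node
$ kubectl describe node node1.example.com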
When you want to decommission a node, you want to do it gracefully.
Drain a node before you shut it down or take it out of the cluster.
# drain a node, giving pods up to 600 seconds to terminate gracefully
$ kubectl drain nodename --grace-period=600
# if the node runs pods that are not managed by a controller, the drain must be forced
$ kubectl drain nodename --force
(Screenshot: draining a node in the terminal)
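When maintenance is finished and the node is back, mark it schedulable again:
# allow new pods to be scheduled on the node again
$ kubectl uncordon nodename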
4.7 High Availability
If you are running in production, you will want all master services in high availability.
Setup
Clustering etcd: run at least 3 etcd nodes.
Replicated API servers: with a LoadBalancer.
Running multiple instances of the scheduler and the controllers.
Only one of them will be the leader; the others are on standby.
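As a sketch, leader election is enabled with a flag on the scheduler and the controller manager (shown here started by hand; in practice they usually run as static pods or systemd units with more flags):
# only the elected leader is active, the other instances stand by
$ kube-scheduler --leader-elect=true
$ kube-controller-manager --leader-elect=true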