Configuring MicroK8s services

MicroK8s brings up Kubernetes as a number of different services run through systemd. The configuration of these services is read from files stored in the $SNAP_DATA/args directory, which normally points to /var/snap/microk8s/current/args.

To reconfigure a service you will need to edit the corresponding file and then restart MicroK8s. For example, to add debug level logging to containerd:

echo '-l=debug' | sudo tee -a /var/snap/microk8s/current/args/containerd
microk8s stop
microk8s start
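
Because each service also runs as a systemd unit (the unit names are listed below), you can usually confirm that a change took effect by following the unit's journal:

journalctl -u snap.microk8s.daemon-containerd -f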

The following systemd services are run by MicroK8s. Starting with release 1.21, many of the individual services listed below were consolidated into a single kubelite service.

snap.microk8s.daemon-apiserver

The Kubernetes API server validates and configures data for the API objects which include pods, services, replication controllers, and others. The API Server services REST operations and provides the frontend to the cluster’s shared state through which all other components interact.

Starting with release 1.21, daemon-apiserver was consolidated into daemon-kubelite.

The apiserver daemon is started using the arguments in ${SNAP_DATA}/args/kube-apiserver. The service configuration is described in full in the upstream kube-apiserver documentation.
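
For example, to widen the range of ports available to NodePort services, you could append the upstream --service-node-port-range argument and restart. The range below is illustrative:

echo '--service-node-port-range=80-32767' | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver
microk8s stop
microk8s start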

snap.microk8s.daemon-controller-manager

The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In Kubernetes, a controller is a control loop which watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.

Starting with release 1.21, daemon-controller-manager was consolidated into daemon-kubelite.

The kube-controller-manager daemon is started using the arguments in ${SNAP_DATA}/args/kube-controller-manager. For more detail on these arguments, see the upstream kube-controller-manager documentation.
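
As an illustration, the upstream --terminated-pod-gc-threshold argument could be appended in the same way, followed by a restart of MicroK8s as shown earlier. The value below is arbitrary:

echo '--terminated-pod-gc-threshold=100' | sudo tee -a /var/snap/microk8s/current/args/kube-controller-manager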

snap.microk8s.daemon-proxy

The Kubernetes network proxy runs on each node. This reflects services as defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.

Starting with release 1.21, daemon-proxy was consolidated into daemon-kubelite.

The kube-proxy daemon is started using the arguments in ${SNAP_DATA}/args/kube-proxy. For more details, see the upstream kube-proxy documentation.
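
For example, to expose kube-proxy metrics on all interfaces, you could append the upstream --metrics-bind-address argument, then restart MicroK8s as shown earlier. Whether binding to all interfaces is appropriate depends on your environment:

echo '--metrics-bind-address=0.0.0.0:10249' | sudo tee -a /var/snap/microk8s/current/args/kube-proxy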

snap.microk8s.daemon-scheduler

The Kubernetes scheduler assigns workloads to nodes, taking into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary.

Starting with release 1.21, daemon-scheduler was consolidated into daemon-kubelite.

The kube-scheduler daemon is started using the arguments in ${SNAP_DATA}/args/kube-scheduler. These are explained fully in the upstream kube-scheduler documentation.
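
For instance, to raise the scheduler's log verbosity you could append the standard -v flag, then restart MicroK8s as shown earlier:

echo '-v=4' | sudo tee -a /var/snap/microk8s/current/args/kube-scheduler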

snap.microk8s.daemon-kubelet

The kubelet is the primary “node agent” that runs on each node. The kubelet takes a set of PodSpecs (a YAML or JSON object that describes a pod) and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.

Starting with release 1.21, daemon-kubelet was consolidated into daemon-kubelite.

The kubelet daemon is started using the arguments in ${SNAP_DATA}/args/kubelet. These are fully documented in the upstream kubelet documentation.
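
A common kubelet tweak is raising the pod limit per node with the upstream --max-pods argument, then restarting MicroK8s as shown earlier. The value below is illustrative:

echo '--max-pods=200' | sudo tee -a /var/snap/microk8s/current/args/kubelet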

snap.microk8s.daemon-kubelite

Used in release 1.21 and later. The kubelite daemon runs the scheduler, controller, proxy, kubelet, and apiserver services as subprocesses. Each of these individual services can be configured using arguments in the matching file under ${SNAP_DATA}/args/, as shown in the example after this list:

  • scheduler: ${SNAP_DATA}/args/kube-scheduler
  • controller: ${SNAP_DATA}/args/kube-controller-manager
  • proxy: ${SNAP_DATA}/args/kube-proxy
  • kubelet: ${SNAP_DATA}/args/kubelet
  • apiserver: ${SNAP_DATA}/args/kube-apiserver
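
Because these services run inside a single daemon, editing any of the files above requires restarting kubelite for the change to apply. Running microk8s stop and microk8s start works; alternatively, assuming the default snap service naming, you can restart just the kubelite service:

sudo snap restart microk8s.daemon-kubelite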

snap.microk8s.daemon-containerd

Containerd is the container runtime used by MicroK8s to manage images and execute containers.

The containerd daemon is started using the configuration in ${SNAP_DATA}/args/containerd and ${SNAP_DATA}/args/containerd-template.toml.
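
Since containerd is not part of kubelite, it can be restarted on its own after a configuration change, which avoids bouncing the whole cluster. Assuming the default snap service naming:

sudo snap restart microk8s.daemon-containerd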

snap.microk8s.daemon-k8s-dqlite

The k8s-dqlite daemon runs the dqlite datastore that is used to store the state of Kubernetes. In clusters with three or more control plane nodes, this daemon ensures the high availability of the datastore.

The k8s-dqlite daemon is started using the arguments in ${SNAP_DATA}/args/k8s-dqlite.
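
To see how the datastore is currently configured on a node, inspect that file directly:

cat /var/snap/microk8s/current/args/k8s-dqlite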

snap.microk8s.daemon-etcd

Etcd is a key/value datastore used to support the components of Kubernetes.

Etcd runs if ha-cluster is disabled. If ha-cluster is enabled, dqlite is run instead of etcd.

The etcd daemon is started using the arguments in ${SNAP_DATA}/args/etcd. For more information on the configuration, see the etcd documentation. Note that different channels of MicroK8s may use different versions of etcd.
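
If you are unsure which datastore (or CNI) daemons are active on a node, snap can list the state of every MicroK8s service; the exact output will vary by release and enabled add-ons:

snap services microk8s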

snap.microk8s.daemon-flanneld

Flannel is a CNI which gives a subnet to each host for use with container runtimes.

Flanneld runs if ha-cluster is not enabled. If ha-cluster is enabled, calico is run instead.

The flannel daemon is started using the arguments in ${SNAP_DATA}/args/flanneld. For more information on the configuration, see the flannel documentation.

calico-node

Calico is a CNI which provides networking services. Calico runs on each node. calico-node is not managed by systemd.

Calico runs if ha-cluster is enabled. If ha-cluster is not enabled, Flannel runs instead.

snap.microk8s.daemon-traefik and snap.microk8s.daemon-apiserver-proxy

The traefik and apiserver-proxy daemons are used on worker nodes as a proxy to all API server control plane endpoints. The traefik daemon was replaced by the apiserver-proxy daemon in the 1.25+ releases.

The most significant configuration option for both daemons is the list of API server endpoints found in ${SNAP_DATA}/args/traefik/provider.yaml. For the apiserver-proxy daemon (1.25 onwards), the refresh frequency of the available control plane endpoints can be set in ${SNAP_DATA}/args/apiserver-proxy via the --refresh-interval parameter.
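
For example, to make a worker node re-check the control plane endpoints more often, you could append the --refresh-interval parameter mentioned above and restart. The interval value and its format are illustrative:

echo '--refresh-interval=30s' | sudo tee -a /var/snap/microk8s/current/args/apiserver-proxy
microk8s stop
microk8s start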