Helm is a good-enough tool to deploy applications in Kubernetes, but one of its main flaws is its server-side component, Tiller, which in most setups is granted the cluster-admin role.

K8Spin shares a Kubernetes cluster among many users. This is why Helm’s Tiller cannot have such broad permissions.

We have written an article on Medium describing this problem.

How to use Helm in K8Spin?

Assuming you have already downloaded your namespace configuration file (let’s name it kubernetes.config), installed the Kubernetes client (kubectl) and Helm locally, and have a namespace named angelbarrerasanchez-gmail-com-helm:

$ cd /tmp
$ ls kubernetes.config
# Let's configure the Kubernetes client with the kubernetes.config file
$ export KUBECONFIG=$(pwd)/kubernetes.config
$ kubectl get ns angelbarrerasanchez-gmail-com-helm
NAME                                 STATUS   AGE
angelbarrerasanchez-gmail-com-helm   Active   2m25s
$ helm version --client
Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}

Deploy Helm’s Tiller

Tiller is the in-cluster component of Helm. It interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources. It also stores the objects that represent releases.

Source: https://helm.sh/docs/glossary/#tiller

To deploy Helm’s Tiller, we execute helm init:

$ helm init --service-account angelbarrerasanchez-gmail-com-helm --tiller-namespace angelbarrerasanchez-gmail-com-helm
$HELM_HOME has been configured at /home/angel/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Reviewing the executed command, we can see two options:

  • --service-account: Indicates the service account (that is, the permissions) Tiller will use to operate in the cluster. Its value is a service account created when the namespace was generated.
  • --tiller-namespace: Indicates the namespace where Helm’s Tiller will live. We’ll use our own. It coincidentally has the same name as the service account. 😲
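For context, a namespace-scoped service account like this one is typically bound to a Role rather than a ClusterRole, so Tiller can only act inside its own namespace. A minimal sketch of what such objects could look like (the Role and RoleBinding names are illustrative, not the exact objects K8Spin creates):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: angelbarrerasanchez-gmail-com-helm
  namespace: angelbarrerasanchez-gmail-com-helm
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: angelbarrerasanchez-gmail-com-helm
rules:
  # Broad access, but only inside this namespace
  - apiGroups: ["", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: angelbarrerasanchez-gmail-com-helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager
subjects:
  - kind: ServiceAccount
    name: angelbarrerasanchez-gmail-com-helm
    namespace: angelbarrerasanchez-gmail-com-helm
```

Because the binding targets a Role, any attempt by Tiller to create cluster-scoped objects will be rejected by the API server.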

We can see the Tiller pod running in our namespace:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-6f848695db-gvq4z   1/1     Running   0          35s

Test it

Let’s try a very simple chart developed by Bitnami that deploys an NGINX server.

First, we configure the Bitnami repository in Helm:

$ helm repo add bitnami https://charts.bitnami.com
"bitnami" has been added to your repositories

Then, we can deploy the Bitnami NGINX chart into our namespace.

$ helm install --name hello --set service.type=ClusterIP bitnami/nginx --tiller-namespace angelbarrerasanchez-gmail-com-helm
NAME:   hello
LAST DEPLOYED: Sat Apr 20 16:28:32 2019
NAMESPACE: angelbarrerasanchez-gmail-com-helm

==> v1/Service
hello-nginx  ClusterIP  <none>       80/TCP   0s

==> v1beta1/Deployment
hello-nginx  1        1        1           0          0s

==> v1/Pod(related)
NAME                          READY  STATUS             RESTARTS  AGE
hello-nginx-646dd5f4b8-6k294  0/1    ContainerCreating  0         0s

You may have noticed that we changed a chart configuration parameter (--set service.type=ClusterIP). By default, this chart tries to create a LoadBalancer service, which is not currently available in K8Spin. For this reason, the exposed service is of type ClusterIP.
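As an alternative to the --set flag, the same override can be kept in a values file. A minimal sketch (the file name is illustrative):

```yaml
# nginx-values.yaml: override the chart's default service type,
# since LoadBalancer services are not available in a shared namespace
service:
  type: ClusterIP
```

It would then be passed to the install command with --values nginx-values.yaml instead of --set, which is more convenient once you have several overrides.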

You can view helm releases with the following command:

$ helm ls --tiller-namespace angelbarrerasanchez-gmail-com-helm
hello	1       	Sat Apr 20 16:28:32 2019	DEPLOYED	nginx-2.2.1	1.14.2     	angelbarrerasanchez-gmail-com-helm
Extra: Accessing a ClusterIP-type service

We will use port-forward to access the service.

First we identify the name of the pod we have deployed and the port it exposes:

$ kubectl get pods,endpoints -l release=hello
NAME                               READY   STATUS    RESTARTS   AGE
pod/hello-nginx-646dd5f4b8-6k294   1/1     Running   0          20m

NAME                    ENDPOINTS         AGE
endpoints/hello-nginx   20m

Having identified the pod (hello-nginx-646dd5f4b8-6k294) and the port (8080), we execute a port-forward to the pod.

$ kubectl port-forward hello-nginx-646dd5f4b8-6k294 12345:8080
Forwarding from 127.0.0.1:12345 -> 8080
Forwarding from [::1]:12345 -> 8080

On another command line, execute:

$ curl -s localhost:12345 | grep "Thank"
<p><em>Thank you for using nginx.</em></p>

That’s it, simple, isn’t it?

Helm template

You can forget about deploying Helm’s Tiller in your namespace with four simple steps.

Initialize the client:

$ helm init --client-only

Fetch the chart:

$ helm fetch \
  --repo https://kubernetes-charts.storage.googleapis.com \
  --untar \
  --untardir ./charts \
  --version 8.9.2 \
  prometheus

Render manifests:

$ helm template \
  --values prometheus-values.yaml \
  --output-dir ./manifests \
  ./charts/prometheus
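The prometheus-values.yaml file referenced above is something you provide yourself. A minimal sketch for a shared namespace (the keys are assumptions based on the stable/prometheus chart defaults, so check them against your chart version):

```yaml
# prometheus-values.yaml: illustrative overrides for a multi-tenant
# namespace; keys assumed from the stable/prometheus chart
rbac:
  create: false      # avoid creating cluster-level permissions
server:
  service:
    type: ClusterIP  # LoadBalancer services are not available
```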

Apply manifests:

$ kubectl apply --recursive --filename ./manifests/prometheus

There are some side effects to this approach: if a resource is removed from the template, kubectl apply will not delete it from the cluster.


Keep in mind that you are working in a shared cluster, and most Helm charts are not meant to be used in a multi-tenant cluster.

What does this mean? You won’t be able to deploy, for example, CRDs or charts that create cluster-level permissions (ClusterRoles, ClusterRoleBindings…). This does not prevent you from using Helm (as we have demonstrated in this section). Just keep in mind where you are working.
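Before installing a chart in a shared cluster, you can render it locally (as we did above with helm template) and scan the output for cluster-scoped kinds. A minimal sketch (the function name and the list of kinds are illustrative, not exhaustive):

```shell
# Scan a directory of rendered manifests for cluster-scoped resource
# kinds that a multi-tenant namespace typically cannot create.
scan_cluster_scoped() {
  # $1: directory containing rendered YAML manifests
  if grep -rhE '^kind: (CustomResourceDefinition|ClusterRole|ClusterRoleBinding|PodSecurityPolicy)' "$1"; then
    echo "cluster-scoped resources found"
  else
    echo "no cluster-scoped resources found"
  fi
}

# Example: scan_cluster_scoped ./manifests/prometheus
```

If the scan reports matches, you know in advance that the chart will fail (or need overrides) in your namespace-scoped setup.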

Finally, if you are concerned about Helm and Tiller security, we recommend enabling TLS on Helm’s Tiller. More information is available in the official documentation.