Supplementary Notes to Deploy Argo-Events in Managed Namespace Scope

Antonio Si
Feb 8, 2023

— coauthored with Prema Kuppuswamy

Argo-Events is an event-driven workflow automation framework for Kubernetes. There are two options to install Argo-Events: cluster scoped or namespace scoped. When installing Argo-Events namespace scoped, there is an option to run the Argo-Events controller in the default argo-events namespace while the event sources, sensors, and eventbuses are installed in another namespace. This short article supplements the Argo-Events documentation on how Argo-Events can be deployed to manage Argo-Events resources in another namespace. Note that we will not go into the details of Argo-Events itself; please refer to the Argo-Events documentation for more details.

Setup

To illustrate the installation steps and the problems we encountered during our first installation attempt, we need a simple Kubernetes cluster. We use docker-desktop to create a local Kubernetes cluster, but one can just as easily use minikube, k3d, or any other Kubernetes cluster.

We create two namespaces, argo-events and runtime-test. The namespace argo-events is the default namespace for running the Argo-Events controller, while runtime-test is where we intend to deploy all our eventsources, eventbus, and sensors.
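
For reference, the two namespaces can be created with kubectl:

kubectl create namespace argo-events
kubectl create namespace runtime-test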

kubectl get namespaces
NAME              STATUS   AGE
argo-events       Active   52s
default           Active   11m
kube-node-lease   Active   11m
kube-public       Active   11m
kube-system       Active   11m
runtime-test      Active   42s

Test 1

In order to deploy all our Argo-Events manifests in the runtime-test namespace, we need to add the --managed-namespace parameter to the controller arguments in the deployment section of the namespace-install.yaml manifest, as described in the Argo-Events documentation:

      - args:
        - controller
        - --namespaced
        - --managed-namespace
        - runtime-test
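
With the manifest edited, it can be applied into the argo-events namespace. A minimal sketch, assuming the namespace-install.yaml manifest has been downloaded locally:

kubectl apply -n argo-events -f namespace-install.yaml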

We also need to create RBAC resources in the runtime-test namespace for the managed-namespace setting to work. If we apply the namespace-install.yaml manifest without creating the proper RBAC in the runtime-test namespace, we will run into permission errors in the Argo-Events controller-manager pod. To illustrate, we deploy the Jetstream eventbus in the runtime-test namespace and observe the following error in the controller-manager pod:

W0121 19:17:39.604371       1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:argo-events:argo-events-sa" cannot list resource "services" in API group "" in the namespace "runtime-test"

This error indicates that the argo-events-sa serviceaccount from the argo-events namespace is not able to access resources in the managed namespace, runtime-test. Some RBAC resources are missing.
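
The controller logs can be inspected directly; the command below assumes the deployment keeps its default name, controller-manager, from the namespace-install.yaml manifest:

kubectl -n argo-events logs deploy/controller-manager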

Test 2

To address this problem, we copy the argo-events-role defined in the namespace-install.yaml manifest and create a similar role in runtime-test called argo-events-runtime-role:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-events-runtime-role
  namespace: runtime-test
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - get
  - list
...

We then create a RoleBinding that binds argo-events-runtime-role to the argo-events-sa serviceaccount defined in the argo-events namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-events-cross-runtime-role-binding
  namespace: runtime-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argo-events-runtime-role
subjects:
- kind: ServiceAccount
  name: argo-events-sa
  namespace: argo-events

This allows the argo-events-sa serviceaccount in the argo-events namespace to access the resources in the runtime-test namespace. It is preferable to create these Role and RoleBinding manifests in the runtime-test namespace before the Argo-Events controller-manager deployment is created, to avoid the RBAC error.
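
A quick way to confirm the binding works is to impersonate the controller's serviceaccount with kubectl auth can-i, assuming the copied role carries over the services rules from argo-events-role:

kubectl auth can-i list services \
  --as=system:serviceaccount:argo-events:argo-events-sa \
  -n runtime-test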

If one needs to run the sensor under a different service account, such as operate-workflow-sa, the following RBAC resources are also needed in the runtime-test namespace (a sketch of how the sensor references this serviceaccount follows the manifests):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: operate-workflow-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operate-workflow-role
rules:
- apiGroups:
  - argoproj.io
  verbs:
  - "*"
  resources:
  - workflows
  - workflowtemplates
  - cronworkflows
  - clusterworkflowtemplates
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operate-workflow-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: operate-workflow-role
subjects:
- kind: ServiceAccount
  name: operate-workflow-sa
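
The sensor then needs to reference this serviceaccount in its template. The following is a minimal sketch based on the webhook sensor example in the Argo-Events documentation; only the relevant fields are shown:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
  namespace: runtime-test
spec:
  template:
    # run the sensor pod under the serviceaccount created above
    serviceAccountName: operate-workflow-sa
  ...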

Verify

Once the proper RBAC resources are created in the runtime-test namespace, we can deploy the Jetstream eventbus, the sample webhook eventsource, and the sample webhook sensor there. Following the example in the Argo-Events documentation, we can send an event to the eventsource and see the workflow get triggered by the sensor.
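
As a sketch, following the webhook example in the Argo-Events documentation (port 12000 and the /example endpoint come from that example), the event can be sent as follows:

kubectl -n runtime-test port-forward $(kubectl -n runtime-test get pod -l eventsource-name=webhook -o name) 12000:12000 &
curl -d '{"message":"this is my first webhook"}' -H "Content-Type: application/json" -X POST http://localhost:12000/example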

kubectl -n runtime-test get all
NAME                                             READY   STATUS      RESTARTS   AGE
pod/eventbus-default-js-0                        3/3     Running     0          7m28s
pod/eventbus-default-js-1                        3/3     Running     0          7m28s
pod/eventbus-default-js-2                        3/3     Running     0          7m28s
pod/webhook-eventsource-8652m-585649fdff-6klt2   1/1     Running     0          4m37s
pod/webhook-h5wpp                                0/2     Completed   0          13s
pod/webhook-sensor-9b6th-558898c89f-whsjp        1/1     Running     0          103s
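
The triggered workflow itself can also be listed, assuming Argo Workflows is installed in the cluster:

kubectl -n runtime-test get workflows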
