I am trying to grasp the concepts of Waypoint, and while I think I have a modest understanding of the principles from the official documentation and tutorials, I am still failing to see how I could introduce Waypoint to my team (10 people).
We are using a local Kubernetes cluster with its configuration stored in one repo containing the YAML files for each service. This configuration is deployed automatically by GitHub Actions on "PR merge to master" (after checks and peer approval).
Each service has its own Git repository, and it is up to each repo to build its artifact (binary and Docker image) itself and push it to our own (local) Docker registry (of course using GH Actions).
We like that the k8s repo is the only source of truth (for auditability) and that nobody has direct access to the k8s cluster (except some break-glass and the CI/CD accounts).
The major pain is when we need to bump the version of one service: we need to wait for the Docker image to be built, find the new Docker image ID, copy/paste it into one of the YAML files of the k8s repo, and wait for the deployment.
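To illustrate, every bump comes down to hand-editing one line like this (service name, registry address, and tag are made up):

```yaml
# deployment.yaml in the k8s repo (names illustrative): the image tag
# below is the value we copy/paste by hand after each build
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.local:5000/my-service:3f9c2a1  # <- manual bump
```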
I am sure there are a lot of anti-patterns here (I welcome any feedback on best practices, by the way).
Is it possible to use Waypoint without giving the team direct administrative access to k8s?
My only idea would be to have a single waypoint repo composed of Git submodules pointing to each service, plus a CI/CD script calling Waypoint in each folder. I guess that would work, but I would rather stay far away from Git submodules if possible.
Would you have any other ideas, please?
Thank you very much.
PS: I am only talking about k8s here, but we have exactly the same problem with our AWS configuration (backed by Terraform).
Hello @nicob, this post caught my attention after @SunSparc's recent reply! We actually wrote up a use case a couple of months back on using Waypoint with GitHub Actions, and I recommend that you check it out! GitHub Actions is definitely capable of running Waypoint and doing builds and deployments that way, in CI/CD.
But also, regarding k8s specifically: with Waypoint, operations (builds, deployments, releases, pipeline runs) are done in Kubernetes using runners. You may install a "static runner" into k8s using the command waypoint runner install with the relevant config flags. This static runner then launches "on-demand runners" to do the work of Waypoint job operations. These on-demand runners are launched under a service account and are granted service account permissions of their own, enabling them to create deployments in the cluster, so your team never needs direct cluster access. The configuration used by the "task launcher" plugin for Kubernetes is documented here.
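As a rough sketch, the install might look something like this (the server address and namespace are assumptions, and flag names can vary by version, so verify against your CLI):

```sh
# Install a static runner into the cluster; check
# `waypoint runner install -help` for the exact flags in your version
waypoint runner install \
  -platform=kubernetes \
  -server-addr=waypoint.internal.example:9701 \
  -k8s-namespace=waypoint
```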
On a more general level though, Waypoint does address this problem:
The major pain is when we need to bump the version of one service: we need to wait for the Docker image to be built, find the new Docker image ID, copy/paste it into one of the YAML files of the k8s repo, and wait for the deployment.
A Waypoint build will push an artifact to a registry. That artifact (in your case, a Docker image) has an image name and tag associated with it. These details are automatically made available to the next phase of the Waypoint lifecycle, deploy, which uses them to deploy the artifact to the configured platform (Kubernetes) correctly, so there is no image ID to find and copy/paste.
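As a minimal sketch of what that looks like in a waypoint.hcl (the project name, registry address, and namespace are assumptions):

```hcl
project = "my-service"

app "my-service" {
  build {
    # Build the image from the Dockerfile in this repo
    use "docker" {}

    # Push to the local registry; the resulting name + tag become the
    # "artifact" that the deploy phase receives automatically
    registry {
      use "docker" {
        image = "registry.local:5000/my-service"
        tag   = gitrefpretty() # e.g. the current git tag or short SHA
      }
    }
  }

  deploy {
    # The kubernetes plugin reads the artifact details from the build,
    # so no image ID ever has to be copied by hand
    use "kubernetes" {
      namespace = "default"
    }
  }
}
```

Note that the deploy stanza never names the image: Waypoint threads the artifact details from build to deploy on its own, which is exactly the manual step you described.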
Thank you for your feedback and for taking the time to consider Waypoint.
Introducing Waypoint to your team without giving direct Kubernetes administrative access is possible. You can consider the following approach:
Waypoint Deployment Repositories: Instead of using Git submodules, you can create a separate Waypoint deployment repository for each service. Each repository would contain the configuration and deployment files specific to that service (typically its waypoint.hcl).
CI/CD Pipeline: Set up a CI/CD pipeline in each service's repository that builds the artifact (binary and Docker image), pushes it to your local Docker registry, and then triggers the deployment using Waypoint (see the workflow sketch after this list). This way, each service manages its own deployment process.
Central Configuration Management: You can still keep the core Kubernetes and AWS configuration files in your existing repositories to maintain a single source of truth (for auditability), but the deployment-specific configuration should reside in the Waypoint deployment repositories.
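As a sketch of the per-service CI/CD pipeline described above, using GitHub Actions (the secret names, Waypoint version, and trigger are assumptions to adapt):

```yaml
# .github/workflows/deploy.yml in each service repo (sketch; the
# secret names and pinned Waypoint version are assumptions)
name: build-and-deploy
on:
  push:
    branches: [master]

jobs:
  waypoint:
    runs-on: ubuntu-latest
    env:
      # Credentials for the shared Waypoint server, held only by CI
      WAYPOINT_SERVER_ADDR: ${{ secrets.WAYPOINT_SERVER_ADDR }}
      WAYPOINT_SERVER_TOKEN: ${{ secrets.WAYPOINT_SERVER_TOKEN }}
      WAYPOINT_SERVER_TLS: "true"
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/action-setup-waypoint@v1
        with:
          version: "0.11.4"
      # Validate the project against the server, then build, push,
      # and deploy in one step
      - run: waypoint init
      - run: waypoint up
```

Only the CI job and the Waypoint server hold cluster-facing credentials, so this preserves your rule that nobody on the team accesses the cluster directly.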