
Prow + Kubernetes - A Perfect Combination To Execute CI/CD At Scale

Intro

Kubernetes is currently the de facto standard for deploying workloads in the cloud. It’s well-suited for companies and vendors that need self-healing, high availability, cloud-agnostic behavior, and easy extensibility.

Meanwhile, a gap has emerged in the CI/CD domain: teams using Kubernetes as the underlying orchestrator need a robust CI/CD tool that is entirely Kubernetes-native.

Enter Prow

Prow complements the Kubernetes family in the realm of automation and CI/CD.

In fact, it is arguably the project that best exemplifies why and how Kubernetes is such a superb platform for executing CI/CD at scale.

Prow (meaning: the portion of a ship’s bow, the ship’s front end, that’s above water) is a Kubernetes-native CI/CD system, and it has been used by many projects over the past few years, like Kyma, Istio, Kubeflow, OpenShift, etc.

Where did it come from?

Kubernetes is one of the largest and most successful open-source projects on GitHub. At the time of Prow’s conception, the Kubernetes community was trying hard to keep its head above water in matters of CI/CD. Its needs included executing more than 10k CI/CD jobs per day, spanning 100+ repositories across various GitHub organizations, and other automation stacks were simply not capable of handling everything at this scale.

So, the Kubernetes Testing SIG built its own tooling, with Prow at its core. Because Prow currently resides under the Kubernetes test-infra project, one might underestimate its true capabilities. I personally would like to see Prow receive a dedicated repo, out from under the umbrella of test-infra.

What is Prow?

Prow is not too complex to understand, but it is vast in a subtle way. It is designed and built as a distributed microservice architecture native to Kubernetes.

It has many components that integrate with one another (Plank, Hook, etc.) and a bunch of standalone ones that are more plug-and-play in nature (trigger, config-updater, etc.).

For the context of this blog, I will not be covering Prow’s entire architecture, but feel free to dive into it on your own later. 

Just to name the main building blocks for Prow:

  • Hook - acts as an API gateway that intercepts all webhook events from GitHub; it creates a ProwJob custom resource based on the job configuration and calls the relevant plugins as needed.
  • Plank - the ProwJob controller; after Hook creates a ProwJob, Plank processes it and creates a Kubernetes pod to run the tests.
  • Deck - serves as the UI for jobs that ran in the past or are currently running.
  • Horologium - the component that triggers periodic jobs.
  • Sinker - responsible for cleaning up old jobs and pods from the cluster.

More can be found here: Prow Architecture. Note that this link is not the official Kubernetes doc but comes from another great open-source project that uses Prow extensively day in, day out - Kyma.

This is how Prow can be pictured:



Here is a list of things Prow can do and why it was conceived in the first place.

  • GitHub Automation on a wide range

    - ChatOps via slash commands like “/foo”
    - Fine-tuned policies and permission management in GitHub via OWNERS files
    - tide - PR/merge automation
    - ghproxy - a GitHub API request cache, used to avoid hitting API rate limits
    - label plugin - label management
    - branchprotector - branch protection configuration
    - releasenote - release notes management
  • Job execution engine - Plank
  • Job status reporting to the CI/CD dashboard - crier
  • Dashboards for comprehensive job/PR history, merge status, real-time logs, and other statuses - Deck
  • Plug-n-play service to interact with GCS and show job artifacts on the dashboard - Spyglass
  • Super easy, pluggable Prometheus stack for observability - metrics
  • Config-as-code for Prow itself - updateconfig
  • And many more, like sinker, branchprotector, etc.

Possible Jobs in Prow

Here, a job means any “task that is executed in response to a trigger.” The trigger can be anything from a GitHub commit or a new PR to a periodic cron schedule. Possible job types in Prow include (a sketch of how they are declared follows the list):

  • Presubmit - triggered when a new GitHub PR is created or updated.
  • Postsubmit - triggered when a new commit lands on a branch (e.g., a PR is merged).
  • Periodic - triggered on a cron-like schedule.
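
Here is a minimal sketch of how the three types are declared in a Prow job config. The repo, job names, and image below are hypothetical placeholders, not taken from the starter repo:

```yaml
# Hypothetical job definitions illustrating the three trigger types.
presubmits:
  my-org/my-repo:                  # presubmits are keyed by the repo they guard
    - name: pull-my-repo-test      # runs on every PR against my-org/my-repo
      decorate: true               # injects clone/upload utilities into the pod
      spec:
        containers:
          - image: golang:1.21
            command: ["go", "test", "./..."]

postsubmits:
  my-org/my-repo:
    - name: post-my-repo-build     # runs when commits land on a branch
      decorate: true
      spec:
        containers:
          - image: golang:1.21
            command: ["go", "build", "./..."]

periodics:
  - name: periodic-my-repo-check   # runs on a schedule, independent of GitHub events
    interval: 24h
    decorate: true
    spec:
      containers:
        - image: golang:1.21
          command: ["go", "version"]
```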

Possible states for a job

  • triggered - a new ProwJob custom resource has been created from the job config
  • pending - a pod has been created for the ProwJob to run the scripts/tests; the ProwJob stays pending while the pod is being created and while it runs
  • success - if the pod succeeds, the ProwJob status changes to success
  • failure - if the pod fails, the ProwJob status is marked failure
  • aborted - if the same job is retriggered while it is still running, the first ProwJob execution is aborted, its status changes to aborted, and the new one is marked pending
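
Since ProwJobs are ordinary Kubernetes custom resources, you can watch these state transitions directly with kubectl (assuming your kubeconfig points at the cluster running Prow):

```sh
# List all ProwJobs and watch their states change over time
kubectl get prowjobs -w
```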

What a job config looks like:

CODE: https://gist.github.com/velotiotech/866aa7cad2d12edeaaafa4aed3bd9fb5.js

  • Here, this job is of the “presubmit” type, meaning it will be executed when a PR is created against the “master” branch of the “kubernetes/community” repo.
  • As shown in the spec, a pod will be created from the “golang” image, the repo will be cloned into it, and the given command will be executed when the container starts.
  • The exit status of that command decides whether the pod succeeded or failed, which, in turn, decides whether the Prow job completed successfully.

More job configs used by Kubernetes itself can be found here - Jobs

Getting a minimalistic Prow cluster up and running on your local system in minutes

Pre-reqs:

  • Knowledge of Kubernetes 
  • Knowledge of Google Cloud and IAM

For the context of this blog, I have created a sample GitHub repo containing all the basic manifest and config files. Basic CI has also been configured for this repo. Feel free to clone/fork it and use it as a getting-started guide.

Let’s look at the directory structure for the repo:

CODE: https://gist.github.com/velotiotech/8bba48f18ebfed301b2748bbc75625a6.js

1. Create a bot account. For info, look here. Add this bot as a collaborator in your repo. 

2. Create an OAuth2 token (a personal access token) for the bot account from the GitHub UI.

CODE: https://gist.github.com/velotiotech/d53d72c2fedc51bf4f537f022f10dca2.js
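
The gist above holds the exact commands; they boil down to storing the token in a Kubernetes secret that the Prow components read. The secret and key names below follow the common Prow convention, so adjust them to match the manifests in the starter repo:

```sh
# Save the bot's personal access token, then create the secret
echo -n "<BOT_OAUTH_TOKEN>" > oauth-token
kubectl create secret generic oauth-token --from-file=oauth=oauth-token
```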

3. Generate an HMAC token (e.g., with OpenSSL) to be used by Hook to validate GitHub webhooks.

CODE: https://gist.github.com/velotiotech/2045f7d4bfd14bd402cb38d874d485f9.js
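
Again, the gist has the exact steps; a common equivalent (secret and key names assumed, as above) looks like this:

```sh
# Generate a random token that Hook will use to validate webhook payloads
openssl rand -hex 20 > hmac-token
kubectl create secret generic hmac-token --from-file=hmac=hmac-token
```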

4. Install all the Prow components mentioned in prow-starter.yaml.

CODE: https://gist.github.com/velotiotech/ca0dd89b14e426fdc55a379f7b0d4133.js
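
Assuming a local cluster (e.g., minikube) is already running, this is a single apply; afterwards, the components described earlier should show up as pods:

```sh
kubectl apply -f prow-starter.yaml

# hook, plank, deck, horologium, and sinker should appear here
kubectl get pods
```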

5. Update all the jobs and plugins needed for the CI (rules are in the Makefile). Use these commands:

  • Updates in plugins.yaml and presubmits.yaml:
    - Change the repo name (velotio-tech/k8s-prow-guide) to the repo whose jobs you are configuring
  • Updates in config.yaml:
    - Create a GCS bucket
    - Update the name of the GCS bucket (GCS_BUCKET_NAME) in config.yaml
    - Create a service account with GCS storage permissions and download its JSON key (service_account.json)
    - Create a secret from the service_account.json above

CODE: https://gist.github.com/velotiotech/5027392ade10c99da36c6234199fb8ee.js
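
The gist above contains the exact command; generically, creating a secret from a downloaded service-account key looks like this (the secret name must match GCS_SERVICE_ACC in config.yaml):

```sh
kubectl create secret generic <GCS_SERVICE_ACC> \
  --from-file=service-account.json=service_account.json
```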

  • Update the secret name (GCS_SERVICE_ACC) in config.yaml

CODE: https://gist.github.com/velotiotech/f9ecb94e39a4773ab3aaacedb144e8e0.js

6. To expose the webhook from your GitHub repo to your local machine, use UltraHook. Install UltraHook; it will give you a publicly accessible endpoint. In my case, the result looked like this: http://github.sanster23.ultrahook.com

CODE: https://gist.github.com/velotiotech/2c27e5780f06afcd4a0d9140b6c874ce.js
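
The gist shows the exact invocation; UltraHook is distributed as a Ruby gem, and a typical run looks like the sketch below. The namespace and local port are placeholders; point the port at wherever the Hook service is reachable locally (e.g., via kubectl port-forward):

```sh
gem install ultrahook

# Forwards http://github.<your-namespace>.ultrahook.com to localhost:8888
ultrahook github 8888
```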

7. Create a webhook in your repo so that all events can be published to Hook via the public URL above:

  • Set the webhook URL from Step 6
  • Set the content type to application/json
  • Set the secret value to the HMAC token created in Step 3
  • Check the “Send me everything” box

8. Create a new PR and see the magic.

9. The Prow dashboard (Deck) will be accessible at http://<MINIKUBE_IP>:<DECK_NODE_PORT>

  • MINIKUBE_IP: e.g., 192.168.99.100 (run “minikube ip”)
  • DECK_NODE_PORT: e.g., 32710 (run “kubectl get svc deck”)
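
Putting it together (the IP and port are examples from my minikube setup; yours will differ):

```sh
minikube ip            # e.g., 192.168.99.100
kubectl get svc deck   # note the NodePort, e.g., 32710

# Then open http://<MINIKUBE_IP>:<DECK_NODE_PORT> in a browser
```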

I will leave you with the official Kubernetes Prow dashboard as a reference: https://prow.k8s.io

What’s Next

The above is an effort to give you a taste of what Prow can do and how easy it is to set up, at any scale of infrastructure and for a project of any complexity.

---

P.S. - Content about Prow is scarce, which leaves it a bit unexplored in certain ways, but I found the #prow channel on the Kubernetes Slack very helpful. Hopefully, this post helps you explore the uncharted waters of Kubernetes-native CI/CD.


