The intended audience of this document is developers wishing to contribute to the Kubecf project.
It provides a basic overview of various aspects of the project below, and uses these overviews as launching points to other documents which go deeper into the details of each aspect.
Table of Contents (Aspects)
- Pull Requests
- Source Organization
- Updating Subcharts
- Docker Images
- [BOSH Development Workflow]
- Rotating secrets
Kubecf is built on top of a number of technologies, namely Kubernetes, Helm (charts), and the cf-operator.
For each of these we have multiple installation choices, and the various interactions between those choices influence the details of the commands to use.
Instead of trying to document all the possibilities and all their interactions at once, supporting documents will describe specific combinations of choices in detail, from the bottom up.
|Environment|Setup|
|---|---|
|Local Minikube|Minikube + Operator + Kubecf|
|General Kube|Any Kube + Operator/Helm + Kubecf/Helm|
The general workflow for pull requests contributing bug fixes, features, etc. is:

1. Branch or fork the `cloudfoundry-incubator/kubecf` repository, depending on permissions.
2. Implement the bug fix, feature, etc. on that branch/fork.
3. Submit a pull request based on the branch/fork through the GitHub web interface, against the `master` branch.
4. Developers will review the content of the pull request, asking questions, requesting changes, and generally discussing the submission with the submitter and among themselves.
5. After all issues with the request are resolved and CI has passed, a developer will merge it into `master`.

PRs from branches of this repository are automatically tested in the CI. For forks, ask a maintainer of this repository to trigger a build; automated triggers have been disabled for security reasons.

Note that it may be necessary to rebase the branch/fork to resolve conflicts caused by other PRs getting merged while the PR is under discussion.
Such a rebase will be a change request from the developers to the contributor, on the assumption that the contributor is best suited to resolving the conflicts.
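The branch-based part of the workflow can be sketched in a throwaway repository as follows (branch name, file, and commit messages are all illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the branch-based contribution flow, demonstrated in a
# throwaway repository. All names here are illustrative.
set -o errexit

workdir="$(mktemp -d)"
cd "${workdir}"
git init --quiet .
git config user.email "dev@example.com"
git config user.name "Dev"
printf 'base\n' > README.md
git add README.md
git commit --quiet -m "initial commit"

# Step 1: create a topic branch for the change.
git checkout --quiet -b fix/doc-typo

# Step 2: implement the fix on that branch and commit it.
printf 'fixed\n' >> README.md
git commit --quiet -am "Fix documentation typo"

# Step 3 would be pushing the branch and opening a pull request
# against master through the GitHub web interface.
git log --oneline -1
```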
The important directories of the kubecf sources and their contents are shown in the table below. Each directory entry links to the associated documentation, if we have any.

|Directory|Content|
|---|---|
|top|Documentation entry point, license, main workspace definitions.|
|top/…/README.md|Directory-specific local documentation.|
|top/bosh/releases|Support for runtime patches of a kubecf deployment.|
|top/dev/cf_deployment/bump|Tools to support updating the cf deployment manifest used by kubecf.|
|top/dev/cf_cli|Deploys the cf CLI into a helper pod from which to inspect the deployed Kubecf.|
|top/dev/kube|Tools to inspect kube clusters and kubecf deployments.|
|top/dev/kubecf|Kubecf chart configuration.|
|top/deploy/helm/kubecf|Templates and assets wrapping a CF deployment manifest into a helm chart.|
|top/testing|Scripts with specific testing.|
|top/scripts|Developer scripts used by make to start a k8s cluster (for example on kind), lint, build, run & test kubecf.|
|top/scripts/tools|Developer scripts pinning the development dependencies.|
The kubecf helm chart includes a number of subcharts. They are declared in
`requirements.yaml`. For the convenience of development they are included in
unpacked form directly in this repo, so version changes can be inspected with a
regular `git diff`, and the subcharts can be searched with `grep`.
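For orientation, a subchart declaration in `requirements.yaml` looks roughly like the following (chart name, version, and repository URL are illustrative, not taken from the kubecf chart):

```yaml
dependencies:
- name: some-subchart
  version: 1.2.3
  repository: "https://charts.example.com/"
```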
The procedure to update the version of a subchart is:

```sh
vi deploy/helm/kubecf/requirements.yaml
./dev/helm/update_subcharts.sh
git commit
```
The docker images used by kubecf to run jobs in containers use a moderately complex naming scheme.
This scheme is explained in a separate document: The Naming Of Docker Images in kubecf.
Currently, three linters are available: shellcheck, yamllint, and helm linting.
Invoke them through their make targets: the first runs shellcheck on all
`.sh` files found in the entire checkout, the second runs yamllint on all
`.yml` files, and both report any issues found. The last runs
`helm lint` (without `--strict`) on the generated helm chart.
See the `make lint` target for the authoritative list of linters being called.
The main goal of the CF operator is to take a BOSH deployment manifest, deploy it, and have it run as-is.
Naturally, in practice, this goal is not quite reached yet, requiring patching of the deployment manifest in question, and/or the involved releases, at various points of the deployment process. The reason behind a patch is generally fixing a problem, whether it be from the translation into the kube environment, an issue with an underlying component, or something else.
Then there are features, giving the user of the helm chart wrapped around the deployment manifest the ability to easily toggle various preset configurations, for example the use of Eirini instead of Diego as the application scheduler.
Helm templating is used to translate the properties in the chart's `values.yaml` into the actual actions to take, by including or excluding chart elements, often the BOSH ops files containing the structured patches that modify the deployment itself (changing properties, adding/removing releases, (de)activating jobs, etc.).
The helm templating is applied when the kubecf chart is deployed.
The ops files are then applied by the operator, transforming the base manifest from the chart into the final manifest to deploy.
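For reference, an ops file is a list of structured patch operations against the manifest. A minimal illustrative example (the paths and values below are made up, not taken from the kubecf chart):

```yaml
# Illustrative ops file: scale one instance group and drop an unused release.
- type: replace
  path: /instance_groups/name=some-instance-group/instances
  value: 2
- type: remove
  path: /releases/name=some-unused-release
```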
Kubecf provides two mechanisms for customization during development (and possibly for use by operators):

`.Values.operations.custom` of the chart is a list of names of kube ConfigMaps containing the texts of the ops files to apply beyond the ops files from the chart itself.
Note that we are talking here about a yaml structure whose `data.ops` property is a text block holding the yaml structure of an ops file.
There is no tooling to help the writer with the ensuing quoting hell.
Note further that the resulting ConfigMaps have to be applied, i.e. uploaded into the kube cluster, before deploying the kubecf helm chart with its modified `values.yaml`.
`kubectl apply` the object below

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap_name
data:
  ops: |-
    some_random_ops
```
and then use

```yaml
operations:
  custom:
  - configmap_name
```

in the `values.yaml` (or an equivalent `--set` option) as part of a kubecf deployment to include that ops file in the deployment.
The [BOSH Development Workflow] is an example of its use.
[BOSH Development Workflow]: bosh-release-development.md
The second mechanism allows the specification of any custom BOSH property for any instance group and job therein. Writing

```yaml
properties:
  instance-group-name:
    job-name:
      some-property: some-value
```

in the `values.yaml` for the kubecf chart causes the chart to generate and use an ops file which applies the assignment of `some-property` to the specified instance group and job during deployment.
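The generated ops file will roughly resemble the following sketch (the exact structure is determined by the chart; this only illustrates the idea):

```yaml
- type: replace
  path: /instance_groups/name=instance-group-name/jobs/name=job-name/properties/some-property?
  value: some-value
```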
An example of its use in Kubecf is limiting the set of test suites executed by the CF acceptance tests.
Both forms of customization assume a great deal of familiarity on the part of the developer and/or operator with the BOSH releases, instance groups and jobs underlying the CF deployment manifest, i.e. which properties exist, what changes to them mean and how they affect the system.
In SCF v2, the predecessor to kubecf, the patch scripts enabled developers and maintainers to apply general patches to the sources of a job (i.e. configuration templates, script sources, etc.) before that job was rendered and executed. At its core, the feature allows the user to execute custom scripts at runtime of the job container, for a specific instance_group.
Pre-render scripts are the equivalent feature of the CF operator.
Kubecf makes use of this feature to fix a number of issues in the
deployment. The relevant patch scripts are found under the directory
When following the directory structure explained by the README, the bazel machinery for generating the kubecf helm chart will automatically convert these scripts into the proper ops files for use by the CF operator.
**Attention**: All patch scripts must be idempotent. In other words, it must be possible to apply them multiple times without error and without changing the result.
The existing patch scripts do this by checking if the patch is already applied before attempting to apply it for real.
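The check-before-apply pattern can be sketched as follows (the target file and marker line are illustrative, not taken from an actual kubecf patch script):

```shell
#!/usr/bin/env bash
# Sketch of an idempotent patch script. File and marker names are illustrative.
set -o errexit

target="$(mktemp)"            # stands in for a job's rendered config file
printf 'original content\n' > "${target}"

patch_line='extra-setting: enabled'

apply_patch() {
  # Check whether the patch is already applied before applying it for real.
  if ! grep -q -F "${patch_line}" "${target}"; then
    printf '%s\n' "${patch_line}" >> "${target}"
  fi
}

# Applying the patch twice must give the same result as applying it once.
apply_patch
apply_patch
grep -c -F "${patch_line}" "${target}"
```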
Rotating secrets is in general the process of updating one or more secrets to new values and restarting all affected pods so that they will use these new values.
Most of the process is automatic. How to trigger it is explained in Secret Rotation.
Beyond this, the keys used to encrypt the Cloud Controller Database (CCDB) can also be rotated. However, they do not exist as general secrets of the KubeCF deployment, which means that the general process explained above does not apply to them.
Their custom process is explained in CCDB encryption key rotation.