K9 User Guide

This guide walks through the steps a K9 user takes to stand up clusters and deploy applications. Everything here is subject to change and review.

Steps taken by user:

  1. k9 create project

  2. Edit cluster-config.yml files

  3. (Optional) edit CloudFormation template files

  4. k9 create cluster

  5. k9 create cicd

  6. k9 link cicd

  7. (Optional) Incorporate Google Chat into the K9 process

  8. Edit app-config.yml files

  9. Edit helm config files

  10. k9 deploy service

  11. k9 deploy ui

Prerequisites

  • kubectl, helm, and aws-cli installed

  • An AWS environment exists and the AWS CLI is configured to access it.

  • Deployable service and UI apps are in git repos.

  • Secrets defined in AWS Secrets Manager:

    • Deploy tokens to access the application’s git repos. Separate secrets are needed for the ui and service repos.

    • (Optional) A secret holding configuration values specific to the service application.

    • (Optional) Secrets to be turned into Jenkins credentials.
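
For example, the deploy-token secrets can be created ahead of time with the AWS CLI. The secret names and token value below are placeholders; use whatever names your k9 configuration expects:

# Placeholder names and token values -- substitute your own.
aws secretsmanager create-secret \
  --name myapp-service-deploy-token \
  --description "Deploy token for the myapp service repo" \
  --secret-string "placeholder-token-value"

aws secretsmanager create-secret \
  --name myapp-ui-deploy-token \
  --description "Deploy token for the myapp ui repo" \
  --secret-string "placeholder-token-value"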

Create Project

k9 create project

Creates the k9 directory with all yml files, plus a subdirectory containing a cluster-config.yml for each cluster. Subdirectories for the ‘prd’ and ‘np’ clusters are created by default, with a prompt to create additional clusters by name. The user is also prompted to enter appNames, and a subdirectory is created for each app.

This command should be run in a directory that can be stored in a git repository. This repo exists for disaster recovery: it should contain everything needed to recreate an app’s deployment environment exactly, including:

  • k9 version

  • cfm templates and cluster-config

  • app-config files

  • app helm charts

Example repo: mock-cluster

k9/
  apps/
    defaults.yml
    appName/
      app-config.yml
      k9-helm-values.yml
      helm-values.yml
  prd/
    cluster-config.yml
  np/
    cluster-config.yml

Sample directory structure after running k9 create project

Edit cluster-configs

Users should go into each cluster subdirectory and edit the cluster-config.yml to fit their needs. Any field can be changed, and new worker groups and databases can be added. For more details, see: cluster-config.yml.
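
As a rough illustration of the kind of edit involved, the sketch below adds a worker group and a database. The key names are hypothetical; consult the cluster-config.yml reference for the actual schema:

# Key names below are hypothetical, for illustration only.
cat >> np/cluster-config.yml <<'EOF'
workerGroups:
  - name: batch-workers   # a second worker group
    instanceType: m5.large
    desiredSize: 2
databases:
  - name: appdb           # an additional database
    engine: postgres
EOF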

Create CloudFormation Templates

k9 create templates

This command fetches the CloudFormation templates from atomic-cloud. Users may edit the contents of these files, but they are intended to be usable “as-is”.

This step is optional, and only needed if users wish to edit the CloudFormation files before running k9 create cluster; otherwise, the files are created as part of running k9 create cluster. The overall flow is sketched below.
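
The template filename in this sketch is illustrative; the actual files copied from atomic-cloud may be named differently:

cd prd               # run from the cluster's subdirectory
k9 create templates  # copy the CloudFormation templates locally
vi eks-cluster.yml   # edit as needed (filename illustrative)
k9 create cluster    # builds the stacks using the edited templates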

Create Clusters

k9 create cluster

For each cluster the user wishes to create, this command must be run from within that cluster’s subdirectory. CloudFormation yml files are copied from atomic-cloud into the current directory, and k9 uses atomic-cloud to build all of the CloudFormation stacks. k9 then deploys the standard apps and adds standard dashboards to Kibana and Grafana by routing directly to the load balancer created by the ALB controller.

Execution steps:

  • create cfm stacks for a cluster

  • deploy standard apps: ALB Controller, EFK, Prometheus, Grafana

  • configure standard apps (dashboards)

  • create a cluster factsheet in the current directory

If any errors occur, this command can be run repeatedly; it will attempt all of these steps again, skipping any that have already completed successfully.

Kibana and Grafana login credentials will be placed in an AWS secret named <clusterName>-monitoring-logins-secret.
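
Once the cluster is up, those logins can be retrieved with the AWS CLI (shown here for a cluster named np):

aws secretsmanager get-secret-value \
  --secret-id np-monitoring-logins-secret \
  --query SecretString --output text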

Create cicd

k9 create cicd

Deploys Jenkins and SonarQube to the cicd namespace. Uses the cluster from the current directory’s cluster-config.yml.

Jenkins credentials will be placed in an AWS secret named <clusterName>-jenkins-password.

SonarQube login credentials are read from an AWS secret named default-sonarqube-credentials, which must already exist before running this step. After running k9 link cicd, the password will be updated and stored in <clusterName>-sonar-web-login-credentials.
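
Since default-sonarqube-credentials must exist beforehand, it can be created with the AWS CLI. The username/password JSON shape here is an assumption; match whatever format your k9 installation expects:

# Assumed JSON shape -- verify against your k9 setup.
aws secretsmanager create-secret \
  --name default-sonarqube-credentials \
  --secret-string '{"username":"admin","password":"change-me"}'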

Execution steps:

  • automatically edit defaults.yml in the k9/apps directory to set the correct Jenkins URL and cicd clusterName (used for creating Jenkins pipelines when deploying ui and service applications)

  • install Jenkins

  • create an AWS lambda to create the SonarQube database

  • install SonarQube

  • create the kubernetes secret ‘sonar-password’ with the db password for SonarQube to use

  • wait for the lambda to delete

Edit app-config

It is now time to start deploying your application. Change into the directory for your app under k9/apps. All of the following steps must be repeated for each app to be deployed.

Edit app-config.yml. For more details, see: app-config.yml.
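
As a rough sketch of what this file configures (key names are hypothetical; see the app-config.yml reference for the real schema):

# Hypothetical keys throughout -- shown as a heredoc for brevity;
# in practice you edit the file generated by k9 create project.
cat > app-config.yml <<'EOF'
appName: myapp                # name of the app being deployed
environments:                 # environments the service deploys to
  - name: dev
    cluster: np
  - name: prd
    cluster: prd
appSecret: myapp-app-secret   # optional AWS secret with app config values
EOF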

Edit helm config files

In your app directory there are two yml files related to your app’s helm chart: k9-helm-values.yml and helm-values.yml. Inside k9-helm-values.yml you must provide the helm repo URL, repo name, and chart name for your application. Inside helm-values.yml you should copy your helm chart’s values.yml file; its contents will be copied into every values file used for deployment. The contents of k9-helm-values.yml will be used to generate all of the information k9 needs to run the helm install command for each deployment instance.

k9-helm-values.yml is auto-filled by k9 create project and may not need to be edited if the chart is hosted on charts.simoncomputing.com and the chartName matches the appName provided. helm-values.yml can be left empty if no customization is needed.
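
For a chart hosted somewhere other than charts.simoncomputing.com, k9-helm-values.yml would be edited along these lines (the key names are hypothetical; keep whatever keys the generated file actually uses):

# Hypothetical key names -- for illustration only.
cat > k9-helm-values.yml <<'EOF'
repoUrl: https://charts.example.com   # your helm repo URL
repoName: example                     # repo name, as in: helm repo add example <url>
chartName: myapp                      # chart name within the repo
EOF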

Incorporating Google Chat into K9 process

This optional step should be done if you would like to integrate Google Chat with Jenkins so that whenever a Jenkins build passes, fails, or aborts, a message is sent to a predetermined Google Chat channel. For more details, see: Incorporating Google Chat into K9 process.

Note that this process only needs to be done once per application (covering both the service and the ui).

Deploy app

It is now time to deploy your backend and frontend applications. Both of these commands will request the necessary AWS certificates.

k9 deploy service

See Deploying a Service for more details.

Run this command from your app directory to deploy your service to every environment specified in app-config.yml. Each deployment instance will use the helm values file from the values/ directory.

Execution steps:

  • Request any certificates that do not already exist.

  • Create an AWS ECR repository for the app.

  • Create an AWS lambda to create the application’s databases.

  • Read the appSecret from AWS and create a kubernetes secret for each deployment namespace.

  • Create a helm values.yml file for each deployment inside the appName/values directory.

  • Run helm install for each deployment instance using the values files inside the appName/values directory.

  • Create the service multibranch pipeline on Jenkins.

  • Wait for the AWS lambda to delete.
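
After the command finishes, a single environment can be spot-checked with helm and kubectl (the namespace name here is illustrative):

helm list -n dev          # the release for this deployment instance should be listed
kubectl get pods -n dev   # service pods should reach the Running state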

k9 deploy ui

Run this command from your app directory to deploy your frontend app. Because CloudFront is not available in GovCloud, different steps must be taken to deploy the UI in a GovCloud region. k9 will detect this and automatically choose the GovCloud deployment when needed.

Non-GovCloud Execution steps:

  • Create a certificate in the us-east-1 region for each deployment.

  • Create an S3 Hosting Stack for each deployment. This includes:

    • S3 bucket

    • BucketPolicy granting CloudFront access

    • CloudFront Distribution

  • Create a bucket to store different build versions.

  • Create the UI multibranch pipeline on Jenkins.

Routing to the S3 buckets is done by CloudFront.

GovCloud Execution steps:

  • Create a VPC endpoint for each cluster being deployed to.

  • Create an S3-Gov-Hosting Stack for each deployment. This includes:

    • S3 Bucket

    • BucketPolicy granting the VPCEndpoint access.

  • Create a values yaml file for each nginx routing deployment.

  • Run helm install for each nginx routing deployment using the generated values file.

  • Create a bucket to store different build versions.

  • Create the UI multibranch pipeline on Jenkins.

Routing to the S3 buckets is done through the load balancer, via the nginx deployments. Records for the new URLs will need to be created in the hosted zone. See DNS Routing for more information.
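
A record can be created with the AWS CLI; the hosted zone ID, record name, and load balancer DNS name below are placeholders:

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "myapp.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "my-load-balancer-dns.example.amazonaws.com"}]
      }
    }]
  }'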

Other commands

k9 delete cluster -n clusterName

Used to delete a cluster entirely. Tears down the load balancer, secrets, databases, and CloudFormation stacks associated with the cluster. All applications must first be deleted from the cluster for all of its resources to be deleted successfully.

k9 delete monitoring

Removes the EFK, Grafana, and Prometheus installs from the current cluster. This is useful if the user wishes to use different helm charts for these apps.

k9 delete cicd

Deletes all resources created by k9 create cicd. Must be run before running k9 delete cluster on the cicd cluster, and from the cicd cluster’s directory containing the cluster-config.yml.

k9 delete service

Deletes all resources created by k9 deploy service. Must be run from the app directory you wish to delete. Accepts the -kc flag followed by a boolean indicating whether to keep certificates; if the flag is not provided, the user is prompted.

k9 delete ui
k9 delete ui -env envName

Deletes all resources created by k9 deploy ui. Must be run from the app directory you wish to delete. Accepts the -kc flag followed by a boolean indicating whether to keep certificates; if the flag is not provided, the user is prompted. A single environment may be provided with -env; otherwise the appName is requested to confirm deleting in all environments. Automatically detects GovCloud environments and deletes the correct resources.

k9 create eks-startstop

Creates lambda functions that automatically scale clusters down at night and back up in the morning. The defaults are 6am ET for startup and 8pm ET for shutdown; to change these times, edit the Event Schedules on the EventBridge Rules that target the lambdas. Clusters must be manually tagged in order to be affected by the lambdas: the value of the eks-autostop tag is the desired size the stop lambda sets the cluster’s nodegroups to (usually 0), and the value of the eks-autostart tag is the desired size the start lambda sets them to (usually 2 or 3). Both tags must be applied for a cluster to scale down and back up automatically.
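
For example, a cluster named np could be tagged with the AWS CLI (the account ID and region are placeholders):

aws eks tag-resource \
  --resource-arn arn:aws:eks:us-east-1:123456789012:cluster/np \
  --tags eks-autostop=0,eks-autostart=2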

k9 delete eks-startstop

Deletes all resources created by k9 create eks-startstop.

Final Results

After following these steps, users should have multiple clusters with monitoring and logging set up. They may log in to Kibana and Grafana with the login information stored in AWS Secrets Manager. These applications are configured with default dashboards by k9.

The cicd applications, Jenkins and SonarQube, will also be deployed to the chosen cluster. Login credentials for these apps will be in AWS Secrets Manager.

User-created service applications will be deployed to each environment specified in the application’s app-config.yml. Frontend UI applications will be deployed into S3 buckets. Multibranch pipelines will be fully configured on the Jenkins instance.