Promoting Builds¶
Promoting builds is the process of upgrading a build to staging or production. For UI builds, this is done by copying S3 content into different buckets. For backend services, this is done by updating tags on ECR images so they are deployed in different environments. k9 provides commands that create promotion automation, which must be manually run to promote a build.
k9 automatically creates promotion stacks as part of promote service and promote ui, so the following commands may not need to be run if the stacks already exist.
UI¶
From the app directory containing app-config.yml, run:
k9 create promote-ui
This will create a CloudFormation stack containing an AWS Automation document, a Lambda Function, and a Lambda Role. To promote a version, execute the document with Version and TargetEnv parameters (Systems Manager > Automation > Execute Automation > Owned by me > {{appName}}-ui-promotion > Execute). There is also a link to the document in the Outputs of the CloudFormation stack. Enter the build version (e.g. 0.1.0-2) and pick either sat or prd as the target. This will call the {{appName}}-ui-promotion lambda. k9 defines the builds bucket (build source) and the sat and prd buckets in the lambda's Environment Variables. If the bucket names were generated incorrectly or your bucket URLs change, they can be updated here.
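The same document can also be executed from the AWS CLI instead of the console. A minimal sketch, assuming the document kept the default {{appName}}-ui-promotion naming (my-app-ui-promotion below is a placeholder) and your credentials target the correct account:

```shell
# Start the UI promotion automation from the CLI.
# Document name and version values are placeholders; substitute your own.
aws ssm start-automation-execution \
  --document-name "my-app-ui-promotion" \
  --parameters 'Version=["0.1.0-2"],TargetEnv=["sat"]'
```

The command returns an AutomationExecutionId, which you can pass to `aws ssm get-automation-execution` to check progress.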
Cross-account permissions¶
If your production bucket is in a different AWS account than your development and SAT buckets, then you will need to create another promote-ui stack in your production account. Then you should grant the LambdaRole permission to copy contents from your builds bucket in the non-production account.
Manually add the following bucket policy to your builds bucket (S3 > Buckets > your builds bucket > Permissions > Bucket Policy). Replace PRD_LAMBDA_ROLE_ARN with the ARN of the role created by the promote-ui stack in the production account. Replace BUILDS_BUCKET_ARN with the ARN of the bucket you are applying this policy to. Note the /* after the ARN in the GetObject statement.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "PRD_LAMBDA_ROLE_ARN"
            },
            "Action": "s3:GetObject",
            "Resource": "BUILDS_BUCKET_ARN/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "PRD_LAMBDA_ROLE_ARN"
            },
            "Action": "s3:ListBucket",
            "Resource": "BUILDS_BUCKET_ARN"
        }
    ]
}
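If you prefer the CLI to the console, the same policy can be applied with put-bucket-policy. A sketch, assuming the policy above (with the ARNs substituted) has been saved to policy.json and my-app-builds is a placeholder for your builds bucket name:

```shell
# Apply the cross-account bucket policy to the builds bucket.
# Bucket name and file path are placeholders.
aws s3api put-bucket-policy \
  --bucket my-app-builds \
  --policy file://policy.json
```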
The lambda in the non-production account will not have access to the production account bucket. Similarly, the production account lambda will not have write access to the non-production deployment buckets. This way, promotions must be run from within the account being promoted to.
Service¶
From the app directory containing app-config.yml, run:
k9 create promote-service
This will create a CloudFormation stack containing an AWS Automation document, a Lambda Function, and a Lambda Role. To promote a version, execute the document with Version and TargetEnv parameters (Systems Manager > Automation > Execute Automation > Owned by me > {{appName}}-service-promotion > Execute). There is also a link to the document in the Outputs of the CloudFormation stack. Enter the build version from the ECR (e.g. 0.0.1-2) and then pick the tag to apply to that version. The lambda will apply the selected tag to the selected version.
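Applying a tag to an existing image version does not require Docker: ECR lets you fetch an image's manifest and push it back under a new tag. A sketch of this standard re-tag technique, which is presumably equivalent to what the promotion lambda does (repository name, version, and tag are placeholders):

```shell
# Fetch the manifest of the source version.
MANIFEST=$(aws ecr batch-get-image \
  --repository-name app-name \
  --image-ids imageTag=0.0.1-2 \
  --query 'images[0].imageManifest' \
  --output text)

# Push the same manifest back under the new tag; no image data is transferred.
aws ecr put-image \
  --repository-name app-name \
  --image-tag test \
  --image-manifest "$MANIFEST"
```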
Cross-account permissions¶
This lambda only works for applying tags within one ECR. Cross-account promotion requires Docker to be running in order to pull and push images, which is not currently possible from an AWS Lambda. Images must be copied between accounts through some other method: either an EC2 instance in your production account, or the AWS CLI with credentials used locally.
Automatically Moving Images Across AWS Accounts¶
For security purposes, you do not want any IAM role outside of your production account to be able to reach into the production ECR. You also do not want to generate long-term AWS API secret access keys. For these reasons, it is best to have your production cluster handle copying images from the nonprod ECR into the production account ECR. k9 provides a command to set up a k8s cron job to do exactly this. Perform the following prerequisite steps:
- Find the IAM role for your prd cluster nodes
(prd AWS account) IAM > Roles > {{clusterName}}
- Attach the AmazonEC2ContainerRegistryFullAccess policy
Add permissions > Attach policies > AmazonEC2ContainerRegistryFullAccess
This will grant the worker nodes the required permissions to push and pull images.
This policy will need to be manually removed when deleting the cluster. The worker nodes stack will fail to delete if this policy is still attached.
- Attach a policy to your nonprod ECR
(nonprod AWS account) > ECR > appName > Permissions (left sidebar) > Edit policy JSON
Paste the following policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PrdAccountAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "ENTITY_ARN"
            },
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:BatchGetImage",
                "ecr:GetDownloadUrlForLayer",
                "ecr:DescribeImages"
            ]
        }
    ]
}
Replace ENTITY_ARN with the ARN of the prd cluster nodes role.
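The repository policy can also be applied from the CLI rather than the console editor. A sketch, assuming the policy above (with ENTITY_ARN substituted) has been saved to ecr-policy.json and app-name is a placeholder repository name:

```shell
# Apply the repository policy in the nonprod account's ECR.
# Repository name and file path are placeholders.
aws ecr set-repository-policy \
  --repository-name app-name \
  --policy-text file://ecr-policy.json
```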
Perform the following steps from each app directory for which you have deployed a service.
If an ecr-sync-values.yml file exists in the app directory, edit this file. Make sure to enter the clusterName you wish to apply the cron job to, and the AWS account number for the nonprod ECR.
If an ecr-sync-values.yml file does not exist in the app directory, run:
k9 create prd-configs
This will create prd-app-config.yml and ecr-sync-values.yml files. At this stage it is not necessary to edit the prd-app-config.yml file, but the ecr-sync-values.yml file must be edited with the information listed above.
Once the ecr-sync-values.yml file has been edited, run the following command to deploy the cron job:
k9 create ecr-sync
Now any image tagged with test in the non-production account will automatically be copied into the production ECR.
You may manually delete the cron job to stop this process, or run k9 delete ecr-sync from the app directory.
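If you need to inspect or stop the sync by hand, standard kubectl commands work against the cluster running the cron job. The cron job name below is a placeholder; check the actual name k9 created first:

```shell
# List cron jobs to find the one k9 deployed.
kubectl get cronjobs

# Inspect recent runs of the sync.
kubectl get jobs

# Delete the cron job to stop syncing (name is a placeholder).
kubectl delete cronjob ecr-sync
```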
Manually Moving Images Across AWS Accounts (Not Recommended)¶
An image can be copied from one ECR to another using the following bash script. Docker must be running before starting the script. The AWS CLI should be using credentials for an entity (user/role) with permissions for both ECRs.
#!/bin/sh
set -e
####################################
# edit these
VERSION="0.0.1-1"
REPO_NAME="app-name"
SOURCE_REGION="us-east-1"
DESTINATION_REGION="us-east-1"
SOURCE_ACCOUNT="111111111111"
DESTINATION_ACCOUNT="222222222222"
####################################
# don't edit these
SOURCE_BASE_PATH="$SOURCE_ACCOUNT.dkr.ecr.$SOURCE_REGION.amazonaws.com"
DESTINATION_BASE_PATH="$DESTINATION_ACCOUNT.dkr.ecr.$DESTINATION_REGION.amazonaws.com"
SOURCE_REPO="$SOURCE_BASE_PATH/$REPO_NAME"
DESTINATION_REPO="$DESTINATION_BASE_PATH/$REPO_NAME"
####################################
# source login
aws --region $SOURCE_REGION ecr get-login-password | docker login --username AWS --password-stdin $SOURCE_BASE_PATH
# pull and re-tag
docker pull $SOURCE_REPO:$VERSION
docker tag $SOURCE_REPO:$VERSION $DESTINATION_REPO:$VERSION
# destination login
aws --region $DESTINATION_REGION ecr get-login-password | docker login --username AWS --password-stdin $DESTINATION_BASE_PATH
# push
docker push $DESTINATION_REPO:$VERSION
# delete local image
docker rmi $SOURCE_REPO:$VERSION
This is not recommended because it relies on long-term AWS API secret access keys when running on a local machine.