Create an AWS IAM user and group for a GitLab CI pipeline

As stated in my previous post, I launched barz.lol, a fun API for UI prototyping. Part of the challenge was to deploy the front-end landing page to S3 and refresh the CloudFront cache after each deployment. I could easily have done it manually, but we like automation here. In addition, the process was a bit more involved than I had initially anticipated, so here’s a walkthrough of how to create a CI/CD pipeline on GitLab for AWS.

The steps to achieve our goal are going to be the following:

  • Create a group for our project
  • Create a pipeline user
  • Add the user to the group
  • Give the user programmatic access with access keys
  • Create the necessary policies
  • Attach said policies to the user
  • Write the .gitlab-ci.yml file

Prerequisites

This walkthrough assumes you have the latest version of aws-cli installed on your system along with a configured profile.
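
If you want to confirm both, the two commands below should print a version string and the identity behind your configured profile (my-profile is a placeholder; omit --profile entirely to use the default profile):

user@machine: aws --version
user@machine: aws sts get-caller-identity --profile my-profile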

Create a group for our project

It’s considered good practice to have users assigned to groups, so we’ll create one to begin with.

user@machine: aws iam create-group --group-name my-group

Create a user

Now that we have a group, we need to create a user. Again, it’s best to have a separate pipeline user who is not an admin and will only be granted the permissions they need to perform the required actions.

user@machine: aws iam create-user --user-name pipeline-user

Assign the user to the group

With our new user freshly created, we can now proceed to add them to the group.

user@machine: aws iam add-user-to-group --user-name pipeline-user --group-name my-group

We can verify that the operation completed successfully by running the following command

user@machine: aws iam get-group --group-name my-group
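
The output should look roughly like this (IDs and timestamps trimmed); the important bit is that pipeline-user now shows up in the Users array:

{
  "Users": [
    {
      "Path": "/",
      "UserName": "pipeline-user",
      "UserId": "AIDA...",
      "Arn": "arn:aws:iam::ACCOUNT_ID:user/pipeline-user",
      "CreateDate": "..."
    }
  ],
  "Group": {
    "Path": "/",
    "GroupName": "my-group",
    "GroupId": "AGPA...",
    "Arn": "arn:aws:iam::ACCOUNT_ID:group/my-group",
    "CreateDate": "..."
  }
}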

Give the user programmatic access

We need to do this because the pipeline will make use of the AWS CLI inside GitLab’s runners. Consequently, we need to have credentials configured for that user. This is done by running:

user@machine: aws iam create-access-key --user-name pipeline-user

This will output the credentials we’ll need for authentication. Be sure to save them somewhere safe where you can access them later on.
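
The output is a JSON document along these lines (the values shown here are placeholders); the SecretAccessKey in particular cannot be retrieved again once this command returns:

{
  "AccessKey": {
    "UserName": "pipeline-user",
    "AccessKeyId": "AKIAXXXXXXXXXXXXXXXX",
    "Status": "Active",
    "SecretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "CreateDate": "..."
  }
}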

Create the IAM policy for the pipeline user

Now this is where things get interesting. There are hundreds of managed policies available on AWS. However, we want to be as strict as possible in terms of how much privilege we grant our users. For example, our user only needs to upload objects to a previously created S3 bucket; that does not mean we should give them full access to S3. The same applies to CloudFront.

Policies take the form of a JSON-formatted document and can be generated using a tool like the policy generator. In the end, this is what our policy will look like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CustomPolicies",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "cloudfront:CreateInvalidation"
      ],
      "Resource": [
        "arn:aws:cloudfront::ACCOUNT_ID:distribution/DISTRIBUTION_ID",
        "arn:aws:s3:::arn:aws:s3:::BUCKET_NAME/*",
        "arn:aws:s3:::BUCKET_NAME"
      ]
    }
  ]
}

In this example, we only grant the permissions our user needs to deploy the website. We allow the s3:PutObject, s3:GetObject, and s3:ListBucket actions (required when using aws s3 sync) and the cloudfront:CreateInvalidation action. These actions can only be performed against the resources listed in the Resource array; anything else would result in an “access denied” error.
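
If you need the values for the ACCOUNT_ID and DISTRIBUTION_ID placeholders, the two commands below should retrieve them; the second lists every distribution in the account along with its domain name, so pick the one that serves your site:

user@machine: aws sts get-caller-identity --query Account --output text
user@machine: aws cloudfront list-distributions --query "DistributionList.Items[].{Id: Id, Domain: DomainName}" --output table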

Now that we have our policy document we have to upload it to AWS. To do so, just run:

user@machine: aws iam create-policy --policy-name my-policy --policy-document file://path-to-policy.json

Attach the policy to the pipeline user

Right now, the policy is uploaded to AWS but it’s not bound to any user. We have to explicitly connect a policy and a user. This can be done with the command below:

user@machine: aws iam attach-user-policy --user-name pipeline-user --policy-arn my-policy-arn

If you don’t remember the ARN for the policy, you can get it from the console or by issuing this command:

user@machine: aws iam list-policies --query "Policies[?PolicyName=='my-policy'].Arn" --output text

Alternatively, you can append --query Policy.Arn --output text to the create-policy command, which will print only the ARN.
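
For example (note that create-policy fails with an EntityAlreadyExists error if my-policy already exists, so this variant is mostly useful the first time around):

user@machine: aws iam create-policy --policy-name my-policy --policy-document file://path-to-policy.json --query Policy.Arn --output text

As a final sanity check, you can also confirm that the policy is now attached to our pipeline user:

user@machine: aws iam list-attached-user-policies --user-name pipeline-user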

That’s it for the IAM part.

Creating a CI/CD pipeline

This example will make use of GitLab CI. It’s free, reliable, and I use it for all my projects. Pipelines on GitLab are defined in a file called .gitlab-ci.yml (notice the leading dot and the extension; they’re both important). The documentation goes into great detail about the available options and syntax, and I highly recommend giving it a read. In our case we’re only interested in deploying our app, so we won’t be covering other stages as they’re not relevant to the topic at hand.

What we want to achieve is the following:

  • Build our site into static assets
  • Use the AWS CLI to upload our static assets to our bucket
  • Use the AWS CLI to invalidate the CloudFront cache so that our changes can be seen instantly

Here is the deploy job from our .gitlab-ci.yml:

deploy_site:
  stage: deploy
  image:
    name: amazon/aws-cli
    entrypoint:
      - ""
  variables:
    AWS_ACCESS_KEY_ID: ${ACCESS_KEY}
    AWS_SECRET_ACCESS_KEY: ${SECRET_ACCESS_KEY}
    AWS_DEFAULT_REGION: ${DEFAULT_REGION}
  dependencies:
    - build_site
  script:
    - aws sts get-caller-identity
    - aws s3 sync dist s3://${S3_BUCKET}
    - aws cloudfront create-invalidation --distribution-id ${DISTRIBUTION_ID} --paths "/*"
  only:
    - master

This job is called deploy_site and is part of the deploy stage (the name is arbitrary; you can call it whatever you want).

We use the amazon/aws-cli Docker image. This saves us from having to use a Python image and install the CLI ourselves; everything is included out of the box, so it’s essentially plug and play.

We set the entrypoint for the image to an empty string. By default, the image’s entrypoint is aws, which would result in errors in our case because GitLab needs to run our script commands through a shell.

Next, we define the environment variables that the AWS CLI expects, namely AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION. Their values come from CI/CD variables created in the pipeline settings under “Variables”, which is where the access key pair we generated earlier goes.

Because we use a different image than the one used in the build stage, we specify a dependencies attribute. This tells GitLab CI to download the artifacts saved by that job into our current job so we can access them. In our case, that’s the dist folder generated by Astro.
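
For reference, the build_site job that produces those artifacts could look roughly like this. It’s a minimal sketch assuming a Node-based Astro project built with npm; the Node image tag and the build command are assumptions, so adjust them to match your setup:

build_site:
  stage: build
  image: node:18 # assumed Node version, use whatever your project targets
  script:
    - npm ci
    - npm run build # assumed to emit the static site into dist/
  artifacts:
    paths:
      - dist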

Finally, we run the script commands: aws sts get-caller-identity is just a sanity check that the credentials work, then we sync the dist folder to our bucket and invalidate the CloudFront cache.

This job will run only when changes are pushed to the master branch.