Safely Expose Credentials

aws devops security

If you’re juggling STS credentials while wrangling more AWS accounts than you can count, you’ve likely heard of aws-vault. Aside from the convenience factor, it also improves security by keeping plain-text credentials off disk.

Similarly, it’s obvious you don’t want to commit credentials. Depending on how far you take that, even committing encrypted credentials (or storing them anywhere not intended for managing secrets) should be discouraged.

One reason is that it builds muscle memory and invites accidental commits of unencrypted secrets (use something like gitleaks to prevent that). The bigger reason in my mind is that consistent handling of credentials makes it easier to develop tools and process around audit, rotation, etc. Depending on how you handle encryption, having opaque “blobs” in your VCS may also turn into a philosophical argument.
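As a sketch of the gitleaks safety net mentioned above (assuming gitleaks v8, which provides a `protect --staged` mode for scanning staged changes), a minimal git pre-commit hook might look like:

```shell
#!/bin/sh
# .git/hooks/pre-commit -- hedged sketch, assumes gitleaks v8 is on PATH.
# Scans only the staged changes and blocks the commit on any finding.
gitleaks protect --staged --verbose
```

Mark the hook executable (`chmod +x .git/hooks/pre-commit`) and the commit will fail whenever a staged diff looks like it contains a secret.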

That said, we all use a lot of credentials, and often need them for local scripts, not only in pipelines or cloud services where injecting credentials securely is easy. We can combine aws-vault, our secure secret storage backend of choice (Parameter Store, Secrets Manager, Vault) and direnv to get a convenient local workflow that doesn’t add risk.

NOTE: This example uses Parameter Store; the same approach works for Secrets Manager, Vault or your backend of choice with a little refactoring.
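For instance, a Secrets Manager version of the lookup shown later might look like this sketch (the secret name `/yourproject/MY_API_KEY` is just a placeholder):

```shell
# .envrc -- hedged sketch using Secrets Manager instead of Parameter Store.
# The --secret-id value is illustrative; use your own naming scheme.
export MY_API_KEY=$(aws secretsmanager get-secret-value \
  --secret-id "/yourproject/MY_API_KEY" \
  | jq -r '.SecretString' 2>/dev/null)
```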

The first thing to understand is that .envrc can do more than export variables; it can run arbitrary shell commands. With that in mind, let’s build up the workflow.

First we need some glue in our .envrc:

# .envrc boilerplate
export AWS_REGION="us-east-2"
export AWS_PROFILE="dev"
export VAULT="aws-vault exec ${AWS_PROFILE} --"
for i in $(${VAULT} env | grep ^AWS); do
  key=$(echo "$i" | cut -d= -f1)
  [ "${key}" = "AWS_VAULT" ] && continue
  export "$i"
done
AWS_PROFILE will need to be adjusted based on the AWS account you’re using to store your secrets and your ~/.aws/config. We simply do the usual aws-vault exec for the configured profile, and export all of the AWS_ variables in our local shell. AWS_VAULT is excluded to avoid warnings about nesting.

Now you can define project-specific environment using the exported AWS credentials. For example:

# .envrc, below the boilerplate
export MY_API_KEY=$(aws ssm get-parameter --name "/yourproject/MY_API_KEY" --with-decryption \
  | jq -r '.Parameter.Value' 2>/dev/null)

After configuring all of this (or when making changes) you need to run direnv allow (this ensures commands checked out in random projects can’t be run without approval):

❯ vi .envrc # add above and save...
direnv: error /home/mrh/test/.envrc is blocked. Run `direnv allow` to approve its content
❯ direnv allow
direnv: loading ~/test/.envrc
❯ echo $MY_API_KEY

This is a little work to set up, but no more than a workflow to encrypt secrets you intend to commit. Since .envrc contains nothing sensitive, it can be safely committed and shared by all team members.


The boilerplate above is mostly to simplify subsequent AWS commands requiring STS credentials. You could skip the loop entirely and just wrap each command with aws-vault:

# .envrc
export AWS_REGION="us-east-2"
export AWS_PROFILE="dev"
export VAULT="aws-vault exec ${AWS_PROFILE} --"
export FOO_KEY=$(${VAULT} aws ssm get-parameter --name "/yourproject/FOO_KEY" --with-decryption | jq -r '.Parameter.Value')
export BAR_KEY=$(${VAULT} aws ssm get-parameter --name "/yourproject/BAR_KEY" --with-decryption | jq -r '.Parameter.Value')
export BAZ_KEY=$(${VAULT} aws ssm get-parameter --name "/yourproject/BAZ_KEY" --with-decryption | jq -r '.Parameter.Value')
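If the repetition bothers you, one option (a sketch; `get_param` is a helper name I’m introducing here, not part of any tool) is a small function in .envrc:

```shell
# .envrc -- hedged sketch: factor the repeated lookup into a function.
export AWS_REGION="us-east-2"
export AWS_PROFILE="dev"
export VAULT="aws-vault exec ${AWS_PROFILE} --"

# get_param NAME -- fetch and decrypt one SSM parameter, print its value.
get_param() {
  ${VAULT} aws ssm get-parameter --name "$1" --with-decryption \
    | jq -r '.Parameter.Value'
}

export FOO_KEY=$(get_param "/yourproject/FOO_KEY")
export BAR_KEY=$(get_param "/yourproject/BAR_KEY")
export BAZ_KEY=$(get_param "/yourproject/BAZ_KEY")
```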


If you don’t have cached credentials (e.g. they time out overnight), the first aws-vault call will prompt for MFA, which may break commands or look a little weird. It mostly works as expected if you are a fast typist. ;-)

❯ aws-vault clear dev
Cleared 1 sessions.
❯ cd ~/test
direnv: loading ~/test/.envrc
Enter MFA code for arn:aws:iam::012345678901:mfa/ direnv: ([/usr/bin/direnv export zsh]) is taking a while to execute. Use CTRL-C to give up.
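One way to avoid that interleaved prompt (my own workaround, not something the workflow requires) is to warm the session cache with a trivial command before entering the directory:

```shell
# Run anything through aws-vault first so the MFA prompt happens
# on its own; direnv then reuses the cached STS session.
aws-vault exec dev -- true
cd ~/test   # .envrc now loads without prompting
```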