# Vault Policy Controller
Vault Policy Controller manages HashiCorp Vault policies and roles through Infrastructure as Code. The controller watches ConfigMaps for Vault configurations; once detected, these configs are deployed to (or removed from) a Vault server via CRUD operations.

The concepts this controller is based on come from Bank-Vaults External Configuration.
## Current Functionality

The current implementation of the Vault Policy Controller can only add, remove, or update policies and auth roles. It cannot enable auth methods or manage other HashiCorp Vault features.
## Environment Variables

| Name | Description | Default Value |
| --- | --- | --- |
| `HEALTHCHECK_PORT` | Port used for healthcheck probes | `8080` |
| `K8S_LISTWATCH_RESYNC_ENABLED` | Enable the Kubernetes Informer ListWatcher resync | `true` |
| `K8S_LISTWATCH_RESYNC_INTERVAL` | Interval at which Vault policies are periodically synchronized (to be effective, it must be less than or equal to `K8S_LISTWATCH_TIMEOUT`) | `5m` |
| `K8S_LISTWATCH_TIMEOUT` | Kubernetes Informer ListWatcher timeout; Kubernetes recommends 5-10 minutes (resync occurs when the watch is re-established) | `10m` |
| `K8S_LOG_VERBOSITY` | Kubernetes log verbosity (1-10) | `2` |
| `LOG_FORMAT` | Log format (`text` or `json`) | `json` |
| `LOG_LEVEL` | Log level | `info` |
| `LOG_KV_*` | Additional environment variables used as key/values that are added to log messages (replace `*` with the desired key name) | |
| `METRICS_ENABLED` | Enable Prometheus metrics | `true` |
| `METRICS_PORT` | Port used for the Prometheus server | `9090` |
| `VAULT_API_BASE_BACKOFF_TIME` | Base duration used for exponential backoff of Vault API retries | `1s` |
| `VAULT_API_MAX_RETRIES` | Number of retries for Vault API calls | `3` |
| `VAULT_API_SLEEP_TIME` | Duration to sleep after each API call | `100ms` |
| `VAULT_ADDR` | Vault URL | `http://vault.vault-system.svc.cluster.local:8200` |
| `VAULT_HIDDEN_LOG_KEYS_REGEX` | Regex of log keys to obfuscate | `(?i)(secret\|password)` |
| `VAULT_HIDDEN_LOG_VALUE` | Obfuscated log value | `[OBFUSCATED]` |
| `VAULT_LOGIN_AUTH_METHOD` | Method to use when authenticating to Vault | `kubernetes` |
| `VAULT_LOGIN_AUTH_ROLE` | Role to use when authenticating to Vault | `vault-policy-controller` |
| `WATCH_LABEL_KEY` | Label selector key for watched ConfigMaps | `vault-policy-controller` |
| `WATCH_LABEL_VALUE` | Label selector value for watched ConfigMaps | `enabled` |
| `WATCH_NAMESPACES` | Comma-delimited list of namespaces to watch (when undefined, all namespaces are watched) | |
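These variables are set on the controller's container. The fragment below is an illustrative sketch of a Deployment `env` section; `LOG_KV_cluster` is a hypothetical instance of the `LOG_KV_*` pattern that would add `cluster=prod-us-east1` to every log message:

```yaml
# Illustrative container env fragment; values are examples only.
env:
  - name: LOG_LEVEL
    value: debug
  - name: WATCH_NAMESPACES
    value: vault,team-apps
  # Hypothetical LOG_KV_* key: adds cluster=prod-us-east1 to log messages
  - name: LOG_KV_cluster
    value: prod-us-east1
```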
## Config Map Examples

The following config map will deploy a policy and GCP auth role:

- Policy `foo-bar` granting read access to `kv/data/foo/bar/*`
- Auth role `my-gcp-app` associating the `foo-bar` policy with the `foo-bar@my-gcp-project.iam.gserviceaccount.com` GCP Service Account
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    vault-policy-controller: enabled
  name: my-gcp-app
data:
  vault-config.yml: |
    policies:
      - name: foo-bar
        rules: |
          path "kv/data/foo/bar/*" { capabilities = ["read"] }
    auth:
      - type: gcp
        roles:
          - name: my-gcp-app
            type: iam
            policies: foo-bar
            max_ttl: 30m
            bound_service_accounts: foo-bar@my-gcp-project.iam.gserviceaccount.com
```
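Once the controller has reconciled this ConfigMap, the results can be spot-checked with the standard Vault CLI (this assumes `VAULT_ADDR` and a token with read access to these paths are already set):

```shell
# Confirm the policy was written
vault policy read foo-bar

# Confirm the GCP auth role was created
vault read auth/gcp/role/my-gcp-app
```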
The following config map will deploy an OIDC config, policy, and auth role:

- Policy `developer` granting read access to `kv/data/*`
- Auth role `oidc-dev-role` associating the `developer` policy with the OIDC `dev` group
- The OIDC client secret will be retrieved from GCP Secret Manager at `projects/my-project/secrets/oidc-client-secret/versions/latest`
  - If the `vault.ciacco.net/gcpsm-oidc-client-secret` annotation is missing, the `oidc_client_secret` key/value can be used
  - Secrets can also be retrieved from Vault using the `vault.ciacco.net/hv-oidc-client-secret` annotation
- OIDC config defining the Vault OIDC settings
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    vault.ciacco.net/gcpsm-oidc-client-secret: projects/my-project/secrets/oidc-client-secret/versions/latest
    # vault.ciacco.net/hv-oidc-client-secret: kv/data/oidc-secrets/client-secret#secret
  labels:
    vault-policy-controller: enabled
  name: my-oidc-config
data:
  vault-config.yml: |
    policies:
      - name: developer
        rules: |
          path "kv/data/*" { capabilities = ["read"] }
    auth:
      - type: oidc
        path: oidc
        description: login for oidc users
        config:
          bound_issuer: https://accounts.google.com
          default_role: developer
          oidc_client_id: my-client-id
          # oidc_client_secret: my-client-secret
          oidc_discovery_url: https://accounts.google.com
        tune:
          default_lease_ttl: 15m
          max_lease_ttl: 30m
        roles:
          - name: oidc-dev-role
            type: oidc
            allowed_redirect_uris:
              - http://localhost:8200/ui/vault/auth/oidc/oidc/callback
              - http://localhost:8250/oidc/callback
            bound_audiences:
              - vault-users
            bound_claims:
              groups:
                - dev
            groups_claim: groups
            oidc_scopes:
              - profile
              - openid
              - groups
            policies:
              - developer
            user_claim: sub
```
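Similarly, the applied OIDC settings can be inspected afterwards with the Vault CLI (read access to these paths is required; the client secret itself is never returned by these reads):

```shell
# Confirm the OIDC auth method tuning took effect
vault read sys/auth/oidc/tune

# Confirm the OIDC role was created
vault read auth/oidc/role/oidc-dev-role
```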
## Installation

### Vault Permissions

- Download the Vault binary from HashiCorp: Install Vault

- Export variables

  ```shell
  # Vault Address
  export VAULT_ADDR=http://localhost:8200

  # Kubernetes Auth Method
  export K8S_NAMESPACE=vault
  export K8S_SERVICE_ACCOUNT=vault-policy-controller

  # GCP Auth Method
  export GCP_SERVICE_ACCOUNT=vault-policy-controller@my-gcp-project.iam.gserviceaccount.com

  # OIDC Client Secret Manager Project
  export GCP_SECRET_MANAGER_PROJECT=my-secret-gcp-project
  ```
- Authenticate to Vault

  ```shell
  vault login
  ```

- Create the Vault Policy Controller ACL

  ```shell
  cat <<EOH | vault policy write vault-policy-controller -
  # Manage auth methods broadly across Vault
  path "auth/*" {
    capabilities = ["create", "delete", "list", "read", "update"]
  }

  # Create and manage ACL policies
  path "sys/policies/acl/*" {
    capabilities = ["create", "delete", "list", "read", "update"]
  }

  # Tune GCP auth settings
  path "sys/auth/gcp/tune" {
    capabilities = ["create", "delete", "list", "read", "sudo", "update"]
  }

  # Tune Kubernetes auth settings
  path "sys/auth/kubernetes/tune" {
    capabilities = ["create", "delete", "list", "read", "sudo", "update"]
  }

  # Tune OIDC auth settings
  path "sys/auth/oidc/tune" {
    capabilities = ["create", "delete", "list", "read", "sudo", "update"]
  }

  # OIDC client secret path
  path "kv/data/oidc-secrets/client-secret" {
    capabilities = ["read"]
  }
  EOH
  ```
- Enable the auth method (skip if already configured)

  - Kubernetes

    ```shell
    # Gather kubernetes info from your kubeconfig
    export KUBECONFIG="${KUBECONFIG:-${HOME}/.kube/config}"
    export CONTEXT=$(kubectl config current-context)
    export CLUSTER=$(yq ".contexts[] | select(.name == \"${CONTEXT}\").context.cluster" "${KUBECONFIG}")
    yq ".clusters[] | select(.name == \"${CLUSTER}\").cluster.certificate-authority-data" "${KUBECONFIG}" | base64 -d > /tmp/ca.crt
    export IP=$(kubectl get service kubernetes -n default -ojsonpath='{.spec.clusterIP}')

    # Enable the kubernetes auth method
    vault auth enable kubernetes

    # Configure the kubernetes auth method
    vault write auth/kubernetes/config \
      kubernetes_host="https://${IP}" \
      kubernetes_ca_cert=@/tmp/ca.crt

    # Clean up temp file
    rm -f /tmp/ca.crt
    ```

  - GCP

    ```shell
    # Enable the GCP auth method
    vault auth enable gcp
    ```
- Create the Vault Policy Controller auth role

  - Kubernetes

    ```shell
    vault write auth/kubernetes/role/vault-policy-controller \
      bound_service_account_names="${K8S_SERVICE_ACCOUNT}" \
      bound_service_account_namespaces="${K8S_NAMESPACE}" \
      policies=vault-policy-controller
    ```

  - GCP

    ```shell
    vault write auth/gcp/role/vault-policy-controller \
      bound_service_accounts="${GCP_SERVICE_ACCOUNT}" \
      max_jwt_exp=3600 \
      policies=vault-policy-controller \
      type=iam
    ```

- Grant the Vault Policy Controller Secret Manager access

  ```shell
  gcloud projects add-iam-policy-binding "${GCP_SECRET_MANAGER_PROJECT}" \
    --member="serviceAccount:${GCP_SERVICE_ACCOUNT}" \
    --role="roles/secretmanager.secretAccessor"
  ```
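Before deploying the controller, a quick read-back of the objects created above can confirm the permissions are in place (standard Vault CLI reads, using a token that can access these paths):

```shell
# Confirm the auth roles exist
vault read auth/kubernetes/role/vault-policy-controller
vault read auth/gcp/role/vault-policy-controller

# Confirm the ACL policy contents
vault policy read vault-policy-controller
```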
## Deployment

See the Helm chart's values.yaml file for the value override options.

- Render and apply the Helm chart

  ```shell
  helm template ./chart/vault-policy-controller \
    -f /path/to/overrides/values.yaml \
    | kubectl apply -f -
  ```
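After applying the manifests, the rollout can be verified with kubectl; the deployment name and namespace below assume the chart defaults used elsewhere in this document and may differ with your value overrides:

```shell
# Wait for the controller to become ready
kubectl rollout status deployment/vault-policy-controller -n vault

# Tail the controller logs (JSON by default, per LOG_FORMAT)
kubectl logs deployment/vault-policy-controller -n vault --tail=20
```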