CMK (Customer-Managed-Keys)
This repository contains the application and business logic for the
CMK (Customer-Managed-Keys) layer of Key Management Service.
Contents
Dependencies
Note that, depending on your environment, not all of these programs may be required.
Prerequisite
CMK has external dependencies that require credentials. These are stored in env/secret and are created from env/blueprints.
Run the following command to generate the empty env/secret files, which you then configure:
make create-empty-secrets
In order to run the full CMK workflow and correctly start the task-worker, one of the system information implementations has to be configured.
To select which plugin is used, specify SIS_PLUGIN in the Make target; the selected plugin must also be present in values-dev.yaml.
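Plugin selection like this is typically implemented as a name-to-constructor registry. The sketch below is purely illustrative; the interface, registry, and plugin names are hypothetical and not the repository's actual API:

```go
package main

import "fmt"

// SystemInfoSource is a hypothetical interface that a system
// information plugin would implement.
type SystemInfoSource interface {
	Name() string
}

type mockSIS struct{}

func (mockSIS) Name() string { return "mock" }

// registry maps SIS_PLUGIN values to plugin constructors.
var registry = map[string]func() SystemInfoSource{
	"mock": func() SystemInfoSource { return mockSIS{} },
}

// newSIS resolves the plugin named by the SIS_PLUGIN value.
func newSIS(name string) (SystemInfoSource, error) {
	ctor, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("unknown SIS_PLUGIN %q", name)
	}
	return ctor(), nil
}

func main() {
	s, err := newSIS("mock")
	fmt.Println(s.Name(), err) // prints "mock <nil>"
}
```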
Event Processor
Event processing uses Orbital to send and process events. Orbital requires the target AMQP message brokers to be configured.
Additionally, if mTLS is used, certificate files need to be provided in the env/secret/event-processor directory.
They include a CA certificate to verify the server, a client certificate, and a private key.
Identity Management
Identity management has to be set up to obtain information about identities (e.g. user groups).
To configure it, replace the values in env/secret/identity-management/scim.json
Client Data Signing
To sign client data within HTTP requests, we need a private key and a public key. For local setup and testing, both can be found in the env/secret/signing-keys directory after running
make generate-signing-keys
That key pair is used to secure the requests: the private key signs the requests (for example in tests) and the matching public key verifies the signatures. How to sign a header is shown in the Encode function.
Local Execution
Please also see section Debugging for details on how to debug
these environments.
K3d Environment
Key Features
- Clean Namespace: Deletes all resources in the
cmk namespace to ensure a clean environment.
- Install k3d: Checks if
k3d is installed; if not, it automatically installs it.
- Create/Recreate Cluster: Creates or recreates a k3d cluster named
cmkcluster.
- Import Docker Image: Imports a Docker image into the k3d cluster's internal registry.
- Helm Release Management: Automatically installs or upgrades the Helm release.
- Namespace Creation: If the specified namespace does not exist, the command creates it automatically.
- Set up PostgreSQL database: Applies the PostgreSQL setup from the Bitnami repository.
- Import test data: Imports test data.
- Set up port forwarding: Sets up port forwarding so that the application is accessible on localhost.
Running
make start-cmk
The application should be accessible at http://localhost:8080, for example http://localhost:8080/keys.
Helm chart directory
The Helm charts required for deployment are located in the ./chart directory.
Running aws-kms local mock
Pull the Helm chart repository. The Helm charts required for deployment are located in the following repository:
Update here with the cmk charts location
Set the environment variable CMK_HELM_CHART to point to the 'charts' directory of the helm-chart repository.
Example:
export CMK_HELM_CHART=/helm-charts/charts
Run:
make apply-kms-local-chart
API Access with Client Data
Running the CMK application locally requires API requests to include signed client data headers for authentication.
Therefore, you need to generate these headers using the generate_client_headers.go utility.
A detailed guide can be found here.
Troubleshooting
Credentials Issues
If you encounter problems with Docker credentials (e.g., login or authentication
issues), you can modify the Docker configuration file to resolve them. The
credentials store used by Docker is specified in the ~/.docker/config.json file.
- Open the
~/.docker/config.json file in a text editor.
- Locate the
credsStore field. It should look like this:
{
"credsStore": "osxkeychain" // for macOS
}
Application Startup Delay
The cmk application may take some time to fully start after deployment.
This is because it waits for the PostgreSQL database to become available.
Application Startup Failure
If the application does not start as expected:
- Check the logs of the
cmk application for messages about the database connection.
kubectl logs <cmk-pod-name> -n cmk
- If running with Colima, ensure that resources are sufficient. The following command has been deemed sufficient:
colima start --memory 4 --disk 150
Swagger UI
Swagger UI allows you to visualize and interact with the API's resources. It is containerized and can be set up via:
make swagger-ui
This simply runs a Docker image that serves Swagger UI. It can be found at localhost:8087/swagger
Development
Building
Building can be done via the following Make command:
make build
Unit tests
Running tests can be done through a Make command:
make test
How to write Unit Tests
Guidelines:
- Should test a small section of code, usually a single function
- Should be idempotent and independent of other tests' inputs and outputs
- Shouldn't make calls to external services; use mock clients instead
[!NOTE]
Currently there are tests that do not follow these guidelines.
Please fix them or create an enhancement ticket.
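The mock-client guideline usually means the code under test depends on a small interface, with a stub substituted in tests. A generic sketch, with all type names illustrative rather than taken from this repository:

```go
package main

import "fmt"

// KeyFetcher is the narrow interface the code under test depends on.
type KeyFetcher interface {
	FetchKey(id string) (string, error)
}

// KeyLabel is the unit under test: it formats a key without caring
// where the key comes from.
func KeyLabel(f KeyFetcher, id string) (string, error) {
	k, err := f.FetchKey(id)
	if err != nil {
		return "", err
	}
	return "key:" + k, nil
}

// mockFetcher stands in for the real external client in tests.
type mockFetcher struct{ keys map[string]string }

func (m mockFetcher) FetchKey(id string) (string, error) {
	k, ok := m.keys[id]
	if !ok {
		return "", fmt.Errorf("not found: %s", id)
	}
	return k, nil
}

func main() {
	m := mockFetcher{keys: map[string]string{"a": "abc"}}
	got, err := KeyLabel(m, "a")
	fmt.Println(got, err) // prints "key:abc <nil>"
}
```

Because KeyLabel only sees the interface, the test never reaches an external service and stays idempotent.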
To ensure consistency, testutils were created. Please use them, and extend them if needed for your use case.
Refer to the code documentation of the following functions for their usage and available options.
testutils.NewTestDB(tb testing.TB, cfg TestDBConfig, opts ...TestDBConfigOpt) (*multitenancy.DB, []string)
testutils.NewAPIServer(tb testing.TB, db *multitenancy.DB, testCfg TestAPIServerConfig) *http.ServeMux
testutils.MakeHTTPRequest(tb testing.TB, server *http.ServeMux, opt RequestOptions) *httptest.ResponseRecorder
testutils.WithJSON(tb testing.TB, i any) io.Reader
testutils.WithString(tb testing.TB, i any) io.Reader
testutils.GetJSONBody[t any](tb testing.TB, w *httptest.ResponseRecorder)
testutils.New<modelType>(m func(*model.<modelType>) *model.<modelType>)
testutils.NewGRPCSuite(tb testing.TB, services ...systemsgrpc.ServiceServer)
Integration tests
Running integration tests can be done through a Make command:
make integration_test
NOTE: Some integration tests require credentials. Refer to the Prerequisite chapter to set those up.
If no credentials are provided, the tests are skipped!
Debugging
Run the following command to get a list of your pods:
sudo kubectl get pod --all-namespaces
Then, using the relevant pod (usually of form cmk-XXX-YYY):
sudo kubectl logs -n cmk cmk-XXX-YYY
This should display any logs from the cmk application.
API Implementations
The API clients required for CMK can be generated from the OpenAPI spec.
We use oapi-codegen to generate Go code based on the OpenAPI spec.
To generate the clients, execute make codegen with one of the listed api flags.
Example: make codegen api=cmk
Logging
CMK uses context-based logging via slogctx, injecting a logger into the context.
For API requests, the logger is injected with default information by the logging middleware; in other scenarios, relevant information is also injected later.
- Static information can be added to all logs via values.yaml labels, as documented (e.g. Target: CMK)
- Dynamic information that repeats within a certain context should be injected into the logger; otherwise, add it as an attribute on the specific log
HTTP Error Mapping
Our error mapping system automatically converts internal errors to structured API responses with appropriate HTTP status codes and meaningful error messages.
Each operation in our API has specific error mappings that are automatically
selected based on the operation ID.
How Error Mapping Works
The core of our error mapping system is the ErrorMap struct which associates internal errors
with standardized API responses:
type ErrorMap struct {
Error []error // Internal errors to match against
Detail cmkapi.DetailedError // API response details
}
When an error occurs, the system:
- Finds the appropriate error mappings for that operation
- Matches the encountered error against all possible mappings
- Selects the best matching error response
- Returns a standardized error response to the client
Adding New Error Mappings
To add new error mappings for your feature, follow these steps:
- Define Error Constants
First, define your error constants in the apierrors package:
var (
ErrMyNewError = errors.New("description of the new error")
)
- Create Error Mappings
Add mappings to the appropriate entity's mapping slice (e.g., system, key, keyConfiguration):
var system = []ErrorMap{
// Existing mappings...
{
Error: []error{ErrMyNewError},
Detail: cmkapi.DetailedError{
Code: "MY_NEW_ERROR_CODE",
Message: "User-friendly error message",
Status: http.StatusBadRequest,
},
},
// More specific mapping with multiple errors
{
Error: []error{ErrMyNewError, repo.ErrNotFound},
Detail: cmkapi.DetailedError{
Code: "MY_NEW_ERROR_NOT_FOUND",
Message: "Resource not found: detailed message",
Status: http.StatusNotFound,
},
},
}
How Errors Are Matched
- If there is a high-priority API error on the error chain, that API error is selected
- Errors on the error chain that do not appear in a mapping are ignored
- The mapping with the greatest number of matching errors is selected
- If no matches are found, a default internal server error is returned
This allows for precise error handling when errors are wrapped or combined.
Tenant Manager CLI
A command-line tool for managing tenants in the database.
Local usage
Compile the CLI
go build -o tenant-manager-cli ./cmd/tenant-manager-cli/main.go
Requirements
A config.yaml file containing the database configuration should be present in the same directory as the compiled binary.
Example config.yaml:
database:
host:
source: embedded
value: localhost
user:
source: embedded
value: postgres
secret:
source: embedded
value: secret
name: cmk
port: "5432"
Running commands
./tenant-manager-cli <command> [flags]
Run:
./tenant-manager-cli --help
to see all available commands.
Run CLI in cluster - Makefile target
A Makefile target is provided to run the CLI commands in the cluster.
make tenant-cli ARGS="<command>"
Async Task CLI
A command-line tool for managing asynchronous tasks. This can gather stats,
list tasks, and invoke periodic tasks manually.
The tool should be run in the cluster, with task queues and task workers present. A Makefile target is provided to run the CLI commands in the cluster.
make task-cli ARGS="<command>"
For example, to list all supported commands, run:
make task-cli ARGS="--help"
DB Migrator
A command-line tool to trigger database migrations. It can run schema and data migrations for public and tenant schemas.
This tool should be run as a K8s Job in the cluster, as a Helm pre-hook, so that schema migrations run before other deployments.
To list all supported commands, run:
go build -o db-migrator ./cmd/db-migrator/
./db-migrator -h
Authors
KMS dev team 2
Version History
License
This project is licensed under the [NAME HERE] License - see the LICENSE.md file for details