# Cluster IQ

Cluster IQ is a tool for taking stock of the OpenShift clusters and their
resources running on the most common cloud providers, collecting relevant
information about compute resources, access routes, and billing.
Metrics and monitoring are out of the scope of this project; the main
purpose is to maintain an updated inventory of the clusters and offer an
easier way to identify, manage, and estimate costs.
## Supported cloud providers
The scope of the project is to take stock of the most common public cloud
providers, but as the component dedicated to scraping data is decoupled, more
providers could be included in the future.
The following table shows the compatibility matrix and which features are
available for each cloud provider:

| Cloud Provider | Compute Resources | Billing | Actions | Scheduled Actions |
|----------------|-------------------|---------|---------|-------------------|
| AWS            | Yes               | Yes     | Yes     | Yes               |
| Azure          | No                | No      | No      | No                |
| GCP            | No                | No      | No      | No                |
## Architecture
The following graph shows the architecture of this project:

## Documentation
The following documentation is available:
## Installation
This section explains how to deploy ClusterIQ and ClusterIQ Console.
### Prerequisites

### Cloud provider RBAC configuration
Before configuring credentials for ClusterIQ, it is recommended to access the
user and permission management service and create a dedicated user exclusively
for ClusterIQ. This user should have the minimum necessary permissions to
function properly. This approach enhances the security of your public cloud
provider accounts by enforcing the principle of least privilege.
Each Cloud Provider has a different way of configuring users and permissions.
Before continuing, check and follow the steps for each Cloud Provider you want
to configure:
### Accounts Configuration
- Create a folder called `secrets` for saving the cloud credentials. This
  folder is ignored in this repo to keep your credentials safe.

  ```shell
  mkdir secrets
  export CLUSTER_IQ_CREDENTIALS_FILE="./secrets/credentials"
  ```

  > ⚠ Please take care and don't include your credentials in the repo.
- Create your credentials file with the AWS credentials of the accounts you
  want to scrape. The file must follow this format:

  ```shell
  echo "
  [ACCOUNT_NAME]
  provider = {aws/gcp/azure}
  user = XXXXXXX
  key = YYYYYYY
  billing_enabled = {true/false}
  " >> $CLUSTER_IQ_CREDENTIALS_FILE
  ```
  > ⚠ The accepted values for `provider` are `aws`, `gcp`, and `azure`, but
  > scraping is only supported for `aws` at the moment. The credentials file
  > should be placed on the path `secrets/*` to work with
  > docker/podman-compose.

  > ❗ This file structure was designed to be generic, but it works
  > differently depending on the cloud provider. For AWS, `user` refers to the
  > `ACCESS_KEY`, and `key` refers to the `SECRET_ACCESS_KEY`.

  > ❗ Some Cloud Providers charge extra costs when querying the Billing
  > APIs (like AWS Cost Explorer). Be careful when enabling this module. Check
  > your account before enabling it.
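Because the file format above is a plain INI-style list of account sections, its structure can be sanity-checked with standard tools before handing it to ClusterIQ. The snippet below is purely illustrative (not part of the project); the fallback sample it creates is placeholder data:

```shell
# Illustrative sanity check for the credentials file format shown above.
# The fallback sample created here is placeholder data, not real keys.
CREDS="${CLUSTER_IQ_CREDENTIALS_FILE:-./secrets/credentials}"
if [ ! -f "$CREDS" ]; then
  CREDS="$(mktemp)"
  printf '[demo-account]\nprovider = aws\nuser = XXXXXXX\nkey = YYYYYYY\nbilling_enabled = false\n' > "$CREDS"
fi

# One "[...]" header per account, four expected keys per account.
sections="$(grep -cE '^\[[^]]+\]$' "$CREDS")"
fields="$(grep -cE '^(provider|user|key|billing_enabled) *=' "$CREDS")"
echo "accounts: $sections, recognized fields: $fields"
```

If `fields` is not four times `sections`, one of the account entries is likely missing a key.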
### Openshift Deployment
Since version 0.3, ClusterIQ includes its own Helm Chart, located at
`./deployments/helm/cluster-iq`. For more information about the supported
parameters, check the Configuration section.
- Prepare your cluster and CLI:

  ```shell
  oc login ...
  export NAMESPACE="cluster-iq"
  oc new-project $NAMESPACE
  ```
- A secret containing the credentials file is needed. To create it, use the
  following command:

  ```shell
  oc create secret generic credentials -n $NAMESPACE \
    --from-file=credentials=$CLUSTER_IQ_CREDENTIALS_FILE
  ```
- Configure your cluster-iq deployment by modifying the
  `./deployments/helm/cluster-iq/values.yaml` file.
- Deploy the Helm Chart:

  ```shell
  helm upgrade cluster-iq ./deployments/helm/cluster-iq/ \
    --install \
    --namespace $NAMESPACE \
    -f ./deployments/helm/cluster-iq/values.yaml
  ```
- Check that every resource was created correctly:

  ```shell
  oc get pods -w -n $NAMESPACE
  helm list -n $NAMESPACE
  ```
- Once every pod is up and running, trigger the scanner manually to
  initialize the inventory:

  ```shell
  oc create job --from=cronjob/scanner scanner-init -n $NAMESPACE
  ```
### Uninstalling
To uninstall the ClusterIQ Helm chart, use the following commands:

```shell
helm uninstall cluster-iq -n $NAMESPACE
helm list -n $NAMESPACE
```
### Local Deployment (for development)
To deploy ClusterIQ locally for development purposes, check the following
document.
### DB Backup
For backing up or restoring the ClusterIQ database, check the following
document. It also describes how to manage data migration when a new release of
ClusterIQ changes the DB data structure.
## Configuration
Available configuration via environment variables:
| Key | Value | Description |
|-----|-------|-------------|
| `CIQ_AGENT_INSTANT_SERVICE_LISTEN_URL` | string (Default: `"0.0.0.0:50051"`) | ClusterIQ Agent gRPC listen URL |
| `CIQ_AGENT_POLLING_SECONDS_INTERVAL` | integer (Default: `30`) | ClusterIQ Agent polling time (seconds) |
| `CIQ_AGENT_URL` | string (Default: `"agent:50051"`) | ClusterIQ Agent listen URL |
| `CIQ_API_LISTEN_URL` | string (Default: `"0.0.0.0:8080"`) | ClusterIQ API listen URL |
| `CIQ_API_URL` | string (Default: `""`) | ClusterIQ API public endpoint |
| `CIQ_AGENT_LISTEN_URL` | string (Default: `"0.0.0.0:50051"`) | ClusterIQ Agent listen URL |
| `CIQ_DB_URL` | string (Default: `"postgresql://pgsql:5432/clusteriq"`) | ClusterIQ DB URL |
| `CIQ_CREDS_FILE` | string (Default: `""`) | Cloud providers accounts credentials file |
| `CIQ_LOG_LEVEL` | string (Default: `"INFO"`) | ClusterIQ logs verbosity mode |
| `CIQ_SKIP_NO_OPENSHIFT_INSTANCES` | boolean (Default: `true`) | Skips scanned instances without cluster |
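For a quick local run, the variables from the table can simply be exported before starting a component. The values below mirror the documented defaults, except `CIQ_LOG_LEVEL`, which is raised to `DEBUG` purely as an example:

```shell
# Example environment for a local run; values mirror the defaults in the
# table above, except CIQ_LOG_LEVEL, raised here for troubleshooting.
export CIQ_API_LISTEN_URL="0.0.0.0:8080"
export CIQ_DB_URL="postgresql://pgsql:5432/clusteriq"
export CIQ_CREDS_FILE="./secrets/credentials"
export CIQ_LOG_LEVEL="DEBUG"
export CIQ_SKIP_NO_OPENSHIFT_INSTANCES="true"
```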
## Scanner
The scanner searches each region for instances (servers) that are part of an
Openshift cluster. As each provider and each service has different
specifications, the Scanner includes a specific module dedicated to each of
them. These modules are automatically activated or deactivated depending on the
configured accounts and their configuration.
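As a sketch of that activation logic (illustrative only, not the Scanner's actual code), the set of provider modules to enable can be read straight from the `provider` fields of the credentials file:

```shell
# Sketch of module activation: collect the distinct providers named in
# the credentials file. The sample file written here is a placeholder.
CREDS="$(mktemp)"
printf '[acc-1]\nprovider = aws\n[acc-2]\nprovider = aws\n[acc-3]\nprovider = azure\n' > "$CREDS"

# Each distinct provider value would switch on the matching Scanner module.
providers="$(grep -E '^provider *=' "$CREDS" | sed -E 's/^provider *= *//' | sort -u)"
echo "modules to enable:" $providers
```

Here two `aws` accounts and one `azure` account collapse to two modules, `aws` and `azure`.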
```shell
# Building in a container
make build-scanner

# Building locally
make local-build-scanner
```
## API Server
The API server mediates between the UI and the DB.
```shell
# Building in a container
make build-api

# Building locally
make local-build-api
```
## Agent (gRPC)
The Agent performs actions over the selected cloud resources. It only accepts
incoming requests from the API.
Currently, as of release v0.4, the agent only supports powering clusters on and off on AWS.
```shell
# Building in a container
make build-agent

# Building locally
make local-build-agent
```