Create a Hybrid Catalog
Create Hybrid catalogs to provision infrastructure and deploy applications together in a single, coordinated deployment.
Summary
Hybrid catalogs combine IAC and YAML catalog patterns to provision cloud infrastructure resources and deploy Kubernetes applications that use those resources.
Prerequisites
- Catalog prerequisites completed
- Understanding of both YAML catalogs and IAC catalogs
- Organization GitOps repository cloned locally
- Familiarity with Terraform and Helm
When to Use Hybrid Catalogs
Use Hybrid catalogs when you need to:
- Provision infrastructure and deploy applications that use it (e.g., database + application)
- Deploy monitoring infrastructure with collection agents
- Create complete application stacks with dedicated resources
- Provision storage with file management applications
- Deploy network infrastructure with ingress controllers
Example scenarios:
- PostgreSQL database + database management UI
- S3 bucket + file upload service
- Redis cache + cache management dashboard
- CloudWatch logs + Datadog agent
Step 1: Create Catalog Directory Structure
1. Navigate to your GitOps repository root:
   cd <your-org>-gitops
2. Create a directory at the root level with both YAML and IAC components:
   mkdir <catalog-name>
   mkdir <catalog-name>/templates
3. Create all required files:
# YAML catalog files
touch <catalog-name>/Chart.yaml
touch <catalog-name>/values.yaml
touch <catalog-name>/templates/application.yaml
# IAC catalog files
touch <catalog-name>/provider
touch <catalog-name>/main.tf
touch <catalog-name>/variables.tf
touch <catalog-name>/outputs.tf
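After these commands, the catalog directory should contain the following layout (comments summarize each file's role as described in Steps 2 and 3):

```text
<catalog-name>/
├── Chart.yaml            # Helm chart metadata (YAML side)
├── values.yaml           # Parameters with @input annotations
├── templates/
│   └── application.yaml  # ArgoCD Application manifest
├── provider              # Crossplane provider configuration (IAC side)
├── main.tf               # Terraform infrastructure resources
├── variables.tf          # Terraform input variables
└── outputs.tf            # Terraform outputs
```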
Step 2: Configure YAML Components
Follow the YAML catalog guide to configure:
- Chart.yaml - Helm chart metadata
- values.yaml - Parameters with @input annotations (use camelCase)
- templates/ - ArgoCD Application manifests
Refer to the YAML catalog documentation for detailed steps.
Step 3: Configure IAC Components
Follow the IAC catalog guide to configure:
- provider - Crossplane provider configuration with tokens
- main.tf - Terraform infrastructure resources
- variables.tf - Input variables (use snake_case)
- outputs.tf - Resource outputs for application use
Refer to the IAC catalog documentation for detailed steps.
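Note the naming split: values.yaml parameters use camelCase, while Terraform variables use snake_case, and the platform maps one onto the other (see Step 6). The mapping can be sketched with a one-liner; this is an illustration of the naming rule, not something you need to run (GNU sed assumed for `\l`):

```shell
# Sketch: how a camelCase values.yaml key maps to its snake_case
# Terraform variable name (the platform performs this conversion for you)
to_snake() {
  printf '%s\n' "$1" | sed -E 's/([A-Z])/_\l\1/g'
}

to_snake databaseInstanceClass   # database_instance_class
to_snake allocatedStorage        # allocated_storage
```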
Step 4: Link Infrastructure to Applications
Use Terraform outputs to provide infrastructure connection details to applications.
Export Infrastructure Details
In outputs.tf, export resource attributes:
output "database_endpoint" {
description = "Database connection endpoint"
value = aws_db_instance.main.endpoint
}
output "database_port" {
description = "Database port"
value = aws_db_instance.main.port
}
output "database_name" {
description = "Database name"
value = aws_db_instance.main.db_name
}
output "secret_arn" {
description = "ARN of secret containing database credentials"
value = aws_secretsmanager_secret.db_credentials.arn
}
Reference in Application Templates
Use External Secrets to inject infrastructure details:
templates/external-secret.yaml:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: {{ .Values.appName }}-database
namespace: {{ .Values.namespace }}
spec:
secretStoreRef:
name: aws-secrets-manager
kind: SecretStore
target:
name: database-connection
creationPolicy: Owner
data:
- secretKey: endpoint
remoteRef:
key: {{ .Values.databaseSecretArn }}
property: endpoint
- secretKey: password
remoteRef:
key: {{ .Values.databaseSecretArn }}
property: password
- secretKey: username
remoteRef:
key: {{ .Values.databaseSecretArn }}
property: username
templates/application.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: {{ .Values.clusterName }}-{{ .Values.appName }}
namespace: argocd
spec:
destination:
namespace: {{ .Values.namespace }}
name: {{ .Values.clusterDestination }}
project: {{ .Values.project }}
source:
chart: postgresql-client
repoURL: https://charts.example.com
targetRevision: "1.0.0"
helm:
values: |
database:
existingSecret: database-connection
endpointKey: endpoint
usernameKey: username
passwordKey: password
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
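The application's pods can then consume the synced secret, for example via envFrom in the workload's pod spec. Whether this fragment applies depends on the chart you deploy; it is a sketch, not part of the catalog above:

```yaml
# Fragment of a Deployment pod spec consuming the ExternalSecret target
containers:
  - name: app
    image: example/app:latest   # illustrative image
    envFrom:
      - secretRef:
          name: database-connection   # target secret created by the ExternalSecret
```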
Step 5: Coordinate Deployment Phases
Hybrid catalogs deploy in two phases:
Phase 1: Infrastructure Provisioning
The IAC components deploy first:
- ProviderConfig and Workspace CRDs created
- Crossplane executes Terraform
- Infrastructure resources provisioned
- Outputs become available
Phase 2: Application Deployment
Once infrastructure is ready (Available state):
- YAML components deploy
- ArgoCD Applications created
- External Secrets fetch infrastructure details
- Applications connect to provisioned resources
The Konstruct operator handles phase coordination automatically.
Step 6: Configure Combined Values
In values.yaml, combine parameters for both IAC and YAML:
# Application parameters (camelCase)
# @input.type: string
# @input.description: Application name
# @input.required: true
appName: my-app
# @input.type: string
# @input.description: Namespace for application
# @input.required: true
# @input.default: default
namespace: default
# Infrastructure parameters (will be converted to snake_case for Terraform)
# @input.type: string
# @input.description: Database instance identifier
# @input.required: true
databaseName: myapp-db
# @input.type: enum
# @input.description: Database instance class
# @input.options: db.t3.micro,db.t3.small,db.t3.medium
# @input.required: true
# @input.default: db.t3.micro
databaseInstanceClass: db.t3.micro
# @input.type: string
# @input.description: Allocated storage in GB
# @input.required: true
# @input.default: 20
allocatedStorage: 20
# Secret passed to application
# @input.type: secret
# @input.description: Database admin password
# @input.required: true
# @input.secretKey: db-admin-password
# @input.secretEnv: DB_ADMIN_PASSWORD
# @input.secretBackend: vault
# @input.secretPath: /myapp-db
databasePassword: ""
# Standard values
# @input.type: string
# @input.description: Target cluster
# @input.required: true
# @input.default: in-cluster
clusterDestination: in-cluster
# @input.type: string
# @input.description: ArgoCD project
# @input.required: true
# @input.default: default
project: default
Step 7: Commit and Push Catalog
1. Stage catalog files:
   git add <catalog-name>/
2. Commit changes:
   git commit -m "feat(catalog): add <catalog-name> Hybrid catalog"
3. Push to remote:
   git push origin main
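Before committing, it can help to verify that every file from Step 1 is present. A minimal sketch (the `check_catalog` helper and the throwaway demo directory are illustrative, not part of the platform tooling):

```shell
# Sketch: sanity-check that a Hybrid catalog directory contains every
# required file from Step 1 before committing (pass the catalog path).
check_catalog() {
  status=0
  for f in Chart.yaml values.yaml templates/application.yaml \
           provider main.tf variables.tf outputs.tf; do
    [ -f "$1/$f" ] || { echo "missing: $1/$f"; status=1; }
  done
  return "$status"
}

# Example: build a throwaway catalog skeleton and verify it
demo=$(mktemp -d)
mkdir -p "$demo/templates"
touch "$demo/Chart.yaml" "$demo/values.yaml" "$demo/templates/application.yaml" \
      "$demo/provider" "$demo/main.tf" "$demo/variables.tf" "$demo/outputs.tf"
check_catalog "$demo" && echo "catalog layout OK"
rm -rf "$demo"
```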
Deployment Locations
Hybrid catalogs write to two locations in the GitOps registry:
Platform GitOps (Infrastructure)
<org>-gitops/
└── registry/
└── clusters/
└── <cluster-name>/
└── components/
└── iac/
└── <catalog-name>.yaml
Application GitOps (Applications)
<org>-gitops/
└── registry/
└── environments/
└── <environment>/
└── <cluster-name>/
├── <catalog-name>.yaml
└── <catalog-name>/
Complete Example: PostgreSQL + Admin UI
Here's a complete Hybrid catalog that provisions a PostgreSQL database and deploys a management UI:
Chart.yaml:
apiVersion: v2
name: postgres-with-admin
description: PostgreSQL database with pgAdmin management UI
type: application
version: 0.1.0
values.yaml:
# Application
# @input.type: string
# @input.description: Application name
# @input.required: true
appName: pgadmin
# @input.type: string
# @input.description: Namespace
# @input.required: true
namespace: database
# Database Infrastructure
# @input.type: string
# @input.description: Database identifier
# @input.required: true
databaseName: myapp-postgres
# @input.type: enum
# @input.description: Instance class
# @input.options: db.t3.micro,db.t3.small,db.t3.medium
# @input.required: true
# @input.default: db.t3.micro
instanceClass: db.t3.micro
# Secrets
# @input.type: secret
# @input.description: Database admin password
# @input.required: true
# @input.secretKey: admin-password
# @input.secretEnv: DB_ADMIN_PASSWORD
# @input.secretBackend: vault
# @input.secretPath: /postgres
adminPassword: ""
main.tf:
resource "aws_db_instance" "main" {
identifier = var.database_name
engine = "postgres"
engine_version = "15.3"
instance_class = var.instance_class
allocated_storage = 20
db_name = replace(var.database_name, "-", "_")
username = "postgres"
password = var.admin_password
skip_final_snapshot = true
tags = {
Name = var.database_name
}
}
variables.tf:
variable "database_name" {
description = "Database identifier"
type = string
}
variable "instance_class" {
description = "RDS instance class"
type = string
}
variable "admin_password" {
description = "Admin password"
type = string
sensitive = true
}
outputs.tf:
output "endpoint" {
value = aws_db_instance.main.endpoint
}
output "port" {
value = aws_db_instance.main.port
}
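To round out the example, the templates/application.yaml deploying the pgAdmin UI might look like the following sketch. The chart name, repoURL, and targetRevision are illustrative placeholders, and clusterDestination and project are assumed to come from the standard values shown in Step 6:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.clusterName }}-{{ .Values.appName }}
  namespace: argocd
spec:
  destination:
    namespace: {{ .Values.namespace }}
    name: {{ .Values.clusterDestination }}
  project: {{ .Values.project }}
  source:
    chart: pgadmin4                        # illustrative chart name
    repoURL: https://charts.example.com    # placeholder repository
    targetRevision: "1.0.0"
    helm:
      values: |
        existingSecret: database-connection
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```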
Best Practices
- Phase awareness: Understand that infrastructure deploys before applications
- Output usage: Export all infrastructure details applications might need
- Secret management: Store sensitive data in secret backends, not in code
- Error handling: Applications should handle delayed infrastructure availability
- Resource cleanup: Both phases clean up when catalog is deleted
- Testing: Test infrastructure provisioning independently before adding applications