Overview

This guide walks you through deploying SGP in a GCP project using the SGP GCP Terraform modules, which are maintained by Scale. The deployment is split into two Terraform phases with different privilege levels:
  • projectsetup/ — Run once with privileged GCP credentials. Creates a dedicated Terraform service account (with least privileges), enables required GCP APIs, and provisions Artifact Registries for SGP images and Helm charts.
  • deployments/<your-deployment>/ — Run with the service account created above. Provisions the GKE cluster, Cloud SQL, networking, and all SGP infrastructure.
This split ensures the main infrastructure Terraform never requires broad privileged credentials.

Prerequisites

  • Access to a GCP project with permissions to manage IAM, APIs, and compute resources
  • The following tools installed (each is used later in this guide): terraform, the gcloud CLI, kubectl, helm, crane, and Python 3
  • The following from Scale:
    • The SGP GCP Infrastructure modules (gcp/)
    • A workspace_id (8-digit number) and registration_secret unique to your deployment
  • An application configured in your identity provider (SAML or OIDC) to authenticate to the SGP platform (optional)
  • A custom domain for your deployment (optional)
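Before editing any Terraform, a quick local check of the Scale-provided values can save a failed apply later. This is a minimal sketch with placeholder values; substitute your own:

```shell
# Sanity-check the Scale-provided values (placeholder values shown)
WORKSPACE_ID="90000001"
REGISTRATION_SECRET="example-secret"

echo "$WORKSPACE_ID" | grep -Eq '^[0-9]{8}$' \
  && echo "workspace_id looks valid" \
  || echo "workspace_id must be an 8-digit number"

[ -n "$REGISTRATION_SECRET" ] \
  && echo "registration_secret is set" \
  || echo "registration_secret is empty"
```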

Installation

Step 1: Bootstrap the Project

Navigate to the projectsetup/ directory. This phase creates the Terraform service account and Artifact Registries. Edit the locals block at the top of projectsetup/main.tf:
locals {
  project              = "<your-gcp-project-id>"
  region               = "<gcp-region>"           # e.g. "us-east1"
  zone                 = "<gcp-zone>"             # e.g. "us-east1-a"
  service_account_name = "<workspace_id>-sa"      # Suffix for the service account name
  workspaceID          = "<workspace_id>"         # 8-digit number from Scale, e.g. "90000001"
}
Then initialize and apply:
cd projectsetup

terraform init
terraform apply
This creates:
  • A Terraform service account (sgp-tf-lp-<service_account_name>) with the IAM roles needed to provision SGP infrastructure
  • A service account key stored as terraform-service-account-key-secret in Secret Manager
  • Docker and Helm Artifact Registries (sgp-<workspace_id>-docker-repository, sgp-<workspace_id>-helm-repository)
  • All required GCP APIs enabled on the project
The Terraform outputs include the service_account_email of the newly created service account, which is useful for auditing. The service account key is automatically stored in Secret Manager and read by the main infrastructure Terraform — no manual key management is required.
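All of these resource names derive from your workspace ID. As a quick reference when scripting against them later, the naming convention can be sketched in shell (the workspace ID shown is illustrative):

```shell
# Resource names derived from the workspace ID (illustrative value shown)
WORKSPACE_ID="90000001"
SERVICE_ACCOUNT_NAME="${WORKSPACE_ID}-sa"

echo "sgp-tf-lp-${SERVICE_ACCOUNT_NAME}"        # Terraform service account
echo "sgp-${WORKSPACE_ID}-docker-repository"    # Docker Artifact Registry
echo "sgp-${WORKSPACE_ID}-helm-repository"      # Helm Artifact Registry
```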

Step 2: Copy SGP Images and Helm Charts

This step is only required if your deployment blocks internet access from the cluster (offline_mode = true in Step 3). Before provisioning the main infrastructure, copy SGP’s Docker images and Helm charts from Scale’s registry into your GCP Artifact Registry. Authenticate to your GCP Artifact Registry:
gcloud auth configure-docker <region>-docker.pkg.dev
Authenticate to Scale’s source registries (a Scale engineer will provide the tokens):
# For the AWS ECR source (pipe in the token provided by Scale;
# the variable name ECR_TOKEN is illustrative)
echo "$ECR_TOKEN" | crane auth login \
  022465994601.dkr.ecr.us-west-2.amazonaws.com --username AWS --password-stdin
Run the sync script from the projectsetup/ directory:
python download_manifest.py \
  --dest-docker-repository <region>-docker.pkg.dev/<project-id>/sgp-<workspace_id>-docker-repository \
  --dest-helm-repository oci://<region>-docker.pkg.dev/<project-id>/sgp-<workspace_id>-helm-repository \
  --skip-checks \
  --use-crane
Replace <region>, <project-id>, and <workspace_id> with the values from your project. The --skip-checks flag skips validation of source registry credentials — use it if you have already authenticated separately. Use --use-crane if crane is installed (faster than Docker for copying images between registries). This step can take 30–60 minutes depending on image count and network speed.
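For context, each image copy the script performs is equivalent to a single crane copy. The dry-run sketch below only prints the commands it would run rather than executing them; the destination path and image names are illustrative:

```shell
# Dry run: print the per-image copy commands instead of executing them
# (destination path and image names are illustrative)
SRC="022465994601.dkr.ecr.us-west-2.amazonaws.com"
DST="us-east1-docker.pkg.dev/my-project/sgp-90000001-docker-repository"

for image in sgp-api:v1.2.3 sgp-frontend:v1.2.3; do
  echo crane copy "${SRC}/${image}" "${DST}/${image}"
done
```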

Step 3: Configure the Deployment

Copy the deployments/prototype/ directory and rename it for your deployment:
cp -r deployments/prototype deployments/<your-deployment-name>
cd deployments/<your-deployment-name>
Edit the locals block at the top of main.tf. The deployment reads its Terraform service account credentials automatically from Secret Manager — no key file is required on disk.
locals {
  project  = "<your-gcp-project-id>"
  region   = "<gcp-region>"                                               # e.g. "us-east1"
  zone     = "<gcp-zone>"                                                 # e.g. "us-east1-b"

  # ── Deployment identity ────────────────────────────────────────────────
  workspaceID        = "<workspace_id>"                                   # 8-digit number from Scale
  registrationSecret = "<registration_secret>"                            # From Scale
  deploymentURL      = "<workspace_id>.workspace.egp.scale.com"          # Or your custom domain

  # ── Repositories ──────────────────────────────────────────────────────
  # baseRepository: used by workloads running inside the cluster to pull images
  # publicBaseRepository: used for initial image pulls during bootstrap (before private DNS resolves)
  # Both point to the same registry when using a private Artifact Registry.
  baseRepository       = "<region>-docker.pkg.dev/<project-id>/sgp-<workspace_id>-docker-repository"
  publicBaseRepository = "<region>-docker.pkg.dev/<project-id>/sgp-<workspace_id>-docker-repository"

  # ── System Manager ────────────────────────────────────────────────────
  systemManagerImageTag = "<system_manager_image_tag>"  # From Scale; must match manifest.yaml
  offline_mode          = true                          # true when using your own Artifact Registry

  # ── Auth ──────────────────────────────────────────────────────────────
  authType = "SAML"   # "SAML" or "OIDC"

  # ── Bootstrap ─────────────────────────────────────────────────────────
  # deploy_system_manager=false: Terraform creates infra only; run manual-helm-install.sh to bootstrap
  # deploy_system_manager=true:  Terraform also installs System Manager into the cluster
  deploy_system_manager = false

  # ── DNS ───────────────────────────────────────────────────────────────
  createDNSRecords = false   # Set true if Terraform should manage Cloud DNS records

  # ── Encryption ────────────────────────────────────────────────────────
  useCustomerManagedEncryptionKey = false   # Set true for CMEK (recommended for production)

  # ── Networking ────────────────────────────────────────────────────────
  gke_config = {
    private_endpoint = true
    master_authorized_networks = [
      { cidr_block = "10.0.0.0/16", display_name = "cluster vpc primary range" },
      { cidr_block = "10.2.0.0/16", display_name = "cluster vpc services range" },
      { cidr_block = "10.4.0.0/16", display_name = "cluster vpc pod range" },
    ]
  }
}
If using SAML, place your IdP’s x509 certificate (without BEGIN/END lines) as x509.cer in the deployment directory. Update the samlConfigSecret in the module "sgp" block:
module "sgp" {
  source = "../../modules/sgp/"
  samlConfigSecret = jsonencode({
    "id" = local.workspaceID
    "samlConfiguration" = {
      "entityId" = "https://auth.${local.deploymentURL}"
      "x509Cert" = file("${path.module}/x509.cer")
      "ssoUrl"   = "<your-idp-sso-url>"
      "attributeMappings" = {
        "email"     = "<email-attribute-from-idp>"
        "firstName" = "<first-name-attribute-from-idp>"
        "lastName"  = "<last-name-attribute-from-idp>"
      }
    }
  })
  # ... other variables
}
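If your IdP exports the certificate as a standard PEM file, the armor lines can be stripped with sed to produce x509.cer. A truncated sample body is embedded below so the snippet is self-contained; run the sed line against your real export:

```shell
# Sample PEM export (truncated body, for illustration only)
cat > idp-cert.pem <<'EOF'
-----BEGIN CERTIFICATE-----
MIICsample0000base64body0000
-----END CERTIFICATE-----
EOF

# Strip the BEGIN/END armor lines, leaving only the base64 body
sed '/-----BEGIN CERTIFICATE-----/d; /-----END CERTIFICATE-----/d' idp-cert.pem > x509.cer
cat x509.cer
```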
If using OIDC, update oidcConfigSecret instead:
oidcConfigSecret = jsonencode({
  "id"               = local.workspaceID
  "clientId"         = "<client-id-from-idp>"
  "clientSecret"     = "<client-secret-from-idp>"
  "issuer"           = "<issuer-url>"
  "authorizationUrl" = "<authorization-url>"
  "tokenUrl"         = "<token-url>"
  "userInfoUrl"      = "<user-info-url>"
})

Step 4: Provision Infrastructure via Terraform

From the deployment directory, initialize and apply:
terraform init -upgrade

terraform plan -out=tfplan

# Inspect the plan before proceeding
terraform show tfplan

terraform apply tfplan
This step can take 30–60 minutes due to GKE cluster provisioning and Cloud SQL setup. After apply completes, connect to the cluster:
gcloud container clusters get-credentials sgp-<workspace_id>-kubernetes-cluster \
  --region <gcp-region> \
  --project <your-gcp-project-id>
If gke_config.private_endpoint = true, the cluster API server is only accessible from within the VPC. Use the provisioned bastion host (via IAP) or a network-connected runner to access it. Set bastion_enabled = true and add your user email to bastion_iap_members in the security_compliance block to enable bastion access.

Step 5: Bootstrap the Cluster

SGP System Manager orchestrates SGP service deployment. If you set deploy_system_manager = false in the previous step, bootstrap it manually using the provided script. Edit manual-helm-install.sh in your deployment directory with the correct values:
PROJECT_ID="<your-gcp-project-id>"
PROJECT_NUMBER="<your-gcp-project-number>"    # From: gcloud projects describe <project-id> --format='value(projectNumber)'
BASE_REPOSITORY="<region>-docker.pkg.dev/<project-id>/sgp-<workspace_id>-docker-repository"
SYSTEM_MANAGER_IMAGE_TAG="<system_manager_image_tag>"
OFFLINE_MODE=true

# Obtain from Cloud SQL instance → "Connect to this instance" in the console, or from Terraform state
SPICEDB_IP_ADDRESS="<cloud-sql-private-ip>"
SPICEDB_PASSWORD="<spicedb-db-password>"
TEMPORAL_DB_PASSWORD="<temporal-db-password>"
Then run the bootstrap script:
chmod +x manual-helm-install.sh
./manual-helm-install.sh
This installs System Manager into the cluster. System Manager will then begin reconciling the desired_state.json file and deploying SGP services automatically.

Desired State

The desired_state.json file in your deployment directory defines which SGP packs System Manager installs. Update it to reference your GCP Artifact Registry:
{
  "version": "0.1",
  "packs": [
    { "name": "flux" },
    {
      "name": "sgp-helm-repository",
      "properties": {
        "helm-repo": {
          "url": "oci://<region>-docker.pkg.dev/<project-id>/sgp-<workspace_id>-helm-repository"
        }
      }
    },
    { "name": "istio" },
    {
      "name": "sgp-base",
      "properties": {
        "helm-base": {
          "value_overrides": {
            "refreshRegcred": { "enabled": false },
            "gcp": {
              "presharedCertificates": ["sgp-<workspace_id>-ssl-certificate"]
            }
          }
        }
      }
    },
    { "name": "spicedb" },
    { "name": "identity-service" },
    { "name": "temporalf" },
    { "name": "sgp-apps" }
  ]
}
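Because System Manager reads this file directly, a stray comma can stall reconciliation. A quick local syntax check catches that before rollout (python3's stdlib parser is used here; jq works equally well). The cut-down file below exists only so the snippet runs standalone; check your real file instead:

```shell
# Cut-down desired_state.json so the snippet runs standalone;
# point the check at your real file in practice
cat > desired_state.json <<'EOF'
{ "version": "0.1", "packs": [ { "name": "flux" } ] }
EOF

python3 -m json.tool desired_state.json > /dev/null \
  && echo "desired_state.json: valid JSON" \
  || echo "desired_state.json: syntax error"
```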
See Step 6 for the SSL certificate options and how to set the correct value in the gcp block above.

Step 6: Configure SSL Certificates

Three options are available, in order of preference.

Option 1: Certificate Manager

Provisions a wildcard certificate (*.your-domain.com) via DNS authorization, covering all subdomains automatically. Enable it in main.tf:
module "sgp" {
  source = "../../modules/sgp/"
  enable_certificate_manager = true
  # ...
}
After terraform apply, note the two outputs:
  • certificate_manager_dns_auth_record — a CNAME you must add to your DNS zone for domain validation
  • certificate_map_name — typically sgp-<workspace_id>-cert-map
Add the CNAME to your DNS provider, then update desired_state.json:
{
  "name": "sgp-base",
  "properties": {
    "helm-base": {
      "value_overrides": {
        "gcp": {
          "certificateMapName": "sgp-<workspace_id>-cert-map"
        }
      }
    }
  }
}
The certificate typically provisions within 10–60 minutes of the DNS record being in place.

Option 2: Google Managed Certificate

Google provisions and auto-renews per-subdomain certificates. Does not require uploading a certificate, but requires DNS to resolve to the load balancer IP before provisioning, and does not support wildcards. This is the default when neither certificateMapName nor presharedCertificates is set in the sgp-base pack’s gcp block. Simply omit those keys from desired_state.json.

Option 3: Preshared Certificate

Use a certificate you manage and upload to Google Cloud. Required when DNS is not publicly resolvable or Certificate Manager is not available. Upload the certificate:
gcloud compute ssl-certificates create sgp-<workspace_id>-ssl-certificate \
  --certificate=fullchain1.pem \
  --private-key=privkey1.pem \
  --project <your-gcp-project-id>
Reference it in desired_state.json:
{
  "name": "sgp-base",
  "properties": {
    "helm-base": {
      "value_overrides": {
        "gcp": {
          "presharedCertificates": ["sgp-<workspace_id>-ssl-certificate"]
        }
      }
    }
  }
}
Preshared certificates must be renewed manually; Let’s Encrypt certificates, for example, expire after 90 days.
After changing the SSL configuration in desired_state.json, restart System Manager to apply it:
kubectl rollout restart deployment sgp-system-manager -n sgp-system-manager
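If you want to exercise the preshared-certificate flow before a CA-issued pair is available, a self-signed stand-in can be generated with openssl (the domain is illustrative, and browsers will warn on self-signed certificates; replace it before production use):

```shell
# Generate a self-signed stand-in matching the filenames used in the
# upload command above (illustrative domain; not for production)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=*.example.com" \
  -keyout privkey1.pem -out fullchain1.pem

# Confirm the certificate parses and show its subject
openssl x509 -in fullchain1.pem -noout -subject
```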

Step 7: Configure DNS

Get the external IP address of the load balancer:
# From inside the cluster:
kubectl get svc istio-ingress --namespace istio-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Or from gcloud (the global address is created by Terraform):
gcloud compute addresses describe sgp-<workspace_id>-external-ip \
  --global \
  --project <your-gcp-project-id> \
  --format="value(address)"
Create DNS A records in your DNS provider pointing to this IP for:
  • <deployment_url> (apex)
  • api.<deployment_url>
  • auth.<deployment_url>
  • app.<deployment_url>
If createDNSRecords = true in your main.tf locals, Terraform manages a Cloud DNS zone and creates these records automatically. Retrieve the name servers from the Terraform output:
terraform output name_servers
Then delegate the zone by configuring these as NS records at your domain registrar.

Step 8: Verify the Deployment

Wait for all services to be ready:
kubectl get helmreleases -A  # All should show Ready=True
kubectl get pods -A          # All should be Running or Completed
System Manager continuously reconciles the desired state. If a HelmRelease shows Ready=False, check its events:
kubectl describe helmrelease <name> -n <namespace>
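To surface only the releases needing attention, the READY column can be filtered with awk. The snippet below embeds sample output so it runs standalone; in practice, pipe real kubectl get helmreleases -A output in (the column layout is assumed to be NAMESPACE NAME AGE READY STATUS):

```shell
# List HelmReleases that are not Ready. Sample output is embedded so the
# snippet runs standalone; pipe `kubectl get helmreleases -A` in practice.
cat <<'EOF' | awk 'NR > 1 && $4 != "True" { print $1 "/" $2 }'
NAMESPACE    NAME        AGE   READY   STATUS
sgp-apps     sgp-apps    5m    True    Release reconciliation succeeded
istio        istio       5m    False   dependency not ready
EOF
```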

Accessing the Platform

Once all HelmReleases are ready and DNS resolves correctly, navigate to https://<workspace_id>.workspace.egp.scale.com (or your custom domain) and authenticate via your configured identity provider.