On-Prem Deployment Instructions (GCP)
This guide will help you and your team install Prequel on GCP infrastructure you control. Prequel deployments rely on a few tools:
- Terraform v1.0.x for provisioning services required by Prequel
- Helm 3.9.4+ for installing and upgrading Prequel
- Kubernetes CLI (kubectl) for managing and inspecting the Kubernetes cluster
Before we get started
- Validate that you have access to the Terraform directory and Helm chart we sent over. If not, please email or Slack [email protected] to request access.
- Create the dedicated cloud project where you'd like Prequel to run. We typically recommend creating a new cloud project for this, which allows all resources to be fully sandboxed from any other existing infrastructure.
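If you don't already have a dedicated project, a minimal sketch of creating one with the gcloud CLI is below; the project ID, display name, and billing account are placeholders, and you'll need permission to create projects and link billing.
# Create a dedicated project for Prequel (placeholder project ID)
gcloud projects create {your_prequel_project_id} --name="Prequel"
# Link a billing account so Terraform can provision resources in the project
gcloud billing projects link {your_prequel_project_id} --billing-account={your_billing_account_id}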
Setting up the infrastructure
- Take a look through variables.tf and fill in the required values. We have a terraform.tfvars.example file that you can use as a reference.
- Perform a Terraform dry run and double-check that everything looks good.
terraform plan
- Apply the main.tf configuration with Terraform. This will create all the infrastructure necessary for Prequel to run. Save the output variables; you'll need them later.
terraform apply
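If you're starting from a fresh checkout of the Terraform directory, the full sequence typically looks like the sketch below; the terraform.tfvars filename and the prequel-ingress-ip output name are assumptions based on this guide, so adjust them if your copy differs.
# Download providers and modules
terraform init
# Dry run against your variables file
terraform plan -var-file=terraform.tfvars
# Create the infrastructure
terraform apply -var-file=terraform.tfvars
# Re-print the outputs later if needed (e.g. the ingress IP used in the next step)
terraform output prequel-ingress-ip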
- Update your DNS records to point to the prequel-ingress-ip address returned by the Terraform script. You'll need to create three DNS records.
prequel.your-domain.com # the domain you'll use when hitting the API.
prequel-admin.your-domain.com # the UI that admins on your team will use to manage Prequel.
data-connect.your-domain.com # the domain your customers will use to connect their data warehouse
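If the zone for your domain happens to live in Cloud DNS, one way to create these records is sketched below; {your_zone_name} and {prequel_ingress_ip} are placeholders, and any DNS provider works as long as the three hostnames resolve to the ingress IP.
# Create A records pointing each hostname at the ingress IP returned by Terraform
gcloud dns record-sets create prequel.your-domain.com. --zone={your_zone_name} --type=A --ttl=300 --rrdatas={prequel_ingress_ip}
gcloud dns record-sets create prequel-admin.your-domain.com. --zone={your_zone_name} --type=A --ttl=300 --rrdatas={prequel_ingress_ip}
gcloud dns record-sets create data-connect.your-domain.com. --zone={your_zone_name} --type=A --ttl=300 --rrdatas={prequel_ingress_ip}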
Setting up Workload Identity
Important: Workload Identity must be configured to allow Kubernetes service accounts to assume the GCP service account created by Terraform. This is required for Prequel services to access GCP resources.
- Verify that Workload Identity is enabled on your GKE cluster. If you used the provided Terraform configuration, this should already be enabled. You can check with:
# For zonal clusters
gcloud container clusters describe {your_cluster_name} --zone={your_zone} --project={your_cluster_project_id} --format="value(workloadIdentityConfig.workloadPool)"
# For regional clusters
gcloud container clusters describe {your_cluster_name} --region={your_region} --project={your_cluster_project_id} --format="value(workloadIdentityConfig.workloadPool)"
The output should show {your_cluster_project_id}.svc.id.goog. If the output is empty, Workload Identity is not enabled.
Note: If your GKE cluster is in a different project than your service account, replace {your_cluster_project_id} with the actual project ID where your cluster is deployed.
- Create the IAM policy binding to allow the Kubernetes service account to impersonate the GCP service account:
# Replace the placeholder values with your actual values
# {your_cluster_project_id} = Project ID where your GKE cluster is deployed
# {your_service_account_project_id} = Project ID where your service account is created
# {service_account_name} = Name of your service account (from Terraform)
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:{your_cluster_project_id}.svc.id.goog[default/prequel-datafeed]" \
{service_account_name}@{your_service_account_project_id}.iam.gserviceaccount.com
# Also bind for the animalcontrol service account
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:{your_cluster_project_id}.svc.id.goog[default/animalcontrol]" \
{service_account_name}@{your_service_account_project_id}.iam.gserviceaccount.com
- Verify the workload identity binding:
# Test that the binding was created successfully
gcloud iam service-accounts get-iam-policy {service_account_name}@{your_service_account_project_id}.iam.gserviceaccount.com
You should see the workload identity bindings in the output.
Deploying Prequel
- Authenticate to the Kubernetes cluster created in step 5.
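For a GKE cluster, this is typically done with gcloud; the placeholders below are the same ones used in the Workload Identity section.
# For zonal clusters
gcloud container clusters get-credentials {your_cluster_name} --zone={your_zone} --project={your_cluster_project_id}
# For regional clusters
gcloud container clusters get-credentials {your_cluster_name} --region={your_region} --project={your_cluster_project_id}
# Confirm kubectl is pointed at the right cluster
kubectl config current-context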
- Get a hold of the GitHub App Private Key for your deployment. Ask your Prequel contact to send it over.
- Download the key and rename the file to privatekey. It is important that the file name is exactly correct here; otherwise, the cluster won't come up properly.
mv {download_path} ~/Downloads/privatekey
- Create a Kubernetes secret from it.
kubectl create secret generic github-app-private-key --from-file=~/Downloads/privatekey
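To sanity-check that the key made it into the cluster, you can describe the secret; this shows the key name and size, not its contents.
# The output should list a privatekey entry with a non-zero byte count
kubectl describe secret github-app-private-key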
- Create the following Kubernetes secrets required by the Prequel deployment.
# Generate and store secure random values in environment variables
export POSTGRES_PASSWORD={your_db_password}
export WORKOS_API_KEY={workos_api_key}
export SSH_SALT={your_generated_ssh_salt}
export ADMIN_API_KEY={your_generated_admin_api_key}
export AUTH_TOKEN_KEY={your_generated_auth_token_key}
# Create secret for Postgres DB credentials
kubectl create secret generic datafeed-postgres \
--from-literal=password="${POSTGRES_PASSWORD}"
# Create secret for SSH salt (used for hashing public keys)
kubectl create secret generic datafeed-ssh-salt \
--from-literal=salt="${SSH_SALT}"
# Create secret for Shepherd service
kubectl create secret generic datafeed-shepherd \
--from-literal=apiKey="${ADMIN_API_KEY}" \
--from-literal=authToken="${AUTH_TOKEN_KEY}" \
--from-literal=workOSApiKey="${WORKOS_API_KEY}"
Make sure to store these generated values securely for future maintenance and troubleshooting (see the sketch after this list for one way to generate the random values). Each value is:
- datafeed-postgres.password: The password for your Postgres database.
- datafeed-ssh-salt.salt: A random 32-char string used for hashing SSH public keys.
- datafeed-shepherd.workOSApiKey: The WorkOS API key provided to you by Prequel.
- datafeed-shepherd.apiKey: A random 32-char string used for admin API authentication.
- datafeed-shepherd.authToken: A random 32-char string used to encrypt/decrypt authentication tokens.
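One way to generate the three random 32-character values is with openssl, which is an assumption on our part; any generator that produces 32 characters of sufficient entropy is fine.
# 16 random bytes rendered as 32 hex characters each
export SSH_SALT=$(openssl rand -hex 16)
export ADMIN_API_KEY=$(openssl rand -hex 16)
export AUTH_TOKEN_KEY=$(openssl rand -hex 16)
# Print them once so you can store them in your password manager or secret store
echo "$SSH_SALT $ADMIN_API_KEY $AUTH_TOKEN_KEY"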
- Install the cert-manager Helm chart.
helm install cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.18.2 \
--set crds.enabled=true \
--set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerGroup=cert-manager.io
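Before moving on, it's worth confirming the cert-manager pods come up cleanly.
# The controller, cainjector, and webhook pods should all reach Running
kubectl get pods --namespace cert-manager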
- Fill in gcp_on_prem_values_override.yaml based on your configuration (a sketch of the secret-related values follows this list). The following values should be set from the secrets created in step 14:
- postgresDb.secretName: datafeed-postgres, or the name of the secret created for the Postgres DB.
- postgresDb.passwordSecretKey: password, or the key in the Postgres DB secret that contains the password.
- sshSaltSecretName: datafeed-ssh-salt, or the name of the secret created for the SSH salt.
- sshSaltSecretKey: salt, or the key in the SSH salt secret that contains the salt.
- shepherd.secretName: datafeed-shepherd, or the name of the secret created for the Shepherd service.
- shepherd.workOS.apiKeySecretKey: workOSApiKey, or the key in the Shepherd secret that contains the WorkOS API key provided to you by Prequel.
- shepherd.apiKeySecretKey: apiKey, or the key in the Shepherd secret that contains the admin API key.
- shepherd.authTokenSecretKey: authToken, or the key in the Shepherd secret that contains the authentication token key.
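As a reference, here is a minimal sketch of just the secret-related excerpt of the override file, assuming the default secret names and keys from step 14 and the usual Helm convention that dotted key names map to nested YAML; the rest of the file (domains, images, etc.) depends on your configuration and is not shown.
# Secret-related values only; merge these into gcp_on_prem_values_override.yaml rather than duplicating existing keys
cat >> gcp_on_prem_values_override.yaml <<'EOF'
postgresDb:
  secretName: datafeed-postgres
  passwordSecretKey: password
sshSaltSecretName: datafeed-ssh-salt
sshSaltSecretKey: salt
shepherd:
  secretName: datafeed-shepherd
  apiKeySecretKey: apiKey
  authTokenSecretKey: authToken
  workOS:
    apiKeySecretKey: workOSApiKey
EOF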
- Install the Prequel Helm chart.
helm install prequel datafeed-1.1.105.tgz -f gcp_on_prem_values_override.yaml
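Once the install completes, you can check that the release and its pods are healthy; this assumes the default namespace used elsewhere in this guide.
# Check the Helm release status
helm status prequel
# Watch the Prequel pods come up
kubectl get pods --watch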
You're all set
Notify your Prequel counterpart that the deployment is ready to roll. They'll guide you through the next steps: configuring your first source.
Updating Prequel
We'll notify you when a new release is available, and provide you with the release tag. You can then run the following command to update your deployment to the new release.
helm upgrade prequel datafeed-1.1.105.tgz --reuse-values --set image.tag={provided_release_tag}
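You can confirm the upgrade took effect from the release history and the images the deployments are now running.
# Confirm a new revision was deployed
helm history prequel
# The IMAGES column should show the provided release tag
kubectl get deployments -o wide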