Redshift Serverless
Configuring your Redshift Serverless destination.
Prerequisites
- If your Redshift security posture requires IP whitelisting, have the data syncing service's static IP available during the following steps. It will be required in Step 2.
- By default, Redshift authentication uses role-based access. To grant access, you will need a trust policy prepopulated with the data syncing service's identifier. It should look similar to the following JSON object, with the proper service account identifier:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRoleWithWebIdentity"
      ],
      "Principal": {
        "Federated": "accounts.google.com"
      },
      "Condition": {
        "StringEquals": {
          "accounts.google.com:oaud": "<some_organization_identifier>",
          "accounts.google.com:sub": "<some_service_account_identifier>"
        }
      }
    }
  ]
}
Network allowlisting
- Cloud Hosted (US): 35.192.85.117/32
- Cloud Hosted (EU): 104.199.49.149/32
- If private-cloud or self-hosted, contact support for the static egress IP.
How authentication works
Two identities involved
- Redshift database user (in your Redshift workgroup): You create this limited user in Step 1. It owns database-level privileges (e.g., create schema, temporary tables) needed to load and transform data.
- AWS IAM Role (in your AWS account): You create this role in Step 3. It holds S3 permissions for staging and allows the data syncing service to call redshift-serverless:GetCredentials on your Redshift Serverless workgroup via the trust policy. We assume this role to obtain short-lived credentials; no long-lived secrets are required.
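To make the flow concrete, here is a minimal sketch, assuming boto3, of how an assumed role exchanges a federated web-identity token for short-lived Redshift Serverless credentials. The role ARN, workgroup, database, and token values are placeholders, and this is illustrative rather than the service's actual code.
# Illustrative only: exchange a federated token for short-lived Redshift credentials.
import boto3

ROLE_ARN = "arn:aws:iam::123456789012:role/transfer-role"  # placeholder role ARN
WORKGROUP = "default-workgroup"                            # placeholder workgroup name
DATABASE = "dev"                                           # placeholder database

# 1. Assume the IAM role with the Google-issued web-identity token (no long-lived secrets).
sts = boto3.client("sts")
assumed = sts.assume_role_with_web_identity(
    RoleArn=ROLE_ARN,
    RoleSessionName="data-sync",
    WebIdentityToken="<google-issued-oidc-token>",
)
creds = assumed["Credentials"]

# 2. Use the temporary role credentials to request ephemeral database credentials
#    for the workgroup via redshift-serverless:GetCredentials.
rs = boto3.client(
    "redshift-serverless",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
db_creds = rs.get_credentials(workgroupName=WORKGROUP, dbName=DATABASE)
print(db_creds["dbUser"], db_creds["expiration"])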
Step 1: Create a Limited User in Redshift
- Connect to Redshift using a SQL client.
- Execute the following query to create a user to write the data (replace <password> with a password of your choice).
CREATE USER <username> PASSWORD '<password>';
Creating a user without a password
Role-based auth does not require a password. You may create the user using:
CREATE USER <username> PASSWORD DISABLE;
- Grant the user CREATE and TEMPORARY privileges on the database. CREATE allows the service to create new schemas, and TEMPORARY allows the service to create temporary tables.
GRANT CREATE, TEMPORARY ON DATABASE <database> TO <username>;
The schema will be created during the first sync
The schema name supplied as part of Step 4 will be created during the first connection. It does not need to be created manually in the destination ahead of time.
🚧 If the schema already exists
By default, the service creates a new schema based on the destination configuration. If you prefer to create the schema yourself before connecting the destination, you must ensure that the writer user has the proper permissions on the schema, using:
GRANT ALL ON SCHEMA <schema> TO <username>;
Once you've provided the GRANT ALL permission on the schema, you can safely remove the CREATE permission on the database (but you must retain the TEMPORARY permission on the database).
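If you prefer to run the Step 1 statements from a script instead of an interactive SQL client, the following is a minimal sketch assuming the psycopg2 driver; the endpoint, database, admin credentials, and writer_user name are placeholders of our own.
# A minimal sketch of running the Step 1 statements from a script.
import psycopg2

DATABASE = "dev"             # your Redshift database
WRITER_USER = "writer_user"  # hypothetical name for the limited user

conn = psycopg2.connect(
    host="default-workgroup.123456789012.us-east-1.redshift-serverless.amazonaws.com",
    port=5439,
    dbname=DATABASE,
    user="admin",
    password="<admin-password>",
    sslmode="require",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Role-based auth does not require a password for the writer user.
    cur.execute(f"CREATE USER {WRITER_USER} PASSWORD DISABLE;")
    cur.execute(f"GRANT CREATE, TEMPORARY ON DATABASE {DATABASE} TO {WRITER_USER};")
    # Confirm the grant took effect.
    cur.execute(f"SELECT has_database_privilege('{WRITER_USER}', '{DATABASE}', 'CREATE');")
    print(cur.fetchone())
conn.close()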
Step 2: Whitelist connection
- In the Redshift console, click Workgroups, and make a note of the workgroup name.
- Select the workgroup you would like to connect.
- In the General information pane, make note of the Endpoint details. You may need to use the copy icon to capture the full endpoint and port number.
- Click the Properties tab.
- Scroll down to the Network and security settings section.
- In the VPC security group field, select a security group to open it.
- In the Security Groups window, click Inbound rules.
- Click Edit inbound rules.
- In the Edit inbound rules window, follow the steps below to create a custom TCP rule for the static IP:
a. Select Custom TCP in the drop-down menu.
b. Enter your Redshift port number (likely 5439).
c. Enter the static IP.
d. Click Add rule.
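If you manage security groups with infrastructure code, the same inbound rule can be added through the AWS SDK. This is an optional sketch assuming boto3; the security group ID and region are placeholders, and the CIDR shown is the US static IP from the allowlisting note above.
# Optional: add the inbound rule via the AWS SDK instead of the console.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # use your workgroup's region
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # the VPC security group from the workgroup's properties
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5439,  # your Redshift port
            "ToPort": 5439,
            "IpRanges": [
                {"CidrIp": "35.192.85.117/32", "Description": "Data syncing service (US)"},
            ],
        }
    ],
)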
Step 3: Create a staging bucket
Create staging bucket
- Navigate to the S3 service page.
- Click Create bucket.
- Enter a Bucket name and modify any of the default settings as desired. Note: Object Ownership can be set to "ACLs disabled" and Block Public Access settings for this bucket can be set to "Block all public access" as recommended by AWS. Make note of the Bucket name and AWS Region.
- Click Create bucket.
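If you prefer to script this step, the following is a hedged sketch assuming boto3; the bucket name and region are placeholders. It mirrors the console defaults noted above, with public access blocked.
# Optional: create the staging bucket via the AWS SDK.
import boto3

BUCKET = "my-transfer-staging-bucket"  # placeholder bucket name

s3 = boto3.client("s3", region_name="us-west-2")
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},  # omit this argument for us-east-1
)
# Block all public access, as recommended by AWS.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)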
Create policy
- Navigate to the IAM service page, click on the Policies navigation tab, and click Create policy.
- Click the JSON tab, and paste the following policy, being sure to replace BUCKET_NAME with the name of the bucket chosen above, and REGION_NAME, ACCOUNT_ID, and WORKGROUP_NAME_OR_ID with the proper Redshift Serverless values.
- Note: the first bucket permission in the list applies to BUCKET_NAME, whereas the second permission applies only to the bucket's contents (BUCKET_NAME/*), an important distinction.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "redshift-serverless:GetCredentials"
      ],
      "Resource": [
        "arn:aws:redshift-serverless:REGION_NAME:ACCOUNT_ID:workgroup/WORKGROUP_NAME_OR_ID"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}
- Click through to the Review step, choose a name for the policy, for example, transfer-service-policy (this will be referenced in the next step), add a description, and click Create policy.
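Optionally, the same policy can be created with the AWS SDK. This sketch assumes boto3 and that the JSON above, with placeholders substituted, has been saved locally as transfer-service-policy.json (a hypothetical filename).
# Optional: create the permissions policy via the AWS SDK.
import boto3

iam = boto3.client("iam")
with open("transfer-service-policy.json") as f:
    policy_document = f.read()

resp = iam.create_policy(
    PolicyName="transfer-service-policy",
    PolicyDocument=policy_document,
    Description="S3 staging and Redshift Serverless credentials for the data syncing service",
)
print(resp["Policy"]["Arn"])  # keep this ARN handy for the role in the next step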
Create role
- Navigate to the IAM service page.
- Navigate to the Roles navigation tab, and click Create role.
- Select Custom trust policy and paste the provided trust policy (from the prerequisite) to allow AssumeRole access to this role. Click Next.
- Add the permissions policy created above, and click Next.
- Enter a Role name, for example, transfer-role, and click Create role.
- Once successfully created, search for the created role in the Roles list, click the role name, and make a note of the ARN value.
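As with the policy, the role can be created programmatically. This is a sketch assuming boto3, where trust-policy.json is a hypothetical local copy of the trust policy from the prerequisites and the policy ARN comes from the previous step.
# Optional: create the role and attach the policy via the AWS SDK.
import boto3

iam = boto3.client("iam")
with open("trust-policy.json") as f:
    trust_policy = f.read()

role = iam.create_role(
    RoleName="transfer-role",
    AssumeRolePolicyDocument=trust_policy,
    Description="Assumed by the data syncing service for Redshift Serverless loads",
)
iam.attach_role_policy(
    RoleName="transfer-role",
    PolicyArn="arn:aws:iam::<ACCOUNT_ID>:policy/transfer-service-policy",  # ARN from the Create policy step
)
print(role["Role"]["Arn"])  # share this ARN in Step 4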
Alternative authentication method: AWS User with HMAC Access Key ID & Secret Access Key
Role-based authentication is the preferred authentication mode for Redshift based on AWS recommendations; however, HMAC Access Key ID & Secret Access Key is an alternative authentication method that can be used if preferred.
- Navigate to the IAM service page.
- Navigate to the Users navigation tab, and click Add users.
- Enter a User name for the service, for example, transfer-service, and click Next. Under Select AWS access type, select the Access key - Programmatic access option. Click Next: Permissions.
- Click the Attach existing policies directly option, and search for the name of the policy created in the previous step. Select the policy, and click Next: Tags.
- Click Next: Review and click Create user.
- In the Success screen, record the Access key ID and the Secret access key.
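For completeness, the HMAC alternative can also be scripted. This is a sketch assuming boto3; the user name and policy ARN are placeholders, and this path should only be used when role-based authentication is not an option.
# Optional: create the HMAC user, attach the policy, and generate an access key.
import boto3

iam = boto3.client("iam")
iam.create_user(UserName="transfer-service")
iam.attach_user_policy(
    UserName="transfer-service",
    PolicyArn="arn:aws:iam::<ACCOUNT_ID>:policy/transfer-service-policy",
)
key = iam.create_access_key(UserName="transfer-service")
# Record these once; the secret cannot be retrieved again later.
print(key["AccessKey"]["AccessKeyId"], key["AccessKey"]["SecretAccessKey"])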
Step 4: Add your destination
Securely share your username, host, database, workgroup, your chosen schema, IAM role ARN, and staging bucket details with us to complete the connection.
Permissions checklist
- Redshift database user exists and has CREATE and TEMPORARY on the database. If you pre-created the schema, ensure GRANT ALL ON SCHEMA <schema> TO <username>.
- IAM role trust policy allows the data syncing service to assume the role.
- IAM policy includes:
  - redshift-serverless:GetCredentials on your target workgroup ARN (Serverless uses workgroups, not clusters).
  - S3 ListBucket on arn:aws:s3:::BUCKET_NAME.
  - S3 GetObject, PutObject, DeleteObject on arn:aws:s3:::BUCKET_NAME/*.
- Network allowlisting (if enforced) permits the service's static egress IP/CIDR on the Redshift port (typically 5439).
FAQ
Q: How is the Redshift connection secured?
A: We use role-based authentication with your AWS IAM Role. The data syncing service assumes your role to obtain short-lived database credentials, and network access can be constrained by allowlisting the static egress IPs noted above.
Q: Why is an S3 bucket required?
A: Redshift's high-throughput path loads data from S3 using COPY. We stage files briefly in your bucket to maximize throughput and reliability. Files are cleaned up after load. We require ListBucket to enumerate staged files and GetObject/PutObject/DeleteObject to write, read back, and delete staged files.
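For illustration only, here is a schematic sketch of that staging pattern, assuming boto3 and psycopg2; the bucket, key, table, role ARN, and connection values are placeholders, and the service's actual implementation may differ.
# Schematic staging pattern: upload to S3, load with COPY, remove the staged file.
import boto3
import psycopg2

BUCKET = "my-transfer-staging-bucket"                          # placeholder staging bucket
KEY = "staging/orders/batch-0001.csv.gz"                       # placeholder staged object
IAM_ROLE_ARN = "arn:aws:iam::123456789012:role/transfer-role"  # placeholder role ARN

s3 = boto3.client("s3")
s3.upload_file("batch-0001.csv.gz", BUCKET, KEY)               # stage the file

conn = psycopg2.connect(host="<workgroup-endpoint>", port=5439,
                        dbname="<database>", user="<username>", password="<password>")
with conn, conn.cursor() as cur:
    # COPY with IAM_ROLE requires the role to be associated with the Serverless namespace;
    # temporary credentials from an assumed role can be supplied instead via
    # ACCESS_KEY_ID / SECRET_ACCESS_KEY / SESSION_TOKEN.
    cur.execute(f"""
        COPY my_schema.orders
        FROM 's3://{BUCKET}/{KEY}'
        IAM_ROLE '{IAM_ROLE_ARN}'
        FORMAT AS CSV GZIP;
    """)

s3.delete_object(Bucket=BUCKET, Key=KEY)                       # clean up the staged file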
Q: Why do you need redshift-serverless:GetCredentials on the workgroup?
A: In Serverless, temporary database credentials are issued per workgroup. Granting this action on the target workgroup allows our assumed role to obtain ephemeral credentials for the database user without long-lived secrets, improving security and auditability.
Q: What are the oaud vs sub IDs used for?
A: These are identity claims used in the IAM trust policy when federating from GCP to AWS. sub uniquely identifies our Google principal in federation. oaud is an additional claim used to bind role assumption to your organization.
Q: Why am I getting authentication errors with Redshift Serverless?
A: Common causes:
- Missing or incorrect permission on redshift-serverless:GetCredentials (ensure it targets the correct workgroup ARN and region/account).
- Trust policy mismatch (the data syncing service's principal isn't permitted to assume your role).
- Using a Cluster ARN or redshift:GetClusterCredentials instead of a Serverless workgroup and redshift-serverless:GetCredentials.
- Propagation delay: IAM changes can take a few minutes to apply. Retry after 5-10 minutes.
Q: Do I need to pre-create the schema?
A: No. The schema provided in the destination configuration is created automatically on first sync. If you pre-create it, grant ALL on the schema to the writer user and you may remove the database-level CREATE permission (retain TEMPORARY).