Redshift
Configuring your Redshift destination.
Prerequisites
- If your Redshift security posture requires IP whitelisting, have our data-syncing service's static IP available; it will be required in Step 2.
- By default, Redshift authentication uses role-based access. You will need the trust policy prepopulated with the data-syncing service's identifier to grant access. It should look similar to the following JSON object with a proper service account identifier:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRoleWithWebIdentity"
      ],
      "Principal": {
        "Federated": "accounts.google.com"
      },
      "Condition": {
        "StringEquals": {
          "accounts.google.com:sub": "<some_service_account_identifier>"
        }
      }
    }
  ]
}
```
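Before creating the role, it can be useful to sanity-check that the trust policy actually grants `sts:AssumeRoleWithWebIdentity` to the expected federated subject. This is a minimal, stdlib-only sketch; the `trust_policy_allows` helper and the sample subject value are illustrative, not part of the service:

```python
import json

def trust_policy_allows(policy_json: str, expected_sub: str) -> bool:
    """Return True if the trust policy has an Allow statement granting
    AssumeRoleWithWebIdentity to the Google-federated principal whose
    subject matches expected_sub."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if (stmt.get("Effect") == "Allow"
                and "sts:AssumeRoleWithWebIdentity" in actions
                and stmt.get("Principal", {}).get("Federated") == "accounts.google.com"
                and stmt.get("Condition", {}).get("StringEquals", {})
                        .get("accounts.google.com:sub") == expected_sub):
            return True
    return False

# Illustrative policy with a placeholder subject filled in.
policy = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["sts:AssumeRoleWithWebIdentity"],
    "Principal": {"Federated": "accounts.google.com"},
    "Condition": {"StringEquals": {"accounts.google.com:sub": "service-account-123"}}
  }]
}"""
```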
Step 1: Create a Limited User in Redshift
- Connect to Redshift using a SQL client.
- Execute the following statement to create a user to write the data (replace `<password>` with a password of your choice).

```sql
CREATE USER <username> PASSWORD '<password>';
```
Creating a user without a password: role-based authentication does not require a password. You may instead create the user using:

```sql
CREATE USER <username> PASSWORD DISABLE;
```
- Grant the user `CREATE` and `TEMPORARY` privileges on the database. `CREATE` allows the service to create new schemas, and `TEMPORARY` allows the service to create temporary tables.

```sql
GRANT CREATE, TEMPORARY ON DATABASE <database> TO <username>;
```
The schema will be created during the first sync
The schema name supplied as part of Step 4 will be created during the first connection. It does not need to be created manually in the destination ahead of time.
🚧 If the schema already exists

By default, the service creates a new schema based on the destination configuration. If you prefer to create the schema yourself before connecting the destination, you must ensure that the writer user has the proper permissions on the schema:

```sql
GRANT ALL ON SCHEMA <schema> TO <username>;
```

Once you've granted `ALL` on the schema, you can safely remove the `CREATE` privilege on the database (but you must retain the `TEMPORARY` privilege on the database).
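If you provision writer users for several databases, the Step 1 statements can be generated programmatically. The sketch below is illustrative (the `quote_ident` and `setup_statements` helpers are not part of any service API); it restricts identifiers to a conservative pattern to avoid quoting pitfalls:

```python
import re

def quote_ident(name: str) -> str:
    # Conservative allow-list; Redshift also accepts quoted identifiers,
    # but restricting to simple names keeps this sketch injection-safe.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"unsafe identifier: {name!r}")
    return name

def setup_statements(username: str, database: str) -> list:
    """Return the Step 1 statements for a password-less (role-based) user."""
    u, d = quote_ident(username), quote_ident(database)
    return [
        f"CREATE USER {u} PASSWORD DISABLE;",
        f"GRANT CREATE, TEMPORARY ON DATABASE {d} TO {u};",
    ]
```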
Step 2: Whitelist connection
- In the Redshift console, click Clusters, and make a note of the cluster name.
- Select the cluster you would like to connect to.
- In the General information pane, make note of the Endpoint details. You may need to use the copy icon to copy the full details to discover the full endpoint and port number.
- Click the Properties tab.
- Scroll down to the Network and security settings section.
- In the VPC security group field, select a security group to open it.
- In the Security Groups window, click Inbound rules.
- Click Edit inbound rules.
- In the Edit the Inbound rules window, follow the steps below to create custom TCP rules for the static IP:
a. Select Custom TCP in the drop-down menu.
b. Enter your Redshift port number (likely `5439`).
c. Enter the static IP.
d. Click Add rule.
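When copied from the console, the endpoint usually arrives as a single string such as `cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev`. Below is a small sketch (the helper names are illustrative) for splitting that string into the host and port needed for the inbound rule, plus an optional TCP probe you can run from an allowed machine to confirm the rule works:

```python
import socket

def parse_endpoint(endpoint: str):
    """Split 'host:port/database' (as copied from the Redshift console)
    into (host, port); the trailing '/database' part is optional."""
    hostport = endpoint.split("/", 1)[0]
    if ":" in hostport:
        host, port = hostport.rsplit(":", 1)
        return host, int(port)
    return hostport, 5439  # Redshift's default port

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """TCP-connect probe; run from a machine the inbound rule should allow."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```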
Step 3: Create a staging bucket
Create staging bucket
- Navigate to the S3 service page.
- Click Create bucket.
- Enter a Bucket name and modify any of the default settings as desired. Note: Object Ownership can be set to "ACLs disabled" and Block Public Access settings for this bucket can be set to "Block all public access" as recommended by AWS. Make note of the Bucket name and AWS Region.
- Click Create bucket.
Create policy
- Navigate to the IAM service page, click on the Policies navigation tab, and click Create policy.
- Click the JSON tab, and paste the following policy, being sure to replace `BUCKET_NAME` with the name of the bucket chosen above, and `REGION_NAME`, `ACCOUNT_ID`, `CLUSTER_NAME`, `USERNAME`, and `DATABASE_NAME` with the proper Redshift values.
- Note: the first bucket permission in the list applies to `BUCKET_NAME` itself, whereas the second permission applies only to the bucket's contents, `BUCKET_NAME/*` — an important distinction.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    },
    {
      "Effect": "Allow",
      "Action": "redshift:GetClusterCredentials",
      "Resource": [
        "arn:aws:redshift:REGION_NAME:ACCOUNT_ID:dbuser:CLUSTER_NAME/USERNAME",
        "arn:aws:redshift:REGION_NAME:ACCOUNT_ID:dbname:CLUSTER_NAME/DATABASE_NAME"
      ]
    }
  ]
}
```
- Click through to the Review step; choose a name for the policy, for example `transfer-service-policy` (this will be referenced in the next step), add a description, and click Create policy.
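If you script your IAM setup, the same policy document can be rendered with the placeholders filled in. This is a stdlib-only sketch; the `staging_policy` helper and the sample values are illustrative:

```python
import json

def staging_policy(bucket, region, account_id, cluster, username, database):
    """Build the staging-bucket + GetClusterCredentials policy document."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # applies to the bucket itself
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {   # applies only to the bucket's contents
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
            {
                "Effect": "Allow",
                "Action": "redshift:GetClusterCredentials",
                "Resource": [
                    f"arn:aws:redshift:{region}:{account_id}:dbuser:{cluster}/{username}",
                    f"arn:aws:redshift:{region}:{account_id}:dbname:{cluster}/{database}",
                ],
            },
        ],
    }

doc = staging_policy("my-staging-bucket", "us-east-1", "123456789012",
                     "my-cluster", "transfer_user", "analytics")
print(json.dumps(doc, indent=2))  # paste the output into the JSON tab
```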
Create role
- Navigate to the IAM service page.
- Navigate to the Roles navigation tab, and click Create role.
- Select Custom trust policy and paste the provided trust policy (from the prerequisite) to allow AssumeRole access to this role. Click Next.
- Add the permissions policy created above, and click Next.
- Enter a Role name, for example `transfer-role`, and click Create role.
- Once successfully created, search for the created role in the Roles list, click the role name, and make a note of the ARN value.
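The ARN you record should follow the standard IAM role shape, `arn:aws:iam::<12-digit account id>:role/<role name>`. A small, illustrative sanity check before sharing it:

```python
import re

# Loose pattern: 12-digit account id, then a role name (paths allowed).
ROLE_ARN_RE = re.compile(r"arn:aws:iam::\d{12}:role/[\w+=,.@/-]+")

def looks_like_role_arn(arn: str) -> bool:
    return ROLE_ARN_RE.fullmatch(arn) is not None
```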
Alternative authentication method: AWS User with HMAC Access Key ID & Secret Access Key
Role-based authentication is the preferred authentication mode for Redshift, per AWS recommendations; however, an HMAC Access Key ID & Secret Access Key is an alternative authentication method that can be used if preferred.
- Navigate to the IAM service page.
- Navigate to the Users navigation tab, and click Add users.
- Enter a User name for the service, for example `transfer-service`, and click Next.
- Under Select AWS access type, select the Access key - Programmatic access option. Click Next: Permissions.
- Click the Attach existing policies directly option, and search for the name of the policy created in the previous step. Select the policy, and click Next: Tags.
- Click Next: Review and click Create user.
- In the Success screen, record the Access key ID and the Secret access key.
Step 4: Add your destination
- Securely share your host, database, cluster, your chosen schema, IAM role ARN, and staging bucket details with us to complete the connection.