Create a Sync from BigQuery to Batch Profile attributes

Before you start

To create a BigQuery → Batch sync, you’ll need:

  • Access to the Batch dashboard

  • A BigQuery table or view containing one row per profile

  • A Google Cloud service account key (JSON) to grant Batch read access

  • A table (or view) that follows the Cloud Sync input format (see below)


1) Prepare your BigQuery table

Cloud Sync expects your BigQuery source (table or view) to include:

  1. A profile identifier (to know which profile to update)

  2. A cursor field (to know what changed since the last run)

  3. Any number of attribute columns (sent to Batch as profile attributes)


1.1 One row per profile

Your source must contain one row per profile. Each row is interpreted as an update to a single Batch profile.


1.2 Required columns

Your table (or view) must include:

Column            Required  Description
custom_id         Yes       The profile identifier in Batch
last_updated_at   Yes       Cursor used for incremental sync

Important: last_updated_at must be updated every time any synced attribute changes; otherwise, updates may not be picked up by the next run.
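
If change timestamps are spread across several columns, one way to keep the cursor honest is to compute it in a view. A minimal sketch, assuming a hypothetical raw_users table with per-column timestamps:

```sql
-- Sketch: derive a single cursor from several change timestamps.
-- `raw_users` and its column names are hypothetical.
SELECT
  user_id AS custom_id,
  plan,
  country,
  -- GREATEST returns NULL if any input is NULL, hence the IFNULL guards.
  GREATEST(
    profile_updated_at,
    IFNULL(plan_updated_at, TIMESTAMP '1970-01-01 00:00:00 UTC'),
    IFNULL(country_updated_at, TIMESTAMP '1970-01-01 00:00:00 UTC')
  ) AS last_updated_at
FROM `my_project.my_dataset.raw_users`;
```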


1.3 Attribute naming rules (BigQuery-compatible)

Cloud Sync reads BigQuery columns and converts them into Batch profile attributes.

However, BigQuery column names cannot contain characters like $, (, or ). That means you can’t use the exact Profile API formats such as:

  • url(avatar)

  • date(birthday)

  • $email_address

✅ Instead, Cloud Sync relies on prefixes in column names to represent typed or native fields.

Supported prefixes

Prefix     Meaning                                   Example
date__     Date attribute                            date__birthday
url__      URL attribute                             url__avatar
batch__    Native profile fields (instead of $...)   batch__email_address


1.4 Example schema (e-commerce)

Here’s a table format you can use as a reference:
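
A minimal DDL sketch with the columns used below (dataset, table, and type choices are illustrative, not a Cloud Sync requirement):

```sql
-- Illustrative schema; names and types are placeholders.
CREATE TABLE `my_project.crm.batch_profiles` (
  custom_id            STRING NOT NULL,    -- profile identifier in Batch
  batch__email_address STRING,             -- native email field
  plan                 STRING,             -- custom attribute
  country              STRING,             -- custom attribute
  lifetime_value       FLOAT64,            -- custom attribute
  is_vip               BOOL,               -- custom attribute
  url__avatar          STRING,             -- URL attribute
  date__birthday       DATE,               -- date attribute
  date__last_purchase  TIMESTAMP,          -- date attribute
  last_updated_at      TIMESTAMP NOT NULL  -- sync cursor, not synced as an attribute
);
```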

How this maps in Batch:

  • custom_id identifies the profile

  • batch__email_address updates the profile’s native email field

  • plan, country, lifetime_value, is_vip become attributes

  • url__avatar is interpreted as a URL attribute

  • date__birthday and date__last_purchase are interpreted as date attributes


1.5 Using a View

If your raw table doesn’t match the expected naming or format, create a BigQuery View that converts your schema into the correct conventions.

Example:
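
A minimal sketch, assuming a hypothetical raw table `my_project.crm.users_raw`:

```sql
-- Sketch: rename raw columns to Cloud Sync conventions.
CREATE OR REPLACE VIEW `my_project.crm.batch_profiles_view` AS
SELECT
  user_id           AS custom_id,             -- profile identifier
  email             AS batch__email_address,  -- native email field
  subscription_plan AS plan,
  country_code      AS country,
  avatar            AS url__avatar,           -- URL attribute
  birthday          AS date__birthday,        -- date attribute
  updated_at        AS last_updated_at        -- sync cursor
FROM `my_project.crm.users_raw`;
```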

This approach lets you:

  • rename fields with the correct prefixes (batch__, date__, url__)

  • compute a reliable last_updated_at

  • ensure you always expose one row per profile


1.6 Handling nulls

If a column value is NULL, Batch interprets it as attribute removal for that profile.

If you don’t want an attribute removed:

  • ensure your view returns a non-null value, or

  • exclude the column from the sync entirely.
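
For the first option, the view can substitute a default value. A sketch, reusing the hypothetical view from 1.5:

```sql
-- Sketch: avoid unintended attribute removal by replacing NULL with a default.
SELECT
  custom_id,
  IFNULL(plan, 'free') AS plan,  -- NULL would otherwise remove the "plan" attribute
  last_updated_at
FROM `my_project.crm.batch_profiles_view`;
```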


1.7 Attributes limits and constraints

When syncing data from BigQuery to Batch, all attributes sent through Cloud Sync must respect the same limits and constraints as the Batch Profile API. See the Profile API documentation, in particular the attributes object.

2) Create a Service Account key in Google Cloud

Batch uses a Service Account Key (JSON) to securely access your BigQuery dataset.

  1. Go to Google Cloud Console → IAM & Admin → Service Accounts

  2. Create a service account (or reuse an existing one)

  3. Generate a JSON key

  4. Grant the service account:

    • roles/bigquery.jobUser

    • Dataset-level permission: BigQuery Data Editor on the dataset containing your source table/view


3) Create the Sync in the Batch dashboard

Cloud Sync is configured from the dashboard via a dedicated Sync module.

  1. Open the Batch dashboard

  2. Go to Data → Cloud Sync

  3. Click Create Sync

  4. Select BigQuery as the source


3.1 Configure your BigQuery connection

Enter:

  • Dataset

  • Table or View

  • Upload your Service Account Key (JSON)

Batch validates the connection before continuing.


3.2 Configure profile mapping

Cloud Sync applies a simple mapping model:

  • custom_id → identifies which Batch profile to update

  • all other columns → mapped to profile attributes

  • last_updated_at → used only for incremental sync logic
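
To make the model concrete, here is a single illustrative row (values are made up) and how each column is used:

```sql
-- Sketch: one source row as Cloud Sync reads it.
SELECT
  'u_42'              AS custom_id,       -- selects which Batch profile to update
  'premium'           AS plan,            -- written as the "plan" profile attribute
  CURRENT_TIMESTAMP() AS last_updated_at; -- consumed by the cursor logic, not synced
```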


4) How incremental sync works

Cloud Sync uses incremental processing, which means it does not re-import your full dataset at every run. Instead, it fetches only the rows that changed since the last successful sync.


4.1 The last_updated_at cursor

Batch stores the last successful cursor value internally.

At each run, Batch fetches only rows where:

  • last_updated_at is greater than the last stored cursor

This makes sync runs faster, more scalable, and more cost-efficient.
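
Conceptually, each run boils down to a query of this shape (a sketch; the actual query Batch issues is internal):

```sql
-- @last_cursor is the cursor value stored after the previous successful run.
SELECT *
FROM `my_project.crm.batch_profiles_view`
WHERE last_updated_at > @last_cursor
ORDER BY last_updated_at;
```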


4.2 Inserts, updates, and deletes

Incremental syncs naturally capture:

  • ✅ inserts

  • ✅ updates

They do not automatically capture:

  • ❌ deletes

If you need deletions reflected in Batch, rely on a different pipeline, or implement soft deletes by setting all synced attributes to null in the BigQuery view when a profile is deleted.
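
A sketch of the soft-delete approach, assuming a hypothetical deleted_at column on the raw table:

```sql
-- Sketch: expose NULL attributes (=> attribute removal) once a profile is deleted,
-- and bump the cursor so the change is picked up by the next run.
SELECT
  user_id AS custom_id,
  IF(deleted_at IS NULL, subscription_plan, NULL) AS plan,
  IF(deleted_at IS NULL, country_code, NULL)      AS country,
  GREATEST(updated_at,
           IFNULL(deleted_at, TIMESTAMP '1970-01-01 00:00:00 UTC')) AS last_updated_at
FROM `my_project.crm.users_raw`;
```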


4.3 Best practices for reliable incremental syncs

To avoid missing changes:

  • Ensure last_updated_at updates every time a synced column changes

  • Avoid timestamps that only reflect partial updates

  • Use a View if you need computed fields or type conversions

  • Partition or cluster on last_updated_at for large datasets
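
For the last point, partitioning and clustering can be declared when the table is created. A sketch (names are placeholders):

```sql
-- Sketch: partition on the cursor so incremental runs scan only recent data.
CREATE TABLE `my_project.crm.batch_profiles`
PARTITION BY DATE(last_updated_at)
CLUSTER BY custom_id
AS
SELECT * FROM `my_project.crm.batch_profiles_staging`;
```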


5) Test and enable your Sync

Before enabling the schedule:

  1. Run a test sync

  2. Verify:

    • Profiles are created or updated correctly

    • batch__, date__, and url__ fields are interpreted correctly

    • Null values behave as expected (null → attribute removal)

Once enabled, Batch automatically handles:

  • batching

  • retries
