Snapshot Options
When triggering a snapshot, you can choose from three options:
Filtered Snapshot
Apply filter conditions to capture specific rows. Streaming continues during snapshot.
- Best for capturing a subset of data based on conditions (e.g., date ranges, specific statuses)
- Uses incremental watermarking to capture data in small chunks
- Requires tables to have primary keys (or a Surrogate Key)
- Can continue from where it left off on failure or cancellation
- SQL-based Sources (PostgreSQL, MySQL, Oracle, etc.): Use SQL WHERE clause syntax
- Document/NoSQL Sources (MongoDB, DocumentDB, DynamoDB): Use JSON filter expressions (see the example after this list)
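For instance, to snapshot one month of rows with a specific status, a SQL-based source could use a predicate like the sketch below (the `status` and `created_at` columns are illustrative, not Streamkap-defined); a document source would express the same filter as JSON, e.g. `{"status": "active"}`.

```sql
-- Illustrative WHERE-clause content for a filtered snapshot on a
-- SQL-based source. Column names are hypothetical.
status = 'active'
  AND created_at >= '2025-01-01'
  AND created_at < '2025-02-01'
```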
Full Snapshot
Capture all rows from selected tables. Streaming continues during snapshot.
- Best for complete data backfills where you need all historical data
- Uses incremental watermarking to capture data in small chunks
- Requires tables to have primary keys (or a Surrogate Key)
- Can continue from where it left off on failure or cancellation
Blocking Snapshot
Capture all rows while pausing streaming. Streaming resumes automatically after the snapshot completes.
- Required for keyless tables: Tables without primary keys cannot use incremental snapshots (unless a Surrogate Key is specified)
- Point-in-time consistency: Guarantees a consistent view of data at a specific moment
- Faster for large tables: Can be more performant since it captures all data in one operation
- Multiple tables in parallel: Depending on connector configuration, multiple tables can be snapshotted simultaneously
Blocking snapshots do not currently support filters. To apply filters, use the Filtered Snapshot option instead (requires primary keys).
Surrogate Key
Available under Advanced Options when configuring Filtered or Full snapshots.
- Keyless tables: Tables without primary keys can use a surrogate key (e.g., a timestamp or auto-increment column) for incremental snapshots instead of requiring a blocking snapshot
- Performance optimization: A different column may provide better chunking performance (e.g., using `created_at` instead of a UUID primary key for more efficient range queries; see the sketch below)

- Only single-column surrogate keys are supported (composite keys are not available)
- The surrogate key column must exist in the table and contain sortable values
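If the chosen surrogate key column is not indexed, each chunk may require a full table scan. As a minimal sketch (the `orders` table and `created_at` column are hypothetical), indexing the column on the source database keeps chunked range reads efficient:

```sql
-- Hypothetical example: index the surrogate key column so each
-- snapshot chunk becomes an efficient index range scan.
CREATE INDEX idx_orders_created_at ON orders (created_at);
```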
Snapshot Lifecycle
| When | Behavior |
|---|---|
| At connector creation | The connector starts in streaming mode, reading any change data seen from this point onwards. No snapshots are triggered automatically. |
| After connector creation | You can trigger ad-hoc snapshots for any or all of the tables the connector is configured to capture. A confirmation prompt is required before the snapshot begins. |
| Pipeline creation and edit | You can choose to trigger snapshots for the topics the pipeline will stream to your destination. A confirmation prompt is required before the snapshot begins. |
Behavior
Deletions are not captured during snapshots. Snapshots read existing rows at a point in time; deletion events can only be processed during streaming, or replayed if Streamkap data retention policies allow.
Filtered & Full Snapshots
These snapshots use incremental watermarking, capturing data in small chunks to minimize database impact. Streaming continues uninterrupted while historical data is being backfilled. When snapshotting multiple tables, tables are processed sequentially, one at a time; each table must complete before the next begins.
On failure: The snapshot resumes from where it left off. If it cannot resume automatically, you can re-trigger it at the Connector or Table level once the issue is resolved.
On cancellation: The snapshot stops at its current progress. Streaming continues uninterrupted. You can resume the snapshot later from where it left off.
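As a rough illustration (not Streamkap's literal implementation), each incremental snapshot chunk behaves like a keyset-paginated read, and the watermark records how far the snapshot has progressed; the `orders` table, its `id` primary key, and the chunk size below are all hypothetical:

```sql
-- Illustrative chunked read for an incremental (watermarking) snapshot.
-- The highest id captured becomes the next watermark, which is why the
-- snapshot can resume after a failure or cancellation without
-- re-reading earlier chunks.
SELECT *
FROM orders
WHERE id > :last_watermark   -- highest key captured by the previous chunk
ORDER BY id
LIMIT 1024;                  -- illustrative chunk size
```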
Event ordering during concurrent changes
When rows are modified while an incremental snapshot is running, event ordering may vary because the Connector streams and snapshots in parallel:
- Updates: You may receive events in different orders (`read` → `update`, `update` → `read`, or just `update`)
- Deletes: You may receive `read` → `delete`, or just `delete`
Blocking Snapshots
These snapshots capture all data in a single transaction. Streaming pauses until the snapshot completes, then resumes automatically. Multiple tables may be processed in parallel depending on connector configuration.
On failure: Streaming resumes immediately. Re-trigger the snapshot once the issue is resolved; it will start from the beginning, since blocking snapshots capture all rows in one operation.
On cancellation: Because streaming is paused during a blocking snapshot, cancelling one causes Streamkap to restart the connector to terminate the snapshot immediately. Streaming resumes after the restart.
Duplicate events after completion
A brief delay exists between signaling a blocking snapshot and when streaming actually pauses. This may result in some duplicate events being emitted after the snapshot completes. Ensure your destination can handle idempotent writes or has deduplication enabled.
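If your destination does not deduplicate automatically, a common pattern is to keep only the latest event per key. Below is a sketch in standard SQL, assuming hypothetical names `raw_orders` (the landing table), `id` (the row key), and `_event_ts` (the event timestamp); none of these are Streamkap-defined:

```sql
-- Illustrative deduplication: rank events per key by recency and keep
-- only the newest one. All table and column names are hypothetical.
SELECT *
FROM (
  SELECT t.*,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY _event_ts DESC) AS rn
  FROM raw_orders t
) ranked
WHERE rn = 1;
```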
A high-performance, bulk parallel snapshot feature is planned for future releases.
Triggering a Snapshot
You can trigger an ad-hoc snapshot at the Source level or per Table from the Connector’s page.
Source Level Snapshot
This will trigger an incremental snapshot for all tables/topics captured by the Source:
Snapshot Options Dialog
When triggering a source-level snapshot, you can choose between Full Snapshot or Blocking Snapshot:
Table/Topic Level Snapshot
This will trigger a snapshot for the selected tables/topics only:
Snapshot Options Dialog
When triggering a table/topic snapshot, you can choose the snapshot type and configure advanced options:
Filtered Snapshot Configuration
Select Filtered Snapshot to apply filter conditions. Filter syntax varies by Source type:
- SQL-based Sources: Use SQL WHERE clause syntax (e.g., `created_at >= '2025-01-01' AND created_at < '2025-02-01'`)
- Document/NoSQL Sources: Use JSON filter expressions (e.g., `{"status": "active", "created_at": {"$gte": "2025-01-01", "$lt": "2025-02-01"}}`)

Best Practices for Filtered Snapshots
When using Filtered snapshots, we strongly recommend:
- Use closed range filters when applying comparative operators on timestamp or date fields. For example: `created_at >= '2025-01-01' AND created_at < '2025-02-01'`. Closed ranges ensure you capture all intended data without gaps or overlaps (see the sketch after this list).
- Filter on indexed or primary key fields for optimal performance. Filtering on columns that are part of your table’s indices or primary key allows the database to efficiently locate matching rows, significantly reducing the load on your source database.
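For example, two consecutive monthly snapshots with closed (half-open) ranges tile cleanly, so no row is missed or captured twice:

```sql
-- January snapshot filter
created_at >= '2025-01-01' AND created_at < '2025-02-01'
-- February snapshot filter: starts exactly where January ended
created_at >= '2025-02-01' AND created_at < '2025-03-01'
```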
Confirmation Prompt
After initiating a snapshot, you must confirm your action by typing “snapshot” in the confirmation dialog:
Snapshot Progress
Upon triggering a snapshot, the Connector status will update to reflect the snapshot operation:

Cancelling a Snapshot
You can cancel an in-progress snapshot from the Connector’s quick actions menu: