How to maintain CDC pipelines during source database version upgrades
Upgrading your source database version (e.g., PostgreSQL 14 to 16, MySQL 5.7 to 8.0) requires careful planning to maintain CDC pipeline continuity. This guide covers pre-upgrade preparation, upgrade procedures, and post-upgrade verification for each supported database type.
We recommend notifying Streamkap support about your database upgrade ahead of time. Some steps may require assistance from the Streamkap team, particularly around offset management and connector restarts.
Before upgrading any source database, complete the following preparation steps:
Verify your pipeline is healthy — no errors, low consumer lag, and the connector shows a RUNNING status
Note the current pipeline position (snapshot status, consumer lag metrics)
Consider pausing the pipeline during the upgrade window to avoid partial captures
Ensure a database backup is available and tested
Review the database-specific CDC requirements for your target version (see tabs below)
Test the upgrade in a non-production environment first, including verifying that CDC resumes correctly
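As a concrete example, on PostgreSQL the connector's replication lag can be estimated from the replication slot before you begin (slot names vary; check all slots or filter on the one configured for your Source):

```sql
-- How far each slot's confirmed position trails the current WAL position
SELECT slot_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS replication_lag
FROM pg_replication_slots;
```

A lag close to zero indicates the connector has caught up with recent changes.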
Streamkap handles brief network interruptions well. If a monitored database stops, the connector attempts to resume from the last recorded position once communication is restored. However, database version upgrades can invalidate replication positions and require additional steps beyond a simple reconnection.
Major PostgreSQL upgrades (using pg_upgrade or equivalent) drop logical replication slots. The pipeline will need to be reconfigured after the upgrade. Failing to follow the correct procedure can result in silent data loss.
PostgreSQL removes replication slots during major upgrades and does not restore them. When the connector restarts, it requests its last recorded offset, but the new replication slot cannot serve it: a slot only tracks changes from the point it was created, so the connector would skip older change events and resume from the latest log position.

Procedure:
1
Stop writes to the database
Using your database’s upgrade procedure, ensure writes to the database have stopped.
2
Allow the connector to drain all events
Allow the connector to capture all remaining change events before starting the upgrade. Ask Streamkap to confirm all events have been captured.
3
Stop the Source in Streamkap
Stop the Source in the Streamkap app. This flushes the last records and saves the last offset.
4
Upgrade the database
Stop the database and upgrade it using your standard upgrade procedure (e.g., pg_upgrade).
5
Verify wal_level
Confirm that wal_level = logical is set in the upgraded PostgreSQL configuration. Major upgrades may reset this parameter.
SQL
SHOW wal_level;
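If the parameter was reset, it can be restored with ALTER SYSTEM; note that wal_level cannot be changed at runtime:

```sql
ALTER SYSTEM SET wal_level = logical;
-- Restart PostgreSQL afterwards for the change to take effect
```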
6
Recreate the replication slot
The replication slot must be recreated after the upgrade.
7
Verify the publication
Confirm the publication for Streamkap still exists. Recreate it if necessary with the same tables:
SQL
-- Check if publication exists
SELECT * FROM pg_publication WHERE pubname = 'streamkap_pub';
-- Recreate if needed (all tables)
CREATE PUBLICATION streamkap_pub FOR ALL TABLES;
-- Or recreate with specific tables
CREATE PUBLICATION streamkap_pub FOR TABLE table1, table2, table3;
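To check for and recreate the slot itself, statements along these lines can be used. The slot name and plugin below are illustrative; use the values configured for your Streamkap Source:

```sql
-- List existing logical replication slots
SELECT slot_name, plugin, active FROM pg_replication_slots;
-- Recreate the slot if it is missing (name and plugin are examples)
SELECT pg_create_logical_replication_slot('streamkap_pgoutput_slot', 'pgoutput');
```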
8
Restore write access and resume the Source
Restore write access to the database, then resume or restart the Source in the Streamkap app.
9
Trigger a snapshot if needed
If any events were not captured before the upgrade, trigger a snapshot for the affected tables to ensure no data was missed.
If you choose different names for the replication slot or publication during the upgrade, update them in the Streamkap setup page for the relevant PostgreSQL Source. Contact Streamkap, as they may need to reset your connector’s offsets.
The same general procedure applies for Amazon RDS and Aurora PostgreSQL managed upgrades. Key considerations:
Use the RDS console or CLI to perform the version upgrade
After the upgrade, verify rds.logical_replication is still set to 1 in your parameter group
Recreate the replication slot and verify the publication as described above
For Aurora, ensure you are upgrading the primary instance (Aurora read replicas only support physical replication)
Exact behavior during RDS managed upgrades may vary depending on the upgrade type (in-place vs. blue/green deployment). Refer to your PostgreSQL platform setup guide for platform-specific configuration details.
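On RDS or Aurora PostgreSQL, the effective setting can also be confirmed from a database session after the upgrade:

```sql
SHOW rds.logical_replication;
-- Should return 'on'; if not, set rds.logical_replication = 1
-- in the parameter group and reboot the instance
```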
For full PostgreSQL source setup and troubleshooting details, see:
Major MySQL upgrades require verifying that all CDC-related configuration remains intact.

Pre-upgrade verification:

Confirm these settings are properly configured in your target version:
| Setting | Required Value | Verification Query |
| --- | --- | --- |
| binlog_format | ROW | SHOW VARIABLES LIKE 'binlog_format'; |
| binlog_row_image | FULL | SHOW VARIABLES LIKE 'binlog_row_image'; |
| log_bin | ON | SHOW VARIABLES LIKE 'log_bin'; |
| gtid_mode (recommended) | ON | SHOW VARIABLES LIKE 'gtid_mode'; |
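The four settings above can be checked in a single query:

```sql
SHOW VARIABLES WHERE Variable_name IN
  ('binlog_format', 'binlog_row_image', 'log_bin', 'gtid_mode');
```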
Upgrade behavior depends on GTID configuration:
GTID enabled (recommended)
If GTID mode is enabled, the upgrade is significantly simpler:
Stop writes to the database
Allow the connector to capture all remaining events
Perform the upgrade using your standard procedure
Verify all CDC-related settings (see table above)
Resume the Source in Streamkap — the connector resumes from the GTID position
GTID provides a global transaction identifier that survives the upgrade, allowing the connector to resume from exactly where it left off.
GTID not enabled
Without GTID, MySQL uses binlog file and position tracking. Major upgrades can change binlog filenames and positions, which means:
Stop writes to the database
Allow the connector to capture all remaining events and confirm with Streamkap
Stop the Source in Streamkap
Perform the upgrade
Verify all CDC-related settings (see table above)
Resume the Source in Streamkap
Trigger a snapshot for all affected tables, as the binlog position from before the upgrade may no longer be valid
Without GTID, there is a higher risk of data gaps during the upgrade. We strongly recommend enabling GTID before upgrading if possible. See Enable GTID for instructions.
MySQL 5.7 to 8.0 specific notes:
MySQL 8.0 changes the default authentication plugin to caching_sha2_password. If your Streamkap user was created with mysql_native_password, verify it still authenticates correctly after the upgrade
Review binlog retention settings after upgrade: CALL mysql.rds_show_configuration; (RDS) or SHOW VARIABLES LIKE 'binlog_expire_logs_seconds'; (self-hosted)
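The authentication plugin for the connector's user can be checked after the upgrade (the username below is illustrative; use the user configured for your Source):

```sql
SELECT user, host, plugin FROM mysql.user WHERE user = 'streamkap_user';
-- A mysql_native_password user should still authenticate, but verify the
-- connector can actually connect before resuming the pipeline
```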
Streamkap’s Oracle connector uses LogMiner to read redo and archive logs for CDC. After upgrading your Oracle database, verify the following:

Post-upgrade verification checklist:
Supplemental logging is enabled — Oracle upgrades may reset supplemental logging settings
SQL
-- Verify database-level supplemental logging
SELECT NAME, SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
-- SUPPLEMENTAL_LOG_DATA_MIN should be YES
Table-level supplemental logging is intact — Verify for each captured table
SQL
-- Check table-level supplemental logging
SELECT * FROM ALL_LOG_GROUPS WHERE TABLE_NAME = '{TABLE_NAME}';
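If table-level supplemental logging was lost during the upgrade, it can be re-added (schema and table names are placeholders):

```sql
ALTER TABLE {SCHEMA}.{TABLE_NAME} ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Database-level minimal supplemental logging can be restored with:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```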
ARCHIVELOG mode is still enabled
SQL
ARCHIVE LOG LIST;
-- Database log mode should be: Archive Mode
Redo log configuration is unchanged — Verify redo log sizes and groups
SQL
SELECT GROUP#, BYTES/1024/1024 SIZE_MB, STATUS FROM V$LOG ORDER BY 1;
Streamkap user privileges are intact — Verify the STREAMKAP_USER (or C##STREAMKAP_USER for CDB) retains all necessary grants, including LOGMINING, FLASHBACK ANY TABLE, and access to V$ views
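Granted system privileges can be listed for the user (adjust the grantee name for CDB deployments, e.g. C##STREAMKAP_USER):

```sql
SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'STREAMKAP_USER';
-- Expect LOGMINING, FLASHBACK ANY TABLE, and the other grants from your
-- original setup; V$ view access is granted separately via SELECT grants
```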
Upgrade procedure:
Stop writes to the database
Allow the connector to capture all remaining events
Stop the Source in Streamkap
Perform the Oracle upgrade using your standard procedure
Verify all items in the checklist above, re-enabling supplemental logging where needed
Resume the Source in Streamkap
Trigger a snapshot if supplemental logging was reset or you suspect events were missed
SQL Server CDC relies on change tables populated by capture jobs and on the SQL Server Agent service. Upgrades can disrupt both of these.

Post-upgrade verification checklist:
CDC is still enabled on the database
SQL
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = '{database}';
-- is_cdc_enabled should be 1
CDC capture jobs are running — SQL Server CDC depends on capture and cleanup jobs
SQL
EXEC sys.sp_cdc_help_jobs;
SQL Server Agent is running — The Agent must be active for CDC capture jobs to execute. If it is not running, start it through SQL Server Management Studio or via service management
CDC is enabled on all captured tables — Verify each table still has CDC tracking enabled
SQL
EXEC sys.sp_cdc_help_change_data_capture;
Streamkap user permissions are intact — Verify the streamkap_user retains SELECT privileges on the cdc schema and source tables
Upgrade procedure:
Stop writes to the database
Allow the connector to capture all remaining events
Stop the Source in Streamkap
Perform the SQL Server upgrade
Verify all items in the checklist above
Re-enable CDC on the database and tables if needed
Ensure SQL Server Agent is running
Resume the Source in Streamkap
Trigger a snapshot if you suspect any events were missed
If CDC is disabled during the upgrade process (either intentionally or by the upgrade procedure), you must re-enable it on both the database and each individual table. Change events that occur while CDC is disabled will not be captured.
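Re-enabling can be sketched as follows; schema, table, and role names are placeholders:

```sql
-- Run in the target database
EXEC sys.sp_cdc_enable_db;
-- Re-enable capture for each table
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'my_table',
    @role_name     = NULL;
```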
Streamkap’s MongoDB connector uses change streams (backed by the oplog) for CDC. MongoDB upgrade behavior depends on your deployment type and the version jump.

Replica set rolling upgrades are CDC-friendly:
Rolling upgrades (upgrading secondary nodes first, then stepping down the primary) allow the connector to continue reading change streams with minimal interruption
The connector automatically reconnects to the new primary after a stepdown
Key risks during major version upgrades:
Change stream resume tokens may not survive major version upgrades if the oplog format changes
If the resume token is invalidated, the connector cannot resume from its last position
Post-upgrade verification checklist:
Oplog size is sufficient — Verify the oplog window is large enough to cover the upgrade duration plus a buffer
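In the mongo shell, the configured oplog size and the time span it currently covers can be inspected with:

```javascript
// Prints oplog size and the window between the first and last oplog entries
rs.printReplicationInfo()
```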
Replica set is healthy — All members should be in a healthy state
rs.status()
Streamkap user privileges are intact — Verify the streamkap_user retains readAnyDatabase and read on the local database
Change streams are functional — Verify the connector can open a change stream on the target collections
Upgrade procedure:
Verify your oplog window is large enough to cover the expected upgrade duration
If performing a rolling upgrade, allow the connector to continue running during secondary node upgrades
Before stepping down the primary, allow the connector to capture all pending events
Stop the Source in Streamkap if the resume token may be invalidated (major version upgrade)
Complete the upgrade
Verify all items in the checklist above
Resume the Source in Streamkap
Trigger a snapshot if the resume token was invalidated or you suspect data gaps
A full snapshot is required in specific situations. Trigger a snapshot if any of the following apply:
| Scenario | Database | Reason |
| --- | --- | --- |
| Replication slot was dropped | PostgreSQL | Major upgrades drop logical replication slots; new slots only track changes from creation |
| Binlog position was lost | MySQL (without GTID) | Binlog file names and positions can change during major upgrades |
| Resume token was invalidated | MongoDB | Major version upgrades may change the oplog format, invalidating existing resume tokens |
| CDC was disabled during upgrade | SQL Server | Change events are not captured while CDC is disabled |
| Supplemental logging was reset | Oracle | Events during the period without supplemental logging are not captured |
| Data gaps are suspected | Any | If you cannot confirm all events were captured before the upgrade |
If not all change events were captured before stopping the Source and upgrading the database, perform a snapshot after the upgrade completes to ensure no change events were missed. See Snapshots & Backfilling for detailed snapshot procedures.
Cloud-managed database upgrades (Amazon RDS, Aurora, Google Cloud SQL, Azure) vary depending on the provider’s upgrade mechanism (in-place, blue/green deployment, read replica promotion, etc.). The general procedures in this guide apply, but consult your cloud provider’s documentation for specific upgrade behavior.
Key points for managed database upgrades:
Amazon RDS / Aurora: Managed upgrades may handle some infrastructure steps automatically, but replication slots (PostgreSQL) and CDC configuration (SQL Server) still need manual verification
Google Cloud SQL: Similar to RDS — verify CDC-related parameters in your database flags after the upgrade
Azure Database: Check that server parameters and CDC settings are preserved through the upgrade process
MongoDB Atlas: Atlas manages rolling upgrades automatically for replica sets, but verify your connector resumes correctly after the upgrade completes