Upgrading your source database version (e.g., PostgreSQL 14 to 16, MySQL 5.7 to 8.0) requires careful planning to maintain CDC pipeline continuity. This guide covers pre-upgrade preparation, upgrade procedures, and post-upgrade verification for each supported database type.
We recommend notifying Streamkap support about your database upgrade ahead of time. Some steps may require assistance from the Streamkap team, particularly around offset management and connector restarts.

General Pre-Upgrade Checklist

Before upgrading any source database, complete the following preparation steps:
  1. Verify your pipeline is healthy — no errors, low consumer lag, and the connector shows a RUNNING status
  2. Note the current pipeline position (snapshot status, consumer lag metrics)
  3. Consider pausing the pipeline during the upgrade window to avoid partial captures
  4. Ensure a database backup is available and tested
  5. Review the database-specific CDC requirements for your target version (see the sections below)
  6. Test the upgrade in a non-production environment first, including verifying that CDC resumes correctly
Streamkap handles brief network interruptions well. If a monitored database stops, the connector attempts to resume from the last recorded position once communication is restored. However, database version upgrades can invalidate replication positions and require additional steps beyond a simple reconnection.

Database-Specific Upgrade Guides

PostgreSQL

Minor Version Upgrades (e.g., 14.8 to 14.12)

Minor version upgrades are generally safe for CDC pipelines:
  • Replication slots are preserved across minor upgrades
  • The pipeline reconnects automatically after the database restarts
  • No snapshot is typically required
Verify that wal_level = logical remains set after the upgrade, as minor upgrades should not change this.
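After a minor upgrade, a quick check confirms that logical decoding is still configured and the replication slot survived:
SQL
-- Confirm logical decoding is still enabled
SHOW wal_level;

-- Confirm the replication slot was preserved
SELECT slot_name, plugin, active FROM pg_replication_slots;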

Major Version Upgrades (e.g., 14 to 16)

Major PostgreSQL upgrades (using pg_upgrade or equivalent) drop logical replication slots. The pipeline will need to be reconfigured after the upgrade. Failing to follow the correct procedure can result in silent data loss.
PostgreSQL removes logical replication slots during major upgrades and does not restore them. When the connector restarts, it requests its last known offset, but the new slot cannot return it: a replication slot only tracks changes from the point it was created, so the connector would skip older change events and resume from the latest log position.

Procedure:
1. Stop writes to the database
Using your database's upgrade procedure, ensure writes to the database have stopped.

2. Allow the connector to drain all events
Allow the connector to capture all remaining change events before starting the upgrade. Ask Streamkap to confirm that all events have been captured.

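One way to gauge whether the connector has drained the log is to compare the slot's confirmed position with the current WAL position; a sketch, assuming the default Streamkap slot name:
SQL
-- Unconfirmed WAL for the Streamkap slot; should approach zero once all events are captured
SELECT slot_name,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS lag
FROM pg_replication_slots
WHERE slot_name = 'streamkap_pgoutput_slot';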
3. Stop the Source in Streamkap
Stop the Source in the Streamkap app. This flushes the last records and saves the final offset.

4. Upgrade the database
Stop the database and upgrade it using your standard upgrade procedure (e.g., pg_upgrade).

5. Verify wal_level
Confirm that wal_level = logical is set in the upgraded PostgreSQL configuration. Major upgrades may reset this parameter.
SQL
SHOW wal_level;
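If wal_level is not logical, it can be set on self-managed PostgreSQL and applied with a restart (on RDS/Aurora, set rds.logical_replication = 1 in the parameter group instead):
SQL
-- Requires superuser privileges; takes effect after a server restart
ALTER SYSTEM SET wal_level = logical;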

6. Recreate the replication slot
The replication slot must be recreated after the upgrade:
SQL
SELECT pg_create_logical_replication_slot('streamkap_pgoutput_slot', 'pgoutput');
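Before resuming the Source, you can confirm the new slot exists:
SQL
-- The slot should be listed with the pgoutput plugin; it becomes active once the connector reconnects
SELECT slot_name, plugin, active
FROM pg_replication_slots
WHERE slot_name = 'streamkap_pgoutput_slot';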

7. Verify or recreate the publication
Confirm the publication for Streamkap still exists. Recreate it if necessary with the same tables:
SQL
-- Check if publication exists
SELECT * FROM pg_publication WHERE pubname = 'streamkap_pub';

-- Recreate if needed (all tables)
CREATE PUBLICATION streamkap_pub FOR ALL TABLES;

-- Or recreate with specific tables
CREATE PUBLICATION streamkap_pub FOR TABLE table1, table2, table3;
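To confirm exactly which tables the publication covers after recreating it:
SQL
-- Lists every table included in the publication
SELECT schemaname, tablename
FROM pg_publication_tables
WHERE pubname = 'streamkap_pub';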

8. Restore write access and resume the Source
Restore write access to the database, then resume or restart the Source in the Streamkap app.

9. Trigger a snapshot if needed
If any events were not captured before the upgrade, trigger a snapshot for the affected tables to ensure no data was missed.
If you choose different names for the replication slot or publication during the upgrade, update them in the Streamkap setup page for the relevant PostgreSQL Source. Contact Streamkap, as they may need to reset your connector’s offsets.

RDS / Aurora PostgreSQL Specifics

The same general procedure applies for Amazon RDS and Aurora PostgreSQL managed upgrades. Key considerations:
  • Use the RDS console or CLI to perform the version upgrade
  • After the upgrade, verify rds.logical_replication is still set to 1 in your parameter group
  • Recreate the replication slot and verify the publication as described above
  • For Aurora, ensure you are upgrading the primary instance (Aurora read replicas only support physical replication)
Exact behavior during RDS managed upgrades may vary depending on the upgrade type (in-place vs. blue/green deployment). Refer to your PostgreSQL platform setup guide for platform-specific configuration details.
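A quick in-database check after an RDS or Aurora upgrade (both should report logical replication enabled):
SQL
-- On RDS/Aurora, rds.logical_replication = 1 should result in wal_level = logical
SHOW rds.logical_replication;
SHOW wal_level;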
For full PostgreSQL source setup and troubleshooting details, see the PostgreSQL Source documentation.

Post-Upgrade Verification Checklist

After completing the upgrade for any database type, verify the following:
  1. The pipeline reconnects and shows a RUNNING status in the Streamkap app
  2. New data changes (inserts, updates, deletes) are being captured — insert a test row and verify it appears at the destination
  3. Consumer group lag is decreasing steadily
  4. The DLQ (Dead Letter Queue) contains no error messages related to the upgrade
  5. Row counts match between source and destination for key tables (no gaps)
  6. The pipeline remains healthy over the next 24-48 hours (some issues surface with a delay)
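A minimal end-to-end check for item 2 is to write a marker row to a captured table and watch for it at the destination; a sketch, using a hypothetical table and column names:
SQL
-- Hypothetical table; use any table included in your publication or captured table list
INSERT INTO public.cdc_healthcheck (note, created_at)
VALUES ('post-upgrade verification', now());
-- Once the row appears at the destination, inserts are flowing; update or
-- delete it to verify those event types as well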

When to Snapshot

A full snapshot is required in specific situations. Trigger a snapshot if any of the following apply:
Scenario | Database | Reason
Replication slot was dropped | PostgreSQL | Major upgrades drop logical replication slots; new slots only track changes from creation
Binlog position was lost | MySQL (without GTID) | Binlog file names and positions can change during major upgrades
Resume token was invalidated | MongoDB | Major version upgrades may change the oplog format, invalidating existing resume tokens
CDC was disabled during upgrade | SQL Server | Change events are not captured while CDC is disabled
Supplemental logging was reset | Oracle | Events during the period without supplemental logging are not captured
Data gaps are suspected | Any | If you cannot confirm all events were captured before the upgrade
If you cannot confirm that all change events were captured before stopping the Source and upgrading the database, perform a snapshot after the upgrade completes to ensure no change events were missed. See Snapshots & Backfilling for detailed snapshot procedures.

Cloud-Managed Database Considerations

Cloud-managed database upgrades (Amazon RDS, Aurora, Google Cloud SQL, Azure) vary depending on the provider’s upgrade mechanism (in-place, blue/green deployment, read replica promotion, etc.). The general procedures in this guide apply, but consult your cloud provider’s documentation for specific upgrade behavior.
Key points for managed database upgrades:
  • Amazon RDS / Aurora: Managed upgrades may handle some infrastructure steps automatically, but replication slots (PostgreSQL) and CDC configuration (SQL Server) still need manual verification
  • Google Cloud SQL: Similar to RDS — verify CDC-related parameters in your database flags after the upgrade
  • Azure Database: Check that server parameters and CDC settings are preserved through the upgrade process
  • MongoDB Atlas: Atlas manages rolling upgrades automatically for replica sets, but verify your connector resumes correctly after the upgrade completes