Why Heartbeats Matter
Streamkap connectors use offsets (bookmarks) to track their position in the database's change log (WAL, binlog, redo log, or change stream). When no data changes occur for an extended period, these offsets can become stale. If the database rotates or purges its change log files before the connector advances past them, you risk:

- Data loss — the connector can no longer read the changes it missed.
- Log file accumulation — some databases retain logs indefinitely until the consumer advances, consuming disk space.
- Connector restart failures — a stale offset may point to a log position that no longer exists.
How Heartbeats Work
Heartbeats keep the Connector active and its offsets fresh during quiet periods, so it never loses its place in the log or change stream. There are two layers of heartbeat protection:

Layer 1: Connector heartbeats (enabled by default)

The Connector periodically emits heartbeat messages to an internal topic, even when no actual data changes are detected. This keeps offsets fresh and prevents staleness. No configuration is necessary for this layer; it is enabled automatically, and we recommend keeping it enabled for all deployments.

Layer 2: Source database heartbeats (recommended)
Why we recommend configuring Layer 2

While Layer 2 is crucial for low-traffic or intermittent databases, we recommend configuring it for all deployments. It provides additional resilience and helps prevent issues during periods of inactivity. Layer 2 works through a dedicated heartbeat table in your source database, which is updated in one of two ways depending on your connection type:
- Read-write connections (when Read only is No during Streamkap Setup): The Connector updates the heartbeat table directly.
- Read-only connections (when Read only is Yes during Streamkap Setup): A scheduled job on the primary database updates the heartbeat table, and these changes replicate to the read replica for the Connector to consume.
Read-only connections therefore require a job scheduler (such as pg_cron for PostgreSQL or event_scheduler for MySQL) on your source database.
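For a PostgreSQL read-only connection, the pieces above can be sketched as follows. This is illustrative only: the schema, table, and column names (`streamkap`, `streamkap_heartbeat`, `last_update`) are assumptions drawn from the troubleshooting section of this page, so follow the setup guide for your connector for the exact DDL.

```sql
-- Sketch only: object names are assumed, not authoritative.
CREATE SCHEMA IF NOT EXISTS streamkap;

CREATE TABLE IF NOT EXISTS streamkap.streamkap_heartbeat (
    id          INT PRIMARY KEY,
    text_field  TEXT,
    last_update TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Seed the single row the scheduled job will keep touching.
INSERT INTO streamkap.streamkap_heartbeat (id, text_field)
VALUES (1, 'test_heartbeat')
ON CONFLICT (id) DO NOTHING;

-- On the primary, schedule a once-per-minute update with pg_cron;
-- the change then replicates to the read replica for the Connector.
SELECT cron.schedule(
    'streamkap_heartbeat',
    '* * * * *',
    $$UPDATE streamkap.streamkap_heartbeat
      SET last_update = now() WHERE id = 1$$
);
```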
Recommended Interval
The default heartbeat interval is 1 minute, which works well for most deployments. This interval balances the need for timely offset advancement against the minimal overhead of a single row update.

| Scenario | Recommended interval |
|---|---|
| Standard deployments | 1 minute |
| Very low-traffic databases (hours between changes) | 1 minute |
| High-traffic databases (changes every few seconds) | 1–5 minutes (heartbeats are less critical but still recommended) |
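As an illustration of the 1-minute default on MySQL, a heartbeat event might look like the sketch below. The database, table, and column names are assumptions based on the troubleshooting section; use the names from your setup guide.

```sql
-- Sketch: assumes the streamkap.streamkap_heartbeat table from
-- the setup instructions and a running event scheduler.
SET GLOBAL event_scheduler = ON;

CREATE EVENT IF NOT EXISTS streamkap.streamkap_heartbeat_event
ON SCHEDULE EVERY 1 MINUTE
DO
  UPDATE streamkap.streamkap_heartbeat
  SET last_update = NOW()
  WHERE id = 1;
```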
Setup Instructions
Heartbeat configuration varies by database type. Follow the setup guide for your specific source connector:

MySQL / MariaDB:

- MySQL (Generic)
- Amazon RDS MySQL
- Amazon RDS Aurora MySQL
- Google Cloud MySQL
- Azure MySQL Database
- MariaDB (Generic)
- Amazon RDS MariaDB
PostgreSQL:

- PostgreSQL (Self-hosted)
- Amazon RDS PostgreSQL
- Amazon RDS Aurora PostgreSQL
- Amazon RDS PostgreSQL Serverless
- Google Cloud SQL PostgreSQL
- Azure Database for PostgreSQL
- Neon
- AlloyDB
- Supabase
Troubleshooting
Heartbeat table not created or not found
Symptoms: The connector logs errors about a missing heartbeat table, or heartbeat events are not being generated.

Solutions:

- Verify the heartbeat table exists in the correct schema. Run a `SELECT` query against it to confirm.
- For PostgreSQL, check that the `streamkap` schema exists: `SELECT schema_name FROM information_schema.schemata WHERE schema_name = 'streamkap';`
- For MySQL/MariaDB, check that the `streamkap` database exists: `SHOW DATABASES LIKE 'streamkap';`
- For Oracle, remember the table is in the `STREAMKAP_USER` schema, not a separate `streamkap` schema.
- Ensure the initial row has been inserted (`INSERT INTO ... VALUES ('test_heartbeat')`).
Insufficient permissions on the heartbeat table
Symptoms: The connector cannot read from or write to the heartbeat table. Errors reference permission denied or access issues.

Solutions:

- Verify the Streamkap user has `SELECT`, `UPDATE`, `INSERT`, and `DELETE` permissions on the heartbeat table.
- For PostgreSQL, also verify `USAGE` permission on the `streamkap` schema.
- For SQL Server, verify the `streamkap_role` has the required grants.
- For Oracle, verify permissions are granted to both `STREAMKAP_USER` and `C##STREAMKAP_USER`.
- Re-run the `GRANT` statements from the setup instructions for your database type.
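For PostgreSQL, the grants described above look roughly like the sketch below. The role name `streamkap_user` is an assumption; substitute the user created during your Streamkap setup.

```sql
-- Sketch: grant the Connector's role access to the heartbeat table.
GRANT USAGE ON SCHEMA streamkap TO streamkap_user;
GRANT SELECT, INSERT, UPDATE, DELETE
  ON streamkap.streamkap_heartbeat TO streamkap_user;
```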
Heartbeat not advancing (read-only connections)
Symptoms: The heartbeat table exists and has a row, but the `last_update` timestamp is not being updated.

Solutions:

- PostgreSQL: Verify the `pg_cron` job is running: `SELECT * FROM cron.job;` and check `SELECT * FROM cron.job_run_details ORDER BY start_time DESC LIMIT 5;` for errors.
- MySQL/MariaDB: Verify the event scheduler is enabled: `SHOW VARIABLES WHERE VARIABLE_NAME = 'event_scheduler';` should return `ON`. Check event status: `SHOW EVENTS IN streamkap;`
- MongoDB: Verify your scheduled trigger or cron job is executing. For Atlas, check the trigger logs in the App Services console.
- Confirm the scheduled job is updating the correct row (typically `WHERE id = 1`).
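A quick way to confirm whether the heartbeat row is actually advancing is to check its age directly (PostgreSQL shown; the `streamkap.streamkap_heartbeat` and `last_update` names are those used elsewhere on this page):

```sql
-- If this age keeps growing past the heartbeat interval,
-- the scheduled job is not firing.
SELECT id, last_update, now() - last_update AS age
FROM streamkap.streamkap_heartbeat;
```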
Heartbeat table not included in publication or CDC
Symptoms: The heartbeat table is being updated, but the connector does not detect the changes.

Solutions:

- PostgreSQL: If you created a publication for specific tables, add the heartbeat table: `ALTER PUBLICATION streamkap_pub ADD TABLE streamkap.streamkap_heartbeat;`
- SQL Server: Verify CDC is enabled on the heartbeat table using `SELECT is_tracked_by_cdc FROM sys.tables WHERE name = 'streamkap_heartbeat';` — the result should be `1`.
- MongoDB: Ensure the heartbeat collection is included in the connector's namespace configuration. If you configured specific databases/collections, add `streamkap.streamkap_heartbeat`.
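On PostgreSQL you can confirm the heartbeat table is part of the publication by querying the catalog (the publication name `streamkap_pub` follows the example above; yours may differ):

```sql
-- Lists every table in the publication;
-- streamkap_heartbeat should appear in the output.
SELECT schemaname, tablename
FROM pg_publication_tables
WHERE pubname = 'streamkap_pub';
```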
Connector still losing position despite heartbeats
Symptoms: Even with heartbeats configured, the connector reports stale offsets or cannot resume from its last position.

Solutions:

- Verify that both Layer 1 (connector heartbeats) and Layer 2 (database heartbeats) are active.
- Check your database's log retention settings:
  - PostgreSQL: Ensure `wal_keep_size` or `wal_keep_segments` is sufficient. See Monitoring the PostgreSQL WAL Log.
  - MySQL: Verify `binlog_expire_logs_seconds` (or `expire_logs_days`) is set to at least 3 days. See MySQL Low Volume Log Rotation.
- Ensure the heartbeat interval (1 minute) is shorter than your log rotation period.
- Contact Streamkap support if the issue persists.
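To inspect the retention settings mentioned above, queries along these lines can help (the values shown in comments are illustrative, not required settings):

```sql
-- PostgreSQL: how much WAL is kept for lagging consumers.
SHOW wal_keep_size;

-- MySQL 8.0+: binlog retention in seconds (259200 = 3 days).
SELECT @@binlog_expire_logs_seconds;
-- To raise it persistently:
-- SET PERSIST binlog_expire_logs_seconds = 259200;
```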
Quick Reference
| Database | Heartbeat table | Schema/Location | Read-only support | Scheduler for read-only |
|---|---|---|---|---|
| PostgreSQL | streamkap_heartbeat | streamkap schema | Yes | pg_cron extension |
| MySQL | streamkap_heartbeat | streamkap database | Yes | MySQL Event Scheduler |
| Oracle | STREAMKAP_HEARTBEAT | STREAMKAP_USER schema | No | N/A |
| SQL Server | streamkap_heartbeat | streamkap schema | No | N/A |
| MariaDB | streamkap_heartbeat | streamkap database | Yes | MariaDB Event Scheduler |
| MongoDB | streamkap_heartbeat | streamkap database (collection) | N/A (always external) | Atlas Triggers / cron / K8s CronJob |