Schema
Schema errors occur when there is a mismatch between the source data schema and what the destination expects — including type conflicts, evolution failures, and unsupported data types.

Schema mismatch — column type conflict at destination
Cause: The source column type changed (e.g., INTEGER changed to TEXT), or the destination column type does not match the incoming data type. This commonly happens after a schema change at the source that was not reflected at the destination.

Resolution:
- Compare the source schema with the destination table schema
- Alter the destination column to match the expected type, or enable schema evolution to handle new columns automatically
- If using transforms, verify the transform output schema aligns with the destination
- Check the DLQ topic for affected messages and confirm new data flows correctly after the fix
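As an illustrative sketch, assuming the destination supports in-place type changes (PostgreSQL-style syntax; the table and column names are hypothetical):

```sql
-- Hypothetical example: the source column changed from INTEGER to TEXT,
-- so widen the destination column to match. Names are illustrative;
-- syntax varies by destination warehouse.
ALTER TABLE public.orders
  ALTER COLUMN order_ref TYPE TEXT;
```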
Decimal precision overflow
- Contact Streamkap support to adjust how decimal values are handled on your source connector — Streamkap can configure it to convert decimals to floating-point numbers or represent them as strings, avoiding precision overflow
- Review the source table’s column definitions for excessively large precision values
Schema evolution permission failure (Snowflake)
Cause: Schema evolution on Snowflake requires the OWNERSHIP privilege on the target tables, not just ALTER. When Snowpipe Streaming detects new columns in incoming data, it must alter the destination table, which Snowflake restricts to the table owner.

Resolution:
- Grant `OWNERSHIP` on existing and future tables to the Streamkap role
- Alternatively, pre-create destination tables with the expected columns and manage schema changes manually
- Restart the destination connector after applying permission changes
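A minimal sketch of the ownership grant, assuming a database `ANALYTICS`, schema `STREAMKAP`, and role `STREAMKAP_ROLE` (all illustrative names):

```sql
-- Transfer ownership of current tables, keeping existing grants intact
GRANT OWNERSHIP ON ALL TABLES IN SCHEMA ANALYTICS.STREAMKAP
  TO ROLE STREAMKAP_ROLE COPY CURRENT GRANTS;
-- Ensure tables created later are also owned by the role
GRANT OWNERSHIP ON FUTURE TABLES IN SCHEMA ANALYTICS.STREAMKAP
  TO ROLE STREAMKAP_ROLE;
```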
Null constraint violation at destination
Cause: The source contains NULL values in a column that the destination defines as NOT NULL, a transform removed or nullified a required field, or schema evolution added a new NOT NULL column at the destination without a default value.

Resolution:
- Identify the column with the null violation from the error message
- Either relax the `NOT NULL` constraint at the destination or add a default value
- If the source legitimately contains nulls, add a transform to provide a default value before delivery
- Verify that schema evolution settings are compatible between source and destination
Two capture instances already exist (SQL Server)
- Identify the oldest capture instance by running:
- Disable the oldest capture instance, using the `create_date` column to identify it
- Retry the change table refresh script
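The steps above can be sketched as follows (the schema, table, and capture instance names are hypothetical):

```sql
-- 1. List capture instances with their creation dates
SELECT capture_instance, create_date
FROM cdc.change_tables
ORDER BY create_date;

-- 2. Disable the oldest capture instance found above
EXEC sys.sp_cdc_disable_table
  @source_schema = N'dbo',
  @source_name = N'orders',
  @capture_instance = N'dbo_orders';
```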
Table rename not automatically handled
- Add the renamed table to your Source Connector and Pipeline configuration
- Trigger a snapshot for the newly added table
- The old table’s destination data remains intact but stops receiving new change events
MySQL — Schema isn't known to this connector
- Contact Streamkap support for schema history recovery
- Consider enabling Capture Only Captured Tables DDL in the source’s Advanced settings to limit schema history scope
- After recovery, a snapshot of affected tables may be required
BigDecimal type conflict in transforms
- Use explicit type casting in the transform code to convert the decimal to a specific type (e.g., `Number()` or `.toString()`)
- Contact Streamkap support to check how decimals are handled on the source connector — adjusting the decimal handling can prevent this conflict
- Test the transform with sample data in the Implementation tab before deploying
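A minimal sketch of explicit casting inside a JavaScript transform; the record shape and the field name `price` are assumptions for illustration:

```javascript
// Hypothetical transform: the incoming decimal may arrive as a string or a
// number depending on how the source connector handles decimals.
function transform(record) {
  const value = record.value;
  const priceNum = Number(value.price); // numeric type for arithmetic; may round very large decimals
  const priceStr = String(value.price); // string preserves the exact representation
  return { ...record, value: { ...value, price: priceNum, price_text: priceStr } };
}
```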
Permission
Permission errors occur when the connector’s database user or cloud IAM role lacks the required privileges to perform an operation.

Snowflake Insufficient privileges
Cause: The Streamkap role lacks a required privilege, such as OWNERSHIP on tables.

Resolution:
- Verify the Snowflake role has the required privileges by running Script #2 from the Snowflake troubleshooting section
- For schema evolution, grant `OWNERSHIP` on tables (see Schema Evolution Permissions)
- Ensure the role is granted to the user and set as the default role:
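The role assignment can be sketched as (user and role names are illustrative):

```sql
GRANT ROLE STREAMKAP_ROLE TO USER STREAMKAP_USER;
ALTER USER STREAMKAP_USER SET DEFAULT_ROLE = STREAMKAP_ROLE;
```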
AWS IAM AssumeRole access denied
- Update the IAM role’s trust policy to include Streamkap’s external ID and role ARN
- Verify the IAM role has the required service permissions (e.g., `s3:PutObject`, `dynamodb:DescribeStream`)
- Confirm the role ARN in the Streamkap connector configuration matches the actual IAM role
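A trust policy of roughly this shape is typical; the account ID, role name, and external ID below are placeholders for the values Streamkap provides, not real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<STREAMKAP_ACCOUNT_ID>:role/<STREAMKAP_ROLE_NAME>" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "<EXTERNAL_ID>" } }
    }
  ]
}
```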
Source database access denied or missing privileges
Cause: The connector’s database user lacks required privileges (such as REPLICATION, SELECT, or CDC-specific permissions).

Resolution:
- PostgreSQL: Verify the user has the `REPLICATION` privilege and `SELECT` on captured tables. Check that `pg_hba.conf` allows the connection. See PostgreSQL Source FAQ — Troubleshooting
- MySQL: Ensure the user has `REPLICATION SLAVE`, `REPLICATION CLIENT`, and `SELECT` privileges. See MySQL Source FAQ — Troubleshooting
- Oracle: Verify LogMiner privileges and `SELECT` on captured tables. See Oracle Source FAQ — Troubleshooting
- SQL Server: Ensure the user has the `db_owner` role or equivalent CDC privileges. See SQL Server Source FAQ — Troubleshooting
Kafka topic authorization failed
Kafka group authorization failed
Snapshot fails with insufficient permissions
Cause: The connector’s database user lacks SELECT or READ privileges on one or more tables being snapshotted, or the signal table permissions are missing.

Resolution:
- Verify the connector’s database user has `SELECT` privileges on all tables being snapshotted
- For MySQL/SQL Server: confirm `SELECT`, `INSERT`, and `UPDATE` privileges on the `streamkap.streamkap_signal` table
- For PostgreSQL: confirm the user has access to the publication and the signal table
- Re-trigger the snapshot after granting the necessary permissions
Connection
Connection errors arise from network issues, timeouts, SSL/TLS configuration problems, or firewall rules blocking access.

MySQL server has gone away
Cause: The MySQL server closed the connection, commonly due to long-running snapshot queries, a low `wait_timeout`, or network instability.

Resolution:
- Increase `wait_timeout` on the MySQL server (e.g., `SET GLOBAL wait_timeout = 28800;`)
- Check network stability between Streamkap and the MySQL server
- If the error occurs during snapshots, consider using filtered (partial) snapshots to reduce operation time
- Verify firewall rules and security groups allow persistent connections
PostgreSQL connection failures
Cause: The connection is rejected due to `pg_hba.conf` rules, firewall restrictions, or SSL configuration.

Resolution:
- Verify `pg_hba.conf` includes an entry allowing the Streamkap IP addresses with the correct authentication method
- Check firewall rules and security groups allow traffic on the PostgreSQL port (default 5432)
- Ensure SSL is correctly configured if required
- Confirm Streamkap IP addresses are allowlisted
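An entry of this shape is a minimal sketch; the user name and CIDR range are placeholders for your connector user and the actual Streamkap IP addresses:

```
# TYPE   DATABASE  USER            ADDRESS          METHOD
hostssl  all       streamkap_user  203.0.113.0/28   scram-sha-256
```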
Oracle connection failures — listener or TNS
- Check the Oracle listener status on the database server (`lsnrctl status`)
- Verify the TNS configuration (hostname, port, service name) in the Streamkap connector settings
- Ensure firewall rules allow traffic on the Oracle listener port (default 1521)
- For AWS RDS Oracle, verify the endpoint and port from the RDS console
SSL certificate verification failed
- Check the database SSL certificate details using the `openssl` commands in the SSL Certificate Management Guide
- If the certificate uses RSA 1024-bit or SHA-1, upgrade to RSA 2048-bit+ with SHA-256
- For cloud-managed databases (AWS RDS, Azure, GCP), check if a certificate rotation is needed
- Update the Streamkap connector if the SSL mode or certificate path has changed
SSL connection closed by peer (Kafka)
- Ensure certificates are properly configured for your client:
  - Python: `pip install --upgrade certifi` and set `ssl.ca.location` to `certifi.where()`
  - CLI tools: try different certificate paths (e.g., `/etc/ssl/cert.pem` or `/etc/ssl/certs/ca-certificates.crt`)
- Test the SSL handshake:
- Verify your network does not have a VPN or proxy interfering with SSL
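The handshake test can be sketched with `openssl` (the bootstrap hostname is a placeholder for your actual Kafka endpoint):

```shell
openssl s_client -connect <BOOTSTRAP_HOST>:9092 -servername <BOOTSTRAP_HOST> </dev/null
```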
SASL authentication failed (Kafka)
- Verify the username and password are correct
- Ensure `sasl.mechanism` is set to `PLAIN` and `security.protocol` is set to `SASL_SSL`
- Confirm the Kafka user account is active and not disabled
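A minimal sketch of the client settings, using librdkafka-style property names; the bootstrap servers and credentials are placeholders:

```python
# Hypothetical client configuration for a SASL_SSL Kafka endpoint.
config = {
    "bootstrap.servers": "<BOOTSTRAP_SERVERS>",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "<USERNAME>",
    "sasl.password": "<PASSWORD>",
}
```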
Network timeout or connectivity loss during snapshot
- Verify network connectivity to your source database
- Check firewall rules and security group configurations
- Ensure any VPN or SSH tunnel is active and stable
- Once connectivity is restored, re-trigger the snapshot
MongoDB connection failures
- Verify firewall rules allow Streamkap IPs to connect
- Confirm SSL is enabled and the connection string includes `?ssl=true`
- Check the authentication credentials and database name
- For MongoDB Atlas, ensure Streamkap IP addresses are in the IP access list
DynamoDB ThrottlingException
- No action needed — the connector auto-recovers within 10-30 minutes via exponential backoff
- If the error persists beyond 30 minutes, verify that no other consumers are competing for the same DynamoDB Stream
- Contact Streamkap support if throttling is persistent and impacting data freshness
DocumentDB connection failures
- Verify the VPC security group allows inbound traffic from Streamkap IPs on port 27017
- Confirm SSL is enabled and the connection string includes `?ssl=true&replicaSet=rs0`
- Check IAM roles and database user credentials
- Ensure the DocumentDB cluster is in an active state
Replication
Replication errors relate to WAL/binlog/redo log/oplog issues, replication slot problems, and CDC configuration failures.

PostgreSQL WAL buildup — disk space growing
Symptom: Disk usage grows steadily and `pg_replication_slots` shows large lag values.

Cause: Inactive or slow replication slots prevent WAL files from being recycled. Low-traffic databases without heartbeats can also cause WAL accumulation because the replication slot position is not advancing.

Resolution:
- Enable heartbeats to keep replication slot positions advancing on low-traffic databases
- Monitor replication slots and drop inactive ones:
- Set WAL retention to 3-5 days
- Ensure `VACUUM` and `ANALYZE` run regularly
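The slot check and cleanup can be sketched as follows (the slot name is hypothetical; drop a slot only once you are sure no connector still uses it):

```sql
-- How much WAL each slot is holding back
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;

-- Drop a confirmed-inactive slot
SELECT pg_drop_replication_slot('old_unused_slot');
```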
MySQL binlog buildup or missing events
Cause: The binlog format is not set to ROW.

Resolution:
- Ensure `binlog_format=ROW` and `binlog_row_image=FULL`
- Enable heartbeats for low-traffic databases
- Set binlog retention to 3-5 days
- Verify the connector user has `REPLICATION SLAVE` and `REPLICATION CLIENT` privileges
- Check that the target tables are included in the connector configuration
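The binlog settings can be verified as follows (the retention statement assumes MySQL 8.0 on a self-managed server; the value shown is 5 days):

```sql
SHOW VARIABLES LIKE 'binlog_format';     -- expect ROW
SHOW VARIABLES LIKE 'binlog_row_image';  -- expect FULL
SET GLOBAL binlog_expire_logs_seconds = 432000;  -- 5 days
```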
Oracle redo log buildup
- Enable heartbeats for low-traffic databases
- Monitor archive log destination space and adjust retention (3-5 days minimum)
- Verify supplemental logging is enabled at database and table levels:
- Check LogMiner session resource usage and limit captured tables if needed
SQL Server CDC not working — SQL Server Agent stopped
- Verify the SQL Server Agent is running:
- Start the SQL Server Agent if stopped
- Verify CDC is enabled on the database and tables:
- Check that the connector user has the required role membership
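The checks above can be sketched as follows (the database name is illustrative):

```sql
-- Agent status
SELECT servicename, status_desc
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server Agent%';

-- CDC enabled at database and table level
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'MyDb';
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE t.is_tracked_by_cdc = 1;
```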
SQL Server CDC data loss after database restore
- Re-enable CDC on the database:
- Re-enable CDC on each table that was previously captured
- Trigger a new snapshot to backfill any data lost during the restore
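Re-enabling CDC can be sketched as follows (database, schema, and table names are illustrative):

```sql
USE MyDb;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
  @source_schema = N'dbo',
  @source_name = N'orders',
  @role_name = NULL;
```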
MongoDB resume token expired or invalidated
- Set oplog retention to at least 3-5 days (7 days recommended for DocumentDB)
- Enable heartbeats to keep resume tokens fresh
- If the resume token is invalid, trigger a new snapshot to re-establish position
- For sharded clusters, verify oplog retention on all shards
PostgreSQL missing events — publication or REPLICA IDENTITY
Cause: The publication does not include the affected tables, or REPLICA IDENTITY is set to DEFAULT (only logs primary key columns for updates/deletes).

Resolution:
- Verify the publication includes the target tables:
- Set `REPLICA IDENTITY FULL` for complete before/after images
- Check that the connector’s table inclusion list matches the publication
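These checks can be sketched as follows (the publication and table names are illustrative):

```sql
-- Tables currently in the publication
SELECT * FROM pg_publication_tables WHERE pubname = 'streamkap_pub';

-- Add a missing table, then log full before/after images for it
ALTER PUBLICATION streamkap_pub ADD TABLE public.orders;
ALTER TABLE public.orders REPLICA IDENTITY FULL;
```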
Oracle missing events — supplemental logging
Cause: Supplemental logging is not enabled at the database level, or is not enabled with ALL COLUMNS at the table level.

Resolution:
- Enable supplemental logging at database level:
- Enable supplemental logging with ALL COLUMNS at table level:
- Verify LogMiner privileges are granted to the Streamkap user
- Confirm archive log mode is enabled
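The logging setup can be sketched as follows (the table name is illustrative):

```sql
-- Database level
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Table level, with ALL COLUMNS
ALTER TABLE inventory.orders ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Verify
SELECT supplemental_log_data_min FROM v$database;
```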
MongoDB ChangeStreamFatalError — resume token expired
- Contact Streamkap support for an offset reset
- Increase oplog retention to 48 hours or more to prevent recurrence:
- After the offset reset, a snapshot may be required to backfill any missed data
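On MongoDB 4.4+ the retention increase can be sketched from the mongosh shell (run against the replica set primary; apply to each member as needed):

```javascript
db.adminCommand({ replSetResizeOplog: 1, minRetentionHours: 48 })
```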
PostgreSQL — Failed to re-select row (TOAST data timeout)
- Contact Streamkap support to configure retriable exception handling for your connector
- Consider setting `REPLICA IDENTITY FULL` on affected tables to avoid TOAST reselection
Schema changes during snapshot
- Avoid making schema changes to tables that are actively being snapshotted
- If a schema change was applied during a snapshot, cancel the snapshot and re-trigger it after the schema change is complete
- Wait for the DDL change to propagate before triggering a new snapshot
Transform
Transform errors occur during data transformation in Apache Flink, including JavaScript runtime failures, DLQ routing, and job management issues.

Transform status shows RESTARTING continuously
Symptom: The transform restarts repeatedly and never settles into a stable RUNNING state.

Cause: The transform code contains a runtime error, the input topic pattern matches no topics, or the project lacks sufficient resources for the configured parallelism.

Resolution:
- Check transform logs on the Logs page for specific error messages
- Verify the input pattern matches existing topics
- Test the transform logic in the Implementation tab with sample data
- Reduce parallelism to check if it is a resource issue
- Revert recent settings or code changes that may have caused the issue
Transform producing no output records
Symptom: The transform shows RUNNING status but the written records count is zero.

Cause: The input pattern regex does not match any actual topic names, the transform logic filters out all records, or the input topics are empty.

Resolution:
- Verify the input pattern regex matches actual topic names
- Confirm the transform logic does not filter out all records
- Review input topics to ensure they contain data
- Check the Errors metric and review logs for silent failures
- Test the implementation with sample data in the Implementation tab
Transform high latency
- Increase parallelism in the Settings tab (set to at least the number of input topic partitions)
- Optimize the JavaScript transform logic — reduce unnecessary operations
- Check input topics for excessive consumer lag
- Increase partitions on input topics for better parallelism
Transform DLQ messages — failed processing
- Inspect the DLQ topic messages to identify the failing records and error details
- Check headers for `_streamkap_error` or `__connect.errors.exception.message`
- Fix the transform logic to handle edge cases (null values, unexpected types)
- Add error handling in the transform code to prevent job-level failures
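A sketch of defensive handling in a JavaScript transform; the field name `amount` and the convention that returning null filters out the record are assumptions for illustration:

```javascript
function transform(record) {
  const value = (record && record.value) ? record.value : {};
  // Default a missing/null field instead of letting the destination reject it
  const amount = value.amount == null ? 0 : Number(value.amount);
  if (Number.isNaN(amount)) {
    // Assumption: returning null drops the record rather than failing the job
    return null;
  }
  return { ...record, value: { ...value, amount } };
}
```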
Resource
Resource errors occur when system resources (memory, disk, compute) are exhausted or when internal timeouts are exceeded.

Could not acquire minimum required resources
- Reduce connector parallelism (lower the number of tasks)
- Contact Streamkap support to discuss scaling options
- Review whether other connectors on the same project can be optimized to free resources
Kafka Connect API call timed out
- Retry the operation — transient timeouts often resolve on subsequent attempts
- If persistent, check the connector’s resource allocation and reduce parallelism
- Contact Streamkap support if the issue persists after multiple retries
Snowflake 503 / NullPointerException — transient API overload
- Do NOT restart the connector. The connector’s built-in retry mechanism with exponential backoff handles recovery automatically
- Recovery typically occurs within 5-30 minutes
- Monitor the pipeline — if the error persists beyond 30 minutes, contact Streamkap support
Snowflake offset misalignment (Append mode)
- Stop the destination connector in Streamkap UI
- Reset Consumer Group offsets via Consumer Groups Reset Procedure
- Reset Snowflake Channel offsets to `-1`
- Resume the destination connector
- Verify data appears in the destination and lag decreases
Size and limit errors — payload too large
- Identify the oversized field in the DLQ message payload
- Increase the column size limit at the destination if possible
- Consider adding a transform to truncate or filter oversized values before they reach the destination
- Review the destination connector’s batch size settings
Source database overload during snapshot
- Schedule the snapshot during off-peak hours to reduce contention
- Use filtered (partial) snapshots to process smaller data ranges
- Review the source database’s connection limits and increase if needed
- Check for long-running queries or locks that may block snapshot reads
- Re-trigger the snapshot after the source database has recovered
Disk space or storage exhaustion
- Check available disk space on the source database server
- For cloud-managed databases, verify storage auto-scaling is enabled or increase provisioned storage
- Review destination storage capacity
- Clean up unnecessary data, logs, or temporary files
- Re-trigger the operation after freeing sufficient resources
ClickHouse performance lag
- Increase Maximum poll records in the connector’s Advanced settings (e.g., `25000`, `50000`, or `80000`)
- Increase topic partitions to at least `5` on the Topics page
- Increase the Tasks setting to allow more parallel processing
- Adjust settings incrementally and monitor the impact
Database not found during incremental snapshot
- Verify the database name in the source connector configuration matches an existing database
- If the database was renamed, update the connector configuration with the new name
- Re-trigger the snapshot after correcting the configuration
Schema history timeout on large instances
- Enable Capture Only Captured Databases DDL in the source’s Advanced settings
- Enable Capture Only Captured Tables DDL to limit schema history to only the tables you capture
- Consider restricting the database user’s access to only the captured databases and tables
Related Resources
- DLQ Operations — Monitor, inspect, and resolve failed messages in your pipelines
- Snapshots & Backfilling — Snapshot lifecycle, failure recovery, and snapshotting after schema changes
- Schema Evolution — How Streamkap handles schema changes between source and destination
- Monitoring the PostgreSQL WAL Log — Track WAL growth, replication lag, and slot status
- SSL Certificate Management — Identify and update SSL certificates for database connections
- Pipelines — Pipeline status, performance metrics, and troubleshooting
- Streaming Transforms — Transform job management and troubleshooting
- Kafka Access — Kafka connectivity, authentication, and ACL troubleshooting
- Logs — View detailed connector and pipeline logs
- Alerts — Configure notifications for pipeline errors and DLQ activity