This page consolidates known errors from across Streamkap sources, destinations, transforms, and pipelines into a single searchable reference. Each entry includes the error message, its cause, and the steps to resolve it. For errors specific to a particular source or destination, cross-links point to the relevant connector documentation for additional context.
Use your browser’s search (Ctrl+F / Cmd+F) to find a specific error message quickly.

Schema

Schema errors occur when there is a mismatch between the source data schema and what the destination expects — including type conflicts, evolution failures, and unsupported data types.
Error message:
Column is of type double precision but expression is of type character varying
Cause: The source column type was altered (e.g., INTEGER changed to TEXT), or the destination column type does not match the incoming data type. This commonly happens after a schema change at the source that was not reflected at the destination.
Resolution:
  1. Compare the source schema with the destination table schema
  2. Alter the destination column to match the expected type, or enable schema evolution to handle new columns automatically
  3. If using transforms, verify the transform output schema aligns with the destination
  4. Check the DLQ topic for affected messages and confirm new data flows correctly after the fix
Related: Schema Evolution | DLQ Operations
Error message:
Cannot encode decimal with precision 44 as max precision 38
Cause: The source database contains a decimal value whose precision exceeds the maximum precision (38) supported by the destination or the Kafka Connect Decimal logical type.
Resolution:
  1. Contact Streamkap support to adjust how decimal values are handled on your source connector — Streamkap can configure it to convert decimals to floating-point numbers or represent them as strings, avoiding precision overflow
  2. Review the source table’s column definitions for excessively large precision values
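As a quick way to spot such values outside the pipeline, here is a minimal Python sketch; the MAX_PRECISION constant and helper names are illustrative, not part of Streamkap:

```python
from decimal import Decimal

MAX_PRECISION = 38  # Kafka Connect Decimal / common destination limit

def precision(d: Decimal) -> int:
    """Number of significant digits in a Decimal."""
    return len(d.as_tuple().digits)

def safe_encode(d: Decimal):
    """Convert oversized decimals to a string (lossless) instead of failing."""
    if precision(d) > MAX_PRECISION:
        return str(d)  # or float(d) if lossy conversion is acceptable
    return d

wide = Decimal("1234567890123456789012345678901234567890.1234")  # precision 44
assert precision(wide) == 44
assert isinstance(safe_encode(wide), str)
assert isinstance(safe_encode(Decimal("42.5")), Decimal)
```

This mirrors the string-conversion option Streamkap support can enable on the source connector: values that would overflow are carried as strings rather than rejected.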
Error message:
Insufficient privileges
Cause: Schema evolution in Snowflake requires the OWNERSHIP privilege on the target tables, not just ALTER. When Snowpipe Streaming detects new columns in incoming data, it must alter the destination table, which Snowflake restricts to the table owner.
Resolution:
  1. Grant OWNERSHIP on existing and future tables to the Streamkap role:
    GRANT OWNERSHIP ON ALL TABLES IN SCHEMA <database>.<schema>
      TO ROLE STREAMKAP_ROLE COPY CURRENT GRANTS;
    
    GRANT OWNERSHIP ON FUTURE TABLES IN SCHEMA <database>.<schema>
      TO ROLE STREAMKAP_ROLE COPY CURRENT GRANTS;
    
  2. Alternatively, pre-create destination tables with the expected columns and manage schema changes manually
  3. Restart the destination connector after applying permission changes
Related: Snowflake Schema Evolution Permissions | Schema Evolution
Error message (varies by destination):
null value in column "X" violates not-null constraint
Cause: The source contains NULL values in a column that the destination defines as NOT NULL, a transform removed or nullified a required field, or schema evolution added a new NOT NULL column at the destination without a default value.
Resolution:
  1. Identify the column with the null violation from the error message
  2. Either relax the NOT NULL constraint at the destination or add a default value
  3. If the source legitimately contains nulls, add a transform to provide a default value before delivery
  4. Verify that schema evolution settings are compatible between source and destination
Related: DLQ Operations — Null Constraint Violations
Error message:
Two capture instances already exist for source table
Cause: SQL Server limits each table to a maximum of 2 CDC capture instances. This error occurs when trying to refresh the change table structure (required after a schema change) while 2 capture instances already exist.
Resolution:
  1. Identify the oldest capture instance by running:
    USE {database};
    GO
    EXEC sys.sp_cdc_help_change_data_capture
      @source_schema = N'{schema}',
      @source_name   = N'{table}'
    GO
    
  2. Disable the oldest capture instance (use the create_date column to identify it):
    USE {database};
    GO
    EXEC sys.sp_cdc_disable_table
      @source_schema    = N'{source_schema}',
      @source_name      = N'{source_table}',
      @capture_instance = N'{capture_instance}'
    GO
    
  3. Retry the change table refresh script
Related: SQL Server Source FAQ
Error message (varies):
Renaming table(s) will report an error
Cause: Renaming a source table is not automatically tracked by schema evolution. The renamed table appears as a new table and requires explicit reconfiguration.
Resolution:
  1. Add the renamed table to your Source Connector and Pipeline configuration
  2. Trigger a snapshot for the newly added table
  3. The old table’s destination data remains intact but stops receiving new change events
Related: Schema Evolution | Snapshots
Error message:
Schema isn't known to this connector
Cause: Affects databases with 1000+ tables when schema history optimization is enabled. The schema history may become incomplete, causing the connector to encounter tables or schemas it cannot resolve.
Resolution:
  1. Contact Streamkap support for schema history recovery
  2. Consider enabling Capture Only Captured Tables DDL in the source’s Advanced settings to limit schema history scope
  3. After recovery, a snapshot of affected tables may be required
Related: MySQL Source FAQ
Error message:
Ambiguous method overloading for BigDecimal
Cause: A transform function receives a decimal value that causes ambiguous method resolution in the JavaScript transform runtime.
Resolution:
  1. Use explicit type casting in the transform code to convert the decimal to a specific type (e.g., Number() or .toString())
  2. Contact Streamkap support to check how decimals are handled on the source connector — adjusting the decimal handling can prevent this conflict
  3. Test the transform with sample data in the Implementation tab before deploying
Related: Streaming Transforms | Transform Filter Records

Permission

Permission errors occur when the connector’s database user or cloud IAM role lacks the required privileges to perform an operation.
Error message:
Insufficient privileges
Cause: The Snowflake user or role used by the connector does not have the required privileges on the target object (warehouse, database, schema, or table). This is common during initial setup or when schema evolution requires OWNERSHIP on tables.
Resolution:
  1. Verify the Snowflake role has the required privileges by running Script #2 from the Snowflake troubleshooting section
  2. For schema evolution, grant OWNERSHIP on tables (see Schema Evolution Permissions)
  3. Ensure the role is granted to the user and set as the default role:
    GRANT ROLE IDENTIFIER($role_name) TO USER IDENTIFIER($user_name);
    ALTER USER IDENTIFIER($user_name) SET DEFAULT_ROLE = $role_name;
    
Related: Snowflake Setup | Snowflake Setup Scripts Failing
Error message:
AccessDenied when calling AssumeRole
Cause: The IAM role trust policy does not include Streamkap’s role ARN, or the IAM role does not have the required permissions for the target AWS service (S3, DynamoDB, etc.).
Resolution:
  1. Update the IAM role’s trust policy to include Streamkap’s external ID and role ARN
  2. Verify the IAM role has the required service permissions (e.g., s3:PutObject, dynamodb:DescribeStream)
  3. Confirm the role ARN in the Streamkap connector configuration matches the actual IAM role
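For reference, a trust policy with the external-ID condition has the following shape. The role ARN and external ID below are placeholders — substitute the actual values shown in your Streamkap connector setup:

```python
import json

# Placeholders — use the values from your Streamkap connector setup page.
STREAMKAP_ROLE_ARN = "arn:aws:iam::123456789012:role/streamkap-connector"  # placeholder
EXTERNAL_ID = "your-streamkap-external-id"                                 # placeholder

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": STREAMKAP_ROLE_ARN},
            "Action": "sts:AssumeRole",
            # The external-ID condition prevents the confused-deputy problem.
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```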
Error message (varies by database):
Access denied for user 'X'@'Y' to database 'Z'
permission denied for table X
Cause: The database user configured for the Streamkap source connector lacks the required privileges (e.g., REPLICATION, SELECT, or CDC-specific permissions).
Resolution:
  1. Grant the missing privileges to the connector’s database user, following the relevant source setup guide
  2. Restart the source connector after applying the grants
Error message:
Topic authorization failed
TOPIC_AUTHORIZATION_FAILED
Cause: The Kafka user is missing TOPIC READ or TOPIC WRITE ACL permissions for the target topic.
Resolution:
  1. Add the appropriate ACL permission:
    • Resource Type: TOPIC
    • Operation: READ (for consumers) or WRITE (for producers)
    • Pattern Type: LITERAL or PREFIXED
    • Name: Your topic name or prefix
Related: Kafka Access
Error message:
GROUP_AUTHORIZATION_FAILED
Group authorization failed
Cause: The Kafka user is missing GROUP READ ACL permissions for the consumer group.
Resolution:
  1. Add the following ACL permission:
    • Resource Type: GROUP
    • Operation: READ
    • Pattern Type: LITERAL or PREFIXED
    • Name: Your consumer group ID (e.g., my-consumer-group)
This only affects consumers using consumer groups (Python consumers, CLI tools).
Related: Kafka Access
Error message (varies):
Access denied
Insufficient privileges to perform snapshot
Cause: The database user lacks SELECT or READ privileges on one or more tables being snapshotted, or the signal table permissions are missing.
Resolution:
  1. Verify the connector’s database user has SELECT privileges on all tables being snapshotted
  2. For MySQL/SQL Server: confirm SELECT, INSERT, and UPDATE privileges on the streamkap.streamkap_signal table
  3. For PostgreSQL: confirm the user has access to the publication and the signal table
  4. Re-trigger the snapshot after granting the necessary permissions
Related: Snapshots — Insufficient Permissions

Connection

Connection errors arise from network issues, timeouts, SSL/TLS configuration problems, or firewall rules blocking access.
Error message:
MySQL server has gone away
Cause: The MySQL connection timed out, typically due to a long-running snapshot, an idle connection exceeding wait_timeout, or network instability.
Resolution:
  1. Increase wait_timeout on the MySQL server (e.g., SET GLOBAL wait_timeout = 28800;)
  2. Check network stability between Streamkap and the MySQL server
  3. If the error occurs during snapshots, consider using filtered (partial) snapshots to reduce operation time
  4. Verify firewall rules and security groups allow persistent connections
Related: MySQL Source FAQ — Troubleshooting
Error message (varies):
Connection refused
FATAL: no pg_hba.conf entry for host
Cause: The PostgreSQL server is rejecting the connection due to pg_hba.conf rules, firewall restrictions, or SSL configuration.
Resolution:
  1. Verify pg_hba.conf includes an entry allowing the Streamkap IP addresses with the correct authentication method
  2. Check firewall rules and security groups allow traffic on the PostgreSQL port (default 5432)
  3. Ensure SSL is correctly configured if required
  4. Confirm Streamkap IP addresses are allowlisted
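An example pg_hba.conf entry of this kind — the user name and address range are placeholders; substitute the Streamkap IP addresses for your region and your connector’s database user:

```
# TYPE   DATABASE  USER            ADDRESS          METHOD
hostssl  all       streamkap_user  203.0.113.0/24   scram-sha-256
```

The hostssl type requires SSL connections; reload PostgreSQL configuration (e.g., SELECT pg_reload_conf();) after editing the file.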
Related: PostgreSQL Source FAQ — Troubleshooting
Error message (varies):
ORA-12541: TNS:no listener
ORA-12514: TNS:listener does not currently know of service
Cause: The Oracle listener is not running, the TNS configuration is incorrect, or firewall rules are blocking access on the configured port.
Resolution:
  1. Check the Oracle listener status on the database server (lsnrctl status)
  2. Verify the TNS configuration (hostname, port, service name) in the Streamkap connector settings
  3. Ensure firewall rules allow traffic on the Oracle listener port (default 1521)
  4. For AWS RDS Oracle, verify the endpoint and port from the RDS console
Related: Oracle Source FAQ — Troubleshooting
Error message (varies):
Certificate verify failed
SSL handshake failed due to weak encryption algorithm
Certificates do not conform to algorithm constraints
Cause: The database instance is using an outdated, weak, or expired SSL certificate whose encryption algorithm or key size does not meet the minimum requirements.
Resolution:
  1. Check the database SSL certificate details using the openssl commands in the SSL Certificate Management Guide
  2. If the certificate uses RSA 1024-bit or SHA-1, upgrade to RSA 2048-bit+ with SHA-256
  3. For cloud-managed databases (AWS RDS, Azure, GCP), check if a certificate rotation is needed
  4. Update the Streamkap connector if the SSL mode or certificate path has changed
Related: SSL Certificate Management Guide
Error message:
SSL connection closed by peer
Cause: SSL certificate verification is failing when connecting to the Streamkap Kafka cluster, or the client’s certificate configuration is incorrect.
Resolution:
  1. Ensure certificates are properly configured for your client:
    • Python: pip install --upgrade certifi and set ssl.ca.location to certifi.where()
    • CLI tools: Try different certificate paths (e.g., /etc/ssl/cert.pem or /etc/ssl/certs/ca-certificates.crt)
  2. Test the SSL handshake:
    openssl s_client -connect <hostname>:32400 -servername <hostname>
    
  3. Verify your network does not have a VPN or proxy interfering with SSL
Related: Kafka Access
Error message:
SASL authentication failed
Cause: Incorrect Kafka username or password, or the SASL mechanism is not configured correctly.
Resolution:
  1. Verify the username and password are correct
  2. Ensure sasl.mechanism is set to PLAIN and security.protocol is set to SASL_SSL
  3. Confirm the Kafka user account is active and not disabled
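A minimal Python sketch of a librdkafka-style client configuration that satisfies these settings — the hostname, credentials, and group id are placeholders, and the port 32400 is the one shown in the SSL handshake test above:

```python
# Sketch of a client configuration for Streamkap's Kafka endpoint.
try:
    import certifi                     # recommended CA bundle for Python clients
    ca_location = certifi.where()
except ImportError:
    ca_location = "/etc/ssl/cert.pem"  # common fallback path

conf = {
    "bootstrap.servers": "<hostname>:32400",  # placeholder hostname
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "<kafka-username>",      # placeholder
    "sasl.password": "<kafka-password>",      # placeholder
    "ssl.ca.location": ca_location,
    "group.id": "my-consumer-group",          # requires a GROUP READ ACL
}

# With confluent-kafka installed, this dict is passed straight to the client:
#   from confluent_kafka import Consumer
#   consumer = Consumer(conf)
```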
Related: Kafka Access
Error message (varies):
Connection timed out
Read timed out
Cause: Network interruption between Streamkap and your source database during a snapshot — a firewall change, VPN drop, or transient cloud networking issue.
Resolution:
  1. Verify network connectivity to your source database
  2. Check firewall rules and security group configurations
  3. Ensure any VPN or SSH tunnel is active and stable
  4. Once connectivity is restored, re-trigger the snapshot
Related: Snapshots — Failed Snapshot Recovery
Error message (varies):
Connection refused
Authentication failed
Cause: Firewall, SSL configuration, or authentication issues between Streamkap and the MongoDB cluster.
Resolution:
  1. Verify firewall rules allow Streamkap IPs to connect
  2. Confirm SSL is enabled and the connection string includes ?ssl=true
  3. Check the authentication credentials and database name
  4. For MongoDB Atlas, ensure Streamkap IP addresses are in the IP access list
Related: MongoDB Source FAQ — Troubleshooting
Error message:
ThrottlingException
Cause: DynamoDB Streams throughput limit exceeded. This is a transient error that occurs when the connector reads from DynamoDB Streams faster than the service allows.
Resolution:
  1. No action needed — the connector auto-recovers within 10-30 minutes via exponential backoff
  2. If the error persists beyond 30 minutes, verify that no other consumers are competing for the same DynamoDB Stream
  3. Contact Streamkap support if throttling is persistent and impacting data freshness
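No client-side action is required, but for intuition, exponential backoff of this kind looks roughly like the following sketch; the function name and parameters are illustrative, not the connector’s actual implementation:

```python
import random

def backoff_delays(base=1.0, cap=300.0, attempts=8, jitter=False):
    """Illustrative exponential backoff: the delay doubles per attempt up to a cap."""
    delays = []
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            delay = random.uniform(0, delay)  # full jitter spreads retries out
        delays.append(delay)
    return delays

assert backoff_delays()[:4] == [1.0, 2.0, 4.0, 8.0]
assert max(backoff_delays(attempts=12)) == 300.0  # capped, never unbounded
```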
Related: DynamoDB Source
Error message (varies):
Connection refused
VPC security group denied
Cause: VPC security group rules, SSL configuration, or IAM authentication issues prevent Streamkap from connecting to the DocumentDB cluster.
Resolution:
  1. Verify the VPC security group allows inbound traffic from Streamkap IPs on port 27017
  2. Confirm SSL is enabled and the connection string includes ?ssl=true&replicaSet=rs0
  3. Check IAM roles and database user credentials
  4. Ensure the DocumentDB cluster is in an active state
Related: DocumentDB Source FAQ — Troubleshooting

Replication

Replication errors relate to WAL/binlog/redo log/oplog issues, replication slot problems, and CDC configuration failures.
Symptoms: Disk usage on the PostgreSQL server is growing rapidly; pg_replication_slots shows large lag values.
Cause: Inactive or slow replication slots prevent WAL files from being recycled. Low-traffic databases without heartbeats can also cause WAL accumulation because the replication slot position is not advancing.
Resolution:
  1. Enable heartbeats to keep replication slot positions advancing on low-traffic databases
  2. Monitor replication slots and drop inactive ones:
    SELECT slot_name, active,
      pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS lag
    FROM pg_replication_slots;
    
    -- Drop an inactive slot
    SELECT pg_drop_replication_slot('{slot_name}');
    
  3. Set WAL retention to 3-5 days
  4. Ensure VACUUM and ANALYZE run regularly
Related: Monitoring the PostgreSQL WAL Log | PostgreSQL Source FAQ
Symptoms: Binlog files are accumulating and consuming disk space; expected change events are not appearing in the destination.
Cause: Binlog retention is too short (events expire before being consumed), heartbeats are not enabled for low-traffic databases, or the binlog format is not set to ROW.
Resolution:
  1. Ensure binlog_format=ROW and binlog_row_image=FULL
  2. Enable heartbeats for low-traffic databases
  3. Set binlog retention to 3-5 days
  4. Verify the connector user has REPLICATION SLAVE and REPLICATION CLIENT privileges
  5. Check that the target tables are included in the connector configuration
Related: MySQL Source FAQ — Troubleshooting
Symptoms: Archive log destination disk space is growing; LogMiner sessions are consuming resources.
Cause: Heartbeats are not enabled for low-traffic databases, archive log retention is too long, or supplemental logging overhead is high.
Resolution:
  1. Enable heartbeats for low-traffic databases
  2. Monitor archive log destination space and adjust retention (3-5 days minimum)
  3. Verify supplemental logging is enabled at database and table levels:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    ALTER TABLE schema.table ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    
  4. Check LogMiner session resource usage and limit captured tables if needed
Related: Oracle Source FAQ — Troubleshooting
Symptoms: No change events are being captured; change tables are not being populated.
Cause: The SQL Server Agent service is stopped. CDC relies on the SQL Server Agent to populate change tables and run cleanup jobs.
Resolution:
  1. Verify the SQL Server Agent is running:
    EXEC master.dbo.xp_servicecontrol N'QUERYSTATE', N'SQLSERVERAGENT'
    
  2. Start the SQL Server Agent if stopped
  3. Verify CDC is enabled on the database and tables:
    SELECT name, is_cdc_enabled FROM sys.databases WHERE name = '{database}';
    EXEC sys.sp_cdc_help_change_data_capture;
    
  4. Check that the connector user has the required role membership
Related: SQL Server Source FAQ — Troubleshooting
Symptoms: CDC stops working after a database restore; no change events are captured.
Cause: A database restore operation disables CDC on the database and all tables. CDC must be re-enabled manually after a restore.
Resolution:
  1. Re-enable CDC on the database:
    USE {database};
    EXEC sys.sp_cdc_enable_db;
    
  2. Re-enable CDC on each table that was previously captured
  3. Trigger a new snapshot to backfill any data lost during the restore
Related: SQL Server Source FAQ — Troubleshooting
Symptoms: The connector fails to resume from its last position; logs mention resume token issues.
Cause: The oplog has been rotated past the connector’s last resume token position, typically because the connector was offline for too long or oplog retention is too short.
Resolution:
  1. Set oplog retention to at least 3-5 days (7 days recommended for DocumentDB)
  2. Enable heartbeats to keep resume tokens fresh
  3. If the resume token is invalid, trigger a new snapshot to re-establish position
  4. For sharded clusters, verify oplog retention on all shards
Related: MongoDB Source FAQ — Troubleshooting | DocumentDB Source FAQ — Troubleshooting
Symptoms: Some change events (especially updates and deletes) are not appearing in the destination.
Cause: The PostgreSQL publication does not include the affected tables, or REPLICA IDENTITY is set to DEFAULT (only logs primary key columns for updates/deletes).
Resolution:
  1. Verify the publication includes the target tables:
    SELECT * FROM pg_publication_tables WHERE pubname = 'streamkap_pub';
    
  2. Set REPLICA IDENTITY FULL for complete before/after images:
    ALTER TABLE schema.table REPLICA IDENTITY FULL;
    
  3. Check that the connector’s table inclusion list matches the publication
Related: PostgreSQL Source FAQ
Symptoms: Change events for updates or deletes are incomplete (only primary key values captured); some tables produce no events.
Cause: Supplemental logging is not enabled at the database level or with ALL COLUMNS at the table level.
Resolution:
  1. Enable supplemental logging at database level:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    
  2. Enable supplemental logging with ALL COLUMNS at table level:
    ALTER TABLE schema.table ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    
  3. Verify LogMiner privileges are granted to the Streamkap user
  4. Confirm archive log mode is enabled
Related: Oracle Source FAQ
Error message:
ChangeStreamFatalError (Error 280)
Cause: The MongoDB oplog has been recycled before the connector could process all events. The connector’s resume token points to a position that no longer exists in the oplog.
Resolution:
  1. Contact Streamkap support for an offset reset
  2. Increase oplog retention to 48 hours or more to prevent recurrence:
    db.adminCommand({ replSetResizeOplog: 1, minRetentionHours: 48 })
    
  3. After the offset reset, a snapshot may be required to backfill any missed data
Related: MongoDB Source FAQ | Snapshots
Error message:
Failed to re-select row
Cause: Connection timeout during TOAST data reselection on tables with large TOAST columns. When PostgreSQL stores large column values in TOAST tables, the connector must re-select them during processing, which can time out on tables with very large values.
Resolution:
  1. Contact Streamkap support to configure retriable exception handling for your connector
  2. Consider setting REPLICA IDENTITY FULL on affected tables to avoid TOAST reselection:
    ALTER TABLE schema.table REPLICA IDENTITY FULL;
    
Related: PostgreSQL Source FAQ
Symptoms: Snapshot fails or produces inconsistent data when DDL changes are applied to a table during an active snapshot.
Cause: Schema changes during an active snapshot are not supported. The snapshot process reads data using the schema at the time it started, and a DDL change mid-snapshot causes conflicts.
Resolution:
  1. Avoid making schema changes to tables that are actively being snapshotted
  2. If a schema change was applied during a snapshot, cancel the snapshot and re-trigger it after the schema change is complete
  3. Wait for the DDL change to propagate before triggering a new snapshot
Related: Snapshots — Snapshotting After Schema Changes

Transform

Transform errors occur during data transformation in Apache Flink, including JavaScript runtime failures, DLQ routing, and job management issues.
Symptoms: The transform job continuously restarts and never reaches a stable RUNNING state.
Cause: The transform code contains a runtime error, the input topic pattern matches no topics, or the project lacks sufficient resources for the configured parallelism.
Resolution:
  1. Check transform logs on the Logs page for specific error messages
  2. Verify the input pattern matches existing topics
  3. Test the transform logic in the Implementation tab with sample data
  4. Reduce parallelism to check if it is a resource issue
  5. Revert recent settings or code changes that may have caused the issue
Related: Streaming Transforms — Troubleshooting
Symptoms: The transform shows RUNNING status but the written records count is zero.
Cause: The input pattern regex does not match any actual topic names, the transform logic filters out all records, or the input topics are empty.
Resolution:
  1. Verify the input pattern regex matches actual topic names
  2. Confirm the transform logic does not filter out all records
  3. Review input topics to ensure they contain data
  4. Check the Errors metric and review logs for silent failures
  5. Test the implementation with sample data in the Implementation tab
Related: Streaming Transforms — Troubleshooting
Symptoms: Transform latency is continuously increasing; downstream destinations are falling behind.
Cause: The transform parallelism is too low for the input volume, the JavaScript logic is computationally expensive, or the input topics have excessive consumer lag.
Resolution:
  1. Increase parallelism in the Settings tab (set to at least the number of input topic partitions)
  2. Optimize the JavaScript transform logic — reduce unnecessary operations
  3. Check input topics for excessive consumer lag
  4. Increase partitions on input topics for better parallelism
Related: Streaming Transforms — Troubleshooting
Symptoms: The transform’s DLQ topic is receiving messages; errors are visible in the DLQ message headers.
Cause: Individual records fail during the transform — type mismatches, null values in required fields, or unexpected data formats.
Resolution:
  1. Inspect the DLQ topic messages to identify the failing records and error details
  2. Check headers for _streamkap_error or __connect.errors.exception.message
  3. Fix the transform logic to handle edge cases (null values, unexpected types)
  4. Add error handling in the transform code to prevent job-level failures
Related: DLQ Operations | Streaming Transforms

Resource

Resource errors occur when system resources (memory, disk, compute) are exhausted or when internal timeouts are exceeded.
Error message:
Could not acquire minimum required resources
Cause: The Kafka Connect cluster has reached its capacity limit and cannot allocate resources for the connector or task.
Resolution:
  1. Reduce connector parallelism (lower the number of tasks)
  2. Contact Streamkap support to discuss scaling options
  3. Review whether other connectors on the same project can be optimized to free resources
Error message:
Kafka Connect API call timed out
Cause: An internal API call to the Kafka Connect cluster timed out due to high load, network latency, or resource contention.
Resolution:
  1. Retry the operation — transient timeouts often resolve on subsequent attempts
  2. If persistent, check the connector’s resource allocation and reduce parallelism
  3. Contact Streamkap support if the issue persists after multiple retries
Error message:
503 Service Unavailable
NullPointerException
Cause: Transient Snowflake Streaming API overload. The Snowflake service is temporarily unable to handle the request volume.
Resolution:
  1. Do NOT restart the connector. The connector’s built-in retry mechanism with exponential backoff handles recovery automatically
  2. Recovery typically occurs within 5-30 minutes
  3. Monitor the pipeline — if the error persists beyond 30 minutes, contact Streamkap support
Restarting the connector during a transient 503 error can make the situation worse by triggering additional reconnection overhead. Allow the automatic retry to handle recovery.
Related: Snowflake
Symptoms: New data is not appearing in destination tables; lag shows as negative (e.g., -1) or unusually high; Snowflake channels show offset positions that do not match Consumer Group offsets.
Cause: The Consumer Group offsets and Snowflake Channel offsets have become misaligned, typically after topic deletion and recreation.
Resolution:
  1. Stop the destination connector in Streamkap UI
  2. Reset Consumer Group offsets via Consumer Groups Reset Procedure
  3. Reset Snowflake Channel offsets to -1:
    SELECT SYSTEM$SNOWPIPE_STREAMING_UPDATE_CHANNEL_OFFSET_TOKEN(
        '<DATABASE>.<SCHEMA>.<TABLE_NAME>',
        '<TOPIC_NAME_0>',
        '-1'
    );
    
  4. Resume the destination connector
  5. Verify data appears in the destination and lag decreases
Related: Snowflake — Offset Management
Error message (varies):
Message size exceeds maximum
Column value exceeds maximum length
Cause: A row or column value exceeds the maximum message size supported by the destination, or batch size settings cause aggregated payloads to exceed limits.
Resolution:
  1. Identify the oversized field in the DLQ message payload
  2. Increase the column size limit at the destination if possible
  3. Consider adding a transform to truncate or filter oversized values before they reach the destination
  4. Review the destination connector’s batch size settings
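The truncation idea in step 3 can be sketched as follows. Streamkap transforms run JavaScript, so this Python version only illustrates the logic, and the size limit shown is a placeholder:

```python
MAX_LEN = 16 * 1024 * 1024  # placeholder destination limit (bytes); adjust to yours

def truncate_oversized(record: dict, max_len: int = MAX_LEN) -> dict:
    """Truncate any string field whose UTF-8 size exceeds max_len."""
    out = {}
    for key, value in record.items():
        if isinstance(value, str) and len(value.encode("utf-8")) > max_len:
            # Keep a prefix; errors="ignore" drops a trailing partial character
            # if the cut lands mid-way through a multi-byte sequence.
            out[key] = value.encode("utf-8")[:max_len].decode("utf-8", errors="ignore")
        else:
            out[key] = value
    return out

assert truncate_oversized({"note": "abcdef"}, max_len=3)["note"] == "abc"
assert truncate_oversized({"n": 42}, max_len=3)["n"] == 42
```

Whether truncating, filtering, or routing oversized records to a side topic is appropriate depends on whether the tail of the value matters downstream.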
Related: DLQ Operations — Size and Limit Errors
Symptoms: Snapshot fails partway through; source database logs show slow queries or connection pool exhaustion.
Cause: The snapshot’s read queries compete with the production workload, causing timeouts or resource exhaustion on the source database.
Resolution:
  1. Schedule the snapshot during off-peak hours to reduce contention
  2. Use filtered (partial) snapshots to process smaller data ranges
  3. Review the source database’s connection limits and increase if needed
  4. Check for long-running queries or locks that may block snapshot reads
  5. Re-trigger the snapshot after the source database has recovered
Related: Snapshots — Failed Snapshot Recovery
Symptoms: Snapshot or connector fails with errors related to disk space, memory, or storage limits.
Cause: The source database, destination system, or intermediate storage has run out of available disk space or memory during operations.
Resolution:
  1. Check available disk space on the source database server
  2. For cloud-managed databases, verify storage auto-scaling is enabled or increase provisioned storage
  3. Review destination storage capacity
  4. Clean up unnecessary data, logs, or temporary files
  5. Re-trigger the operation after freeing sufficient resources
Related: Snapshots — Failed Snapshot Recovery
Symptoms: ClickHouse destination lag is continuously growing; records are being written slowly.
Cause: The connector configuration is not optimized for the workload — batch size, parallelism, or topic partitions may be insufficient.
Resolution:
  1. Increase Maximum poll records in the connector’s Advanced settings (e.g., 25000, 50000, or 80000)
  2. Increase topic partitions to at least 5 on the Topics page
  3. Increase the Tasks setting to allow more parallel processing
  4. Adjust settings incrementally and monitor the impact
Error message:
Database 'X' not found
Cause: The database referenced in the incremental snapshot target does not exist, has been renamed, or the connector’s configuration references a stale database name.
Resolution:
  1. Verify the database name in the source connector configuration matches an existing database
  2. If the database was renamed, update the connector configuration with the new name
  3. Re-trigger the snapshot after correcting the configuration
Symptoms: Source connector startup is slow or times out; the schema history topic is very large.
Cause: The source connector records schema structures from all databases and tables in the instance, even those not being captured. For large instances, this causes slow startup and excessive topic growth.
Resolution:
  1. Enable Capture Only Captured Databases DDL in the source’s Advanced settings
  2. Enable Capture Only Captured Tables DDL to limit schema history to only the tables you capture
  3. Consider restricting the database user’s access to only the captured databases and tables