SQL Server Sources FAQ for Streamkap
This FAQ focuses on using SQL Server as a source in Streamkap, including self-hosted setups and cloud variants (AWS RDS SQL Server, Azure SQL Database, Google Cloud SQL for SQL Server). Streamkap’s SQL Server connector provides real-time CDC with managed features like automatic scaling, UI setup, and ETL transformations.
What is a SQL Server source in Streamkap?
A SQL Server source is a connector that uses Change Data Capture (CDC) to stream inserts, updates, and deletes from your SQL Server database into Streamkap in real time.
What SQL Server versions are supported as sources?
- SQL Server 2016+ for basic CDC; 2017+ for enhanced features and Linux support; 2019+ for advanced data types and performance.
- Compatible with SQL Server 2014+ in limited modes.
- Azure SQL Database (all service tiers with CDC support)
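To confirm which version and edition you are connecting to before setup, you can query the server properties; this is a quick check, not a Streamkap-specific requirement:

-- Report the SQL Server build and edition of the connected instance
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('Edition')        AS edition;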
What SQL Server deployments are supported?
- Self-hosted (on-prem/VM)
- AWS RDS SQL Server (Standard, Enterprise, Web editions)
- Azure SQL Database (including Managed Instance)
- Google Cloud SQL for SQL Server
What are the key features of SQL Server sources in Streamkap?
- CDC: Change table-based via SQL Server CDC; captures inserts/updates/deletes
- Snapshots: Ad-hoc/initial backfills using signal table methods; locking or non-locking modes
- Schema Evolution: Requires manual refresh of CDC change tables when source table structure changes
- Heartbeats: Enabled by default to prevent offset staleness in low-traffic databases
- Data Types: Standard mappings (numerics, temporals, strings, binary as bytes/base64/hex, XML, hierarchyid)
- Ingestion Modes: Inserts (append) or upserts
- Security: SSL, Windows/SQL authentication, IP allowlisting
- Monitoring: Latency, lag, CDC cleanup metrics in-app
- Streamkap adds transaction metadata and ad-hoc snapshots with filters
How does CDC work for SQL Server sources?
SQL Server CDC is change table-based: once CDC is enabled on the database and on each source table, a SQL Server Agent capture job reads the transaction log and writes every insert, update, and delete into a change table (one per capture instance). Streamkap continuously reads those change tables and streams the events to your destination.
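To see what a change table exposes, here is a minimal sketch that reads captured rows for a hypothetical capture instance named sales_orders (the cdc.fn_cdc_get_all_changes_* function name is derived from your own capture instance name):

-- Read all captured changes for a hypothetical capture instance 'sales_orders'
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('sales_orders');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

-- __$operation: 1 = delete, 2 = insert, 4 = update (pass N'all update old' instead of N'all' to also get before images)
SELECT *
FROM cdc.fn_cdc_get_all_changes_sales_orders(@from_lsn, @to_lsn, N'all');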
What is CDC cleanup and why does it matter?
- Default retention: 3 days
- Monitor cleanup: Ensure SQL Server Agent is running and cleanup jobs are scheduled
- Storage impact: Change tables can consume significant disk space if cleanup fails
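If the 3-day default does not fit your recovery window, the cleanup job can be adjusted with sys.sp_cdc_change_job; a minimal sketch (retention is specified in minutes, and 7 days is just an example value):

-- Inspect the current capture and cleanup job settings
SELECT * FROM msdb.dbo.cdc_jobs;

-- Raise cleanup retention from the 3-day default (4320 minutes) to 7 days (10080 minutes)
EXEC sys.sp_cdc_change_job @job_type = N'cleanup', @retention = 10080;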
How do snapshots work for SQL Server sources?
- Trigger ad-hoc at source/table level
- Requires a signal table (streamkap_signal) with CDC enabled
- Methods: Blocking (locks tables briefly) or incremental (chunked reads)
- Modes: initial (default), always, initial_only, no_data, when_needed, configuration_based, custom
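A minimal sketch of the signal table, assuming the Debezium-style three-column layout (id, type, data); check Streamkap's setup docs for the exact DDL it expects:

-- Signal table used to trigger ad-hoc snapshots (column sizes are illustrative)
CREATE TABLE dbo.streamkap_signal (
    id   VARCHAR(255)  NOT NULL PRIMARY KEY,
    type VARCHAR(32)   NOT NULL,
    data VARCHAR(2048) NULL
);

-- The signal table itself must be CDC enabled so the connector can see signal rows
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'streamkap_signal',
    @role_name     = NULL;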
What are heartbeats and how do they work?
Heartbeats are periodic, lightweight writes against a dedicated heartbeat object in the source database. They are enabled by default and keep the connector's offsets advancing even when the captured tables see little or no traffic, preventing offset staleness. Configure source database heartbeats as part of the connector setup.
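As a sketch of the idea only (the table name streamkap_heartbeat and the statements below are hypothetical, not Streamkap's exact configuration), a heartbeat is simply a periodic write that produces fresh transaction log activity:

-- Hypothetical heartbeat table; a scheduled job or the connector's heartbeat query updates it on an interval
CREATE TABLE dbo.streamkap_heartbeat (
    id        INT       NOT NULL PRIMARY KEY,
    last_beat DATETIME2 NOT NULL
);
INSERT INTO dbo.streamkap_heartbeat (id, last_beat) VALUES (1, SYSUTCDATETIME());

-- Run periodically: the resulting log records keep connector offsets moving on quiet databases
UPDATE dbo.streamkap_heartbeat SET last_beat = SYSUTCDATETIME() WHERE id = 1;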
What data types are supported?
- Basics: Integers (TINYINT, SMALLINT, INT, BIGINT), floats (REAL, FLOAT), strings (CHAR, VARCHAR, NCHAR, NVARCHAR, TEXT), dates/timestamps (DATE, DATETIME, DATETIME2, DATETIMEOFFSET, TIME)
- Advanced: Binary (BINARY, VARBINARY, IMAGE as bytes/hex), decimals (DECIMAL, NUMERIC, MONEY), XML, uniqueidentifier (GUID)
- Custom: Hierarchyid, spatial types (GEOMETRY, GEOGRAPHY) with limitations
- Unsupported: Non-UTF8/UTF16 encodings, deprecated types (TEXT, NTEXT, IMAGE in older modes), sql_variant without explicit casting
How to handle table schema changes?
- Create a new capture instance using sys.sp_cdc_enable_table with a new @capture_instance name
- Wait for Streamkap to switch to the new capture instance
- Disable the old capture instance using sys.sp_cdc_disable_table
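A sketch of that sequence, using the sales.orders example and the {schema}_{table}_v{N} naming convention described in the schema evolution section below (all names are placeholders):

-- 1. Create a second capture instance that reflects the new table structure
EXEC sys.sp_cdc_enable_table
    @source_schema    = N'sales',
    @source_name      = N'orders',
    @role_name        = NULL,
    @capture_instance = N'sales_orders_v2';

-- 2. Once Streamkap has switched over, remove the old capture instance
EXEC sys.sp_cdc_disable_table
    @source_schema    = N'sales',
    @source_name      = N'orders',
    @capture_instance = N'sales_orders_v1';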
How to monitor CDC for SQL Server sources?
- Check SQL Server Agent status: EXEC master.dbo.xp_servicecontrol N'QUERYSTATE', N'SQLSERVERAGENT'
- Monitor CDC log scan activity and latency: SELECT * FROM sys.dm_cdc_log_scan_sessions
- Check cleanup job status: SELECT * FROM msdb.dbo.cdc_jobs
- Monitor lag via Streamkap in-app metrics
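To keep an eye on the storage impact called out under CDC cleanup, here is a minimal sketch that approximates row counts per change table (change tables live in the cdc schema and are named <capture_instance>_CT):

SELECT t.name      AS change_table,
       SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.partitions AS p
  ON p.object_id = t.object_id AND p.index_id IN (0, 1)
WHERE SCHEMA_NAME(t.schema_id) = 'cdc'
  AND t.name LIKE '%[_]CT'
GROUP BY t.name
ORDER BY row_count DESC;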
Can CDC capture database Views, Materialized Views, and other virtual objects?
CDC captures changes by reading the database transaction log (SQL Server's transaction log; other engines use a binlog, WAL, oplog, or redo log). Views are query-time computations over base tables: when you query a view, the engine runs the underlying SELECT against the base tables. Because views store no data of their own, they generate no transaction log entries and therefore no CDC events.

What cannot be captured:
- Views: Virtual tables with no physical storage or change tables
- Indexed Views (Materialized Views): While they have physical storage, they don’t generate CDC events. Capture the base tables instead.
- Table Variables: Session-scoped, not logged in transactions
- Temporary Tables (#temp, ##temp): Session or connection-scoped, excluded from CDC
- Memory-Optimized Tables (In-Memory OLTP): Extremely limited CDC support and not recommended; see the SQL Server-specific notes below.
- System Tables (sys.*, information_schema): Metadata representations, not user data
- CTEs (Common Table Expressions): Query-time constructs with no persistent storage
- External Tables: Reference external data sources (PolyBase), not local storage
Configure CDC on the underlying base tables that power your views. The view logic can be recreated in your destination or transformation layer.

SQL Server-specific notes:
- Indexed Views: Even though they store data physically, changes to indexed views are captured through the base table CDC, not the view itself
- Partitioned Views: Capture each underlying base table separately
- Memory-Optimized Tables (In-Memory OLTP): While CDC can technically be enabled on memory-optimized tables in SQL Server 2016 SP2+ with database compatibility level 130+, doing so introduces significant performance overhead. Memory-optimized tables use the In-Memory OLTP engine, which is designed for lock-free, log-optimized operations; enabling CDC forces traditional transaction logging and defeats those performance benefits. Recommendation: Use traditional disk-based tables for CDC capture, or consider Change Tracking (a lighter-weight alternative) or Temporal Tables for memory-optimized scenarios.
If you have a view vw_sales_summary that queries tables Orders and Customers, enable CDC on the Orders and Customers tables, then recreate the view logic downstream.
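A sketch for that example, assuming the base tables live in the dbo schema and CDC is not yet enabled:

-- CDC must be enabled at the database level before any table can be captured
EXEC sys.sp_cdc_enable_db;

-- Enable CDC on the base tables behind vw_sales_summary; the view itself is never enabled
EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'Orders',    @role_name = NULL;
EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'Customers', @role_name = NULL;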
What are common limitations?
- SQL Server Agent must be running for CDC to function
- Maximum 2 capture instances per table (impacts schema evolution)
- Change table cleanup requires active SQL Server Agent jobs
- High transaction volume can cause change table growth
- Schema changes require manual intervention
- Read replicas may have limitations for CDC
- Restores disable CDC (must re-enable after restore)
How to handle deletes?
What security features are available?
SSL/TLS-encrypted connections, Windows or SQL Server authentication, and IP allowlisting (see the key features list above).
Troubleshooting common issues
- CDC Not Working: Verify SQL Server Agent is running; check CDC is enabled on database and tables
- Change Table Growth: Monitor cleanup job execution; adjust retention settings
- Missing Events: Ensure capture instance is active; verify role membership for CDC access
- After Database Restore: CDC is disabled; must re-enable on database and tables (see the sketch after this list)
- Schema Changes Not Reflected: Must manually refresh change tables (see setup docs)
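For the restore case, one option when moving a database to a different server or instance is to preserve CDC during the restore; a hedged sketch where the database name and backup path are placeholders:

-- Option A: KEEP_CDC preserves CDC settings and change tables when restoring to another server/instance
RESTORE DATABASE SalesDb
FROM DISK = N'C:\Backups\SalesDb.bak'
WITH KEEP_CDC;

-- After KEEP_CDC on a new server, recreate the CDC Agent jobs
EXEC sys.sp_cdc_add_job @job_type = N'capture';
EXEC sys.sp_cdc_add_job @job_type = N'cleanup';

-- Option B: if CDC ended up disabled anyway, re-enable it on the database (and then on each table)
EXEC sys.sp_cdc_enable_db;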
Table schema evolution and change tracking tables
Refresh change table structure (Online)
The procedure for completing an Online refresh is simpler than running it Offline, and you can complete it without any downtime to your systems and Streamkap pipelines.

For every source table that has changed, copy and paste the script below into SQL Server Management Studio, change the placeholders as required, and then run all queries. For the {refresh_table} name, use {schema}_{table}_v{N}; for example, if the source table is sales.orders then you'd use sales_orders_v2.
Two capture instances already exist for source table
This message means two capture instances already exist for the {table} specified. Typically you would disable the oldest change table, so use the create_date column to identify the oldest one. When you've identified the change table to disable, use its source_schema, source_table, and capture_instance names in the query below and execute. Then, try your refresh table script again.
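Not the exact query referenced above, but a sketch for listing capture instances by age so you can spot the oldest one:

SELECT OBJECT_SCHEMA_NAME(ct.source_object_id) AS source_schema,
       OBJECT_NAME(ct.source_object_id)        AS source_table,
       ct.capture_instance,
       ct.create_date
FROM cdc.change_tables AS ct
ORDER BY ct.create_date;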
The Connector's running but there is no data
Check that:
- You are connected to the SQL Server database with a user that has DB_OWNER role privileges.
- If the EXEC sys.sp_cdc_help_change_data_capture stored procedure returns no results, the source tables you want Streamkap to capture may not be CDC enabled.
- The table(s) captured by the Connector are listed.
- There are capture_instances for the capture tables.
- The database user or role used by the Connector is a member of the role_name for the capture table(s).
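A sketch of those checks: the stored procedure lists every CDC-enabled table with its capture_instance and role_name, and IS_ROLEMEMBER confirms membership in a gating role (the role name below is a placeholder):

-- Run in the source database; inspect source_schema, source_table, capture_instance, and role_name
EXEC sys.sp_cdc_help_change_data_capture;

-- Returns 1 if the current user is a member of the gating role set on the capture instance
SELECT IS_ROLEMEMBER('cdc_reader_role') AS is_member;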
The SQL Server Setup scripts are failing
- Check you connected to the SQL Server database with a user that has DB_OWNER role privileges
- If the SELECT ... FROM sys.databases ... query returns no results, the database you want Streamkap to capture may not be CDC enabled
- If the EXEC sys.sp_cdc_help_change_data_capture stored procedure returns no results, the source tables you want Streamkap to capture may not be CDC enabled
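A sketch of the database-level check the second bullet refers to (the database name is a placeholder):

-- is_cdc_enabled = 1 means CDC is enabled at the database level
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = N'YourDatabase';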
How can I optimize schema history for large databases?
Best practices for SQL Server sources
- Use a dedicated CDC user with minimal privileges (see the sketch after this list)
- Ensure SQL Server Agent is always running and monitored
- Set appropriate CDC cleanup retention (default 3 days)
- Limit capture to needed tables only to reduce change table size
- Test schema evolution procedures in staging
- Monitor disk space for change tables
- For cloud: Use provider monitoring; enable heartbeats for low traffic
- Document schema change procedures for your team
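For the first best practice, a hedged sketch of creating a dedicated connector user; the login, user, and password are placeholders, and the troubleshooting checks above assume DB_OWNER privileges, so grant only what your setup actually requires:

-- Server-level login for the connector
CREATE LOGIN streamkap_login WITH PASSWORD = 'ChangeMe!Str0ngPassw0rd';

-- Database user mapped to that login (run inside the source database)
CREATE USER streamkap_user FOR LOGIN streamkap_login;

-- The troubleshooting checks above expect DB_OWNER privileges
ALTER ROLE db_owner ADD MEMBER streamkap_user;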