SQL Server
Stream data into SQL Server
Prerequisites
- A database user with sufficient privileges to create logins, databases, users and schemas
SQL Server Setup
1. Create Database User
It's recommended to create a separate user, schema and database for Streamkap in your SQL Server instance to avoid potential conflicts with existing data. Below is an example script that does that.
-- Connect to the SQL Server instance as admin
-- Create the Streamkap login
CREATE LOGIN STREAMKAP_USER WITH PASSWORD = 'user_password';

-- Create the Streamkap database
CREATE DATABASE STREAMKAPDB;
GO
USE STREAMKAPDB;
GO

-- Create the Streamkap user in the new database
CREATE USER STREAMKAP_USER FOR LOGIN STREAMKAP_USER;
GO

-- Create the Streamkap schema (CREATE SCHEMA must run in its own batch)
CREATE SCHEMA STREAMKAP;
GO

-- Set the Streamkap user's default schema
ALTER USER STREAMKAP_USER WITH DEFAULT_SCHEMA = STREAMKAP;

-- Grant specific permissions to the user on the schema and tables
GRANT CREATE TABLE, CREATE FUNCTION, CREATE PROCEDURE TO STREAMKAP_USER;
GRANT ALTER, SELECT, INSERT, UPDATE, DELETE, EXECUTE, REFERENCES, VIEW DEFINITION ON SCHEMA::STREAMKAP TO STREAMKAP_USER;
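As an optional check, you can impersonate the new user and list its permissions on the schema. This is not part of the Streamkap setup itself; STREAMKAP_USER and STREAMKAP simply reuse the names from the script above.
-- Optional check: list the permissions STREAMKAP_USER holds on the STREAMKAP schema
USE STREAMKAPDB;
GO
EXECUTE AS USER = 'STREAMKAP_USER';
SELECT * FROM fn_my_permissions('STREAMKAP', 'SCHEMA');
REVERT;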
Default schema required
Currently there is no option to specify the schema to use in the Streamkap setup for this Destination.
The schema used is determined by the user's DEFAULT_SCHEMA. If it is not set, it defaults to the dbo schema. You can set the DEFAULT_SCHEMA using the following script:
-- Replace {...} placeholders as required
ALTER USER {USERNAME} WITH DEFAULT_SCHEMA = {SCHEMA};
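To verify which default schema the user currently has, one option is to query sys.database_principals in the Streamkap database. This is a verification sketch only, reusing the STREAMKAP_USER name from the setup script.
-- Check the default schema assigned to the Streamkap user
USE STREAMKAPDB;
GO
SELECT name, default_schema_name
FROM sys.database_principals
WHERE name = 'STREAMKAP_USER';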
Streamkap Setup
Follow these steps to configure your new connector:
1. Create the Destination
- Navigate to Add Connectors.
- Choose SQL Server.
2. Connection Settings
- Name: Enter a name for your connector.
- Hostname: Specify the hostname.
- Port: Default is 1433.
- Database: Name of the database to use.
- Username: Username to access the database. By default, the Streamkap scripts use STREAMKAP_USER.
- Password: Password to access the database.
3. Ingestion Settings
- Schema: The target schema where tables should be created.
- Schema evolution mode: Controls how schema evolution is handled by the sink connector.
Self-managed schema evolution
If you have created the destination tables and want to manage their schemas yourself instead of the connector, set Schema evolution mode to none.
- Insert mode: Specifies the strategy used to insert events into the database.
- Delete mode: Specifies whether the connector processes DELETE or tombstone events and removes the corresponding row from the database.
- Primary key mode: Specifies how the connector resolves the primary key columns from the event.
- Custom primary key: Optional. Either the name of the primary key column or a comma-separated list of fields to derive the primary key from.
- Tasks: The maximum number of active tasks. This controls the amount of parallelism in writing events.
Click Save.
How this Connector Works
The SQL Server connector supports idempotent write operations through upsert semantics, as well as basic schema evolution.
The following features are supported:
- At-least-once delivery
- Delete mode
- Idempotent writes (Insert/Upsert mode)
- Schema evolution
At-least-once delivery
The SQL Server connector guarantees that the events it consumes are processed at least once.
Delete mode
The SQL Server connector can delete rows in the destination database when a DELETE or tombstone event is consumed.
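The exact SQL the connector issues is not documented here; as an illustration only, a DELETE or tombstone event keyed on a hypothetical ID column would translate into a statement along these lines:
-- Illustration only: table and key column are hypothetical
DELETE FROM STREAMKAP.CUSTOMERS
WHERE ID = 1004;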
Idempotent writes
The SQL Server connector supports idempotent writes, allowing the same records to be replayed repeatedly and the final database state to remain consistent. In order to support idempotent writes, the SQL Server connector must be set to Upsert mode. An upsert operation is applied as either an update or an insert, depending on whether the specified primary key already exists. If the primary key value already exists, the operation updates values in the row. If the specified primary key value doesn’t exist, an insert adds a new row.
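As an illustration of upsert semantics in SQL Server terms, the effect is comparable to a MERGE statement that updates the row when the primary key already exists and inserts it otherwise. The table and columns below are hypothetical, not the connector's actual generated SQL.
-- Illustration only: upsert a row into a hypothetical STREAMKAP.CUSTOMERS table
MERGE INTO STREAMKAP.CUSTOMERS AS target
USING (SELECT 1004 AS ID, 'Anne' AS FIRST_NAME, 'Kretchmar' AS LAST_NAME) AS source
ON target.ID = source.ID
WHEN MATCHED THEN
    UPDATE SET FIRST_NAME = source.FIRST_NAME, LAST_NAME = source.LAST_NAME
WHEN NOT MATCHED THEN
    INSERT (ID, FIRST_NAME, LAST_NAME)
    VALUES (source.ID, source.FIRST_NAME, source.LAST_NAME);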
Schema evolution
The SQL Server connector supports schema evolution.
The connector automatically detects fields that are in the event payload but that do not exist in the destination table. The connector alters the destination table to add the new fields.
When schema evolution is set to basic, the connector automatically creates or alters the destination database table according to the structure of the incoming event.
When an event is received from a topic for the first time and the destination table does not yet exist, the SQL Server connector uses the event’s key or the schema structure of the record to resolve the column structure of the table. If schema evolution is enabled, the connector prepares and executes a CREATE TABLE SQL statement before it applies the DML event to the destination table.
When the SQL Server connector receives an event from a topic and the schema structure of the record differs from the schema structure of the destination table, the connector uses either the event’s key or its schema structure to identify which columns are new and must be added to the database table. If schema evolution is enabled, the connector prepares and executes an ALTER TABLE SQL statement before it applies the DML event to the destination table. Because changing column data types, dropping columns, and adjusting primary keys are considered dangerous operations, the connector never performs them.
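To make the two cases concrete, the generated DDL would resemble the following. The table and column names are hypothetical; the actual statements depend on the structure of the incoming event.
-- Illustration only: first event for a topic whose destination table does not exist yet
CREATE TABLE STREAMKAP.CUSTOMERS (
    ID BIGINT NOT NULL,
    FIRST_NAME VARCHAR(255),
    LAST_NAME VARCHAR(255),
    PRIMARY KEY (ID)
);

-- Illustration only: a later event contains a field missing from the table
ALTER TABLE STREAMKAP.CUSTOMERS ADD EMAIL VARCHAR(255);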