DB2

Stream data into DB2

Prerequisites

  • A database user with sufficient privileges to create schemas and grant access

DB2 Setup

1. Create Database User

It's recommended to create a dedicated user and schema for Streamkap to access your DB2 database. Below is an example script that does that.

-- Connect to the DB2 database as an administrator

-- DB2 has no CREATE USER statement; user accounts are authenticated by
-- the operating system (or LDAP), so create STREAMKAP_USER there first,
-- then grant it access to the database
GRANT CONNECT ON DATABASE TO USER STREAMKAP_USER;

-- Create the Streamkap schema
CREATE SCHEMA STREAMKAP;

-- Grant the schema privileges to the user
GRANT CREATEIN, ALTERIN, DROPIN ON SCHEMA STREAMKAP TO USER STREAMKAP_USER;

-- SELECT, INSERT, UPDATE, and DELETE are table-level privileges in DB2;
-- grant them on each destination table as needed, for example:
-- GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE STREAMKAP.<table> TO USER STREAMKAP_USER;
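
To confirm the grants took effect, you can query the Db2 LUW catalog (SYSCAT.SCHEMAAUTH lists schema-level privileges by grantee):

-- Verify the schema privileges held by the Streamkap user
SELECT GRANTEE, SCHEMANAME, CREATEINAUTH, ALTERINAUTH, DROPINAUTH
FROM SYSCAT.SCHEMAAUTH
WHERE GRANTEE = 'STREAMKAP_USER';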

Streamkap Setup

Follow these steps to configure your new connector:

1. Create the Destination

2. Connection Settings

  • Name: Enter a name for your connector.
  • Hostname: Specify the hostname.
  • Port: Default is 50000.
  • Database: Name of the database to use.
  • Username: Username to access the database. By default, Streamkap scripts use STREAMKAP_USER.
  • Password: Password to access the database.
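
Before saving, you can sanity-check these connection details from the DB2 command line processor. A minimal sketch, where SAMPLE stands in for your database name and user_password for the real password:

-- Connect with the Streamkap credentials (SAMPLE is a placeholder)
CONNECT TO SAMPLE USER STREAMKAP_USER USING user_password;

-- Confirm the session is live and pointed at the right database
SELECT CURRENT SERVER FROM SYSIBM.SYSDUMMY1;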

3. Ingestion Settings

  • Schema evolution mode: Controls how schema evolution is handled by the sink connector.

📘

Self-managed schema evolution

If you have created the destination tables yourself and want to manage their schemas instead of letting the connector do it, set Schema evolution mode to none (see the example after this list).

  • Insert mode: Specifies the strategy used to insert events into the database.
  • Delete mode: Specifies whether the connector processes DELETE or tombstone events and removes the corresponding row from the database.
  • Primary key mode: Specifies how the connector resolves the primary key columns from the event.
    • Custom primary key: Optional. Either the name of the primary key column or a comma-separated list of fields to derive the primary key from.
  • Tasks: The maximum number of active tasks. This controls the amount of parallelism in writing events.
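
If you manage the destination schema yourself (Schema evolution mode set to none), each destination table must exist before the connector writes to it. A minimal sketch, assuming a hypothetical CUSTOMERS topic whose events carry ID, NAME, and EMAIL fields:

-- Hypothetical pre-created destination table; columns must match the event fields
CREATE TABLE STREAMKAP.CUSTOMERS (
    ID INTEGER NOT NULL,
    NAME VARCHAR(255),
    EMAIL VARCHAR(255),
    PRIMARY KEY (ID)
);

The primary key declared here is what Upsert and Delete modes match on; Primary key mode controls whether the connector derives it from the event key or from the fields named in Custom primary key.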

Click Save.

How this Connector Works

The DB2 connector supports idempotent writes through upsert semantics, and offers basic schema evolution.

The following features are supported:

  • At-least-once delivery
  • Delete mode
  • Idempotent writes (Insert/Upsert mode)
  • Schema evolution

At-least-once delivery

The DB2 connector guarantees that events it consumes are processed at least once.

Delete mode

The DB2 connector can delete rows in the destination database when a DELETE or tombstone event is consumed.
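
A tombstone is an event whose value is null. When delete handling is enabled, consuming a DELETE or tombstone event results in a keyed delete against the destination table, conceptually (reusing the hypothetical CUSTOMERS table from above):

-- Conceptual statement issued for a DELETE or tombstone event with key ID = 101
DELETE FROM STREAMKAP.CUSTOMERS WHERE ID = 101;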

Idempotent writes

The DB2 connector supports idempotent writes, allowing the same records to be replayed repeatedly and the final database state to remain consistent. In order to support idempotent writes, the DB2 connector must be set to Upsert mode. An upsert operation is applied as either an update or an insert, depending on whether the specified primary key already exists. If the primary key value already exists, the operation updates values in the row. If the specified primary key value doesn’t exist, an insert adds a new row.
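
In DB2, an upsert is typically expressed as a MERGE statement. The sketch below (reusing the hypothetical CUSTOMERS table, with made-up values) illustrates the semantics the connector applies in Upsert mode:

-- Update the row if the primary key already exists, otherwise insert it
MERGE INTO STREAMKAP.CUSTOMERS AS t
USING (VALUES (101, 'Ada', 'ada@example.com')) AS s (ID, NAME, EMAIL)
ON t.ID = s.ID
WHEN MATCHED THEN
    UPDATE SET t.NAME = s.NAME, t.EMAIL = s.EMAIL
WHEN NOT MATCHED THEN
    INSERT (ID, NAME, EMAIL) VALUES (s.ID, s.NAME, s.EMAIL);

Replaying this statement any number of times leaves the table in the same final state, which is what makes the writes idempotent.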

Schema evolution

The DB2 connector supports schema evolution.

The connector automatically detects fields that are present in the event payload but missing from the destination table, and alters the table to add the new columns.

When schema evolution is set to basic, the connector automatically creates or alters the destination database table according to the structure of the incoming event.

When an event is received from a topic for the first time and the destination table does not yet exist, the DB2 connector uses the event's key or the record's schema structure to resolve the column structure of the table. If schema evolution is enabled, the connector prepares and executes a CREATE TABLE SQL statement before it applies the DML event to the destination table.

When the DB2 connector receives an event from a topic whose record schema differs from the schema of the destination table, the connector uses either the event's key or its schema structure to identify which columns are new and must be added to the table. If schema evolution is enabled, the connector prepares and executes an ALTER TABLE SQL statement before it applies the DML event to the destination table. Because changing column data types, dropping columns, and adjusting primary keys are considered dangerous operations, the connector never performs them.
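
Concretely, with Schema evolution mode set to basic, the DDL the connector issues looks like the following (hypothetical table and column names):

-- First event from a topic, destination table missing: create it
CREATE TABLE STREAMKAP.CUSTOMERS (
    ID INTEGER NOT NULL,
    NAME VARCHAR(255),
    PRIMARY KEY (ID)
);

-- A later event carries a new EMAIL field: add the missing column
ALTER TABLE STREAMKAP.CUSTOMERS ADD COLUMN EMAIL VARCHAR(255);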