

Prerequisites

IBM Informix Change Data Capture required
The Connector uses Informix’s built-in CDC API, which is exposed through the syscdcv1 system database. syscdcv1 ships with Informix and must be present on the server. CDC is supported on Informix Enterprise Edition.
  • Informix version ≥ 12.10 (14.10 or 15 recommended)
  • The source database must be created with logging (ANSI-logged or buffered-log). Non-logged databases cannot be captured.
  • A database user with privileges to:
    • Connect to the source database and syscdcv1
    • SELECT on tables you want to capture
    • SELECT, INSERT, UPDATE on the Streamkap signal table

Informix Setup

The Informix CDC API reads the physical logical log and streams row-level change events back to the Connector over a JDBC session against syscdcv1. The Connector opens a capture session, enables Full Row Logging on the tables you’ve selected, and then consumes change records as they are written to the log.
Streamkap keeps Full Row Logging enabled on captured tables across Connector restarts (e.g. pod reschedules, deployment updates) so that DML occurring while the Connector is disconnected is still written to the log and can be replayed on reconnect. Without this, a restart would create a silent data gap.
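For background, Full Row Logging is toggled through the CDC API. The Connector performs the equivalent of the call below for each captured table, so you never run it yourself; the placeholders follow the conventions used elsewhere on this page, and the exact invocation is a sketch based on the Informix CDC API:

```sql
-- For reference only: the Connector issues this itself from its syscdcv1 session.
-- Enables Full Row Logging for one table (second argument: 1 = on, 0 = off).
EXECUTE FUNCTION informix.cdc_set_fullrowlogging('{databaseName}:{schemaName}.{tableName}', 1);
```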

1. Grant Database Access

2. Create Database User

We recommend creating a dedicated database user for Streamkap. The example script below does so.
-- Run as user 'informix' or another DBA
-- Replace { ... } placeholders as required

-- Create the OS-level user 'streamkap_user' on the Informix host first
-- (e.g. `useradd streamkap_user` on Linux). Informix authenticates against OS users
-- unless you are using an alternative authentication module.

-- Grant Streamkap access to the source database
DATABASE {databaseName};
GRANT CONNECT TO streamkap_user;

-- Grant SELECT on every table Streamkap should capture
GRANT SELECT ON {schemaName}.{tableName} TO streamkap_user;

-- Grant access to the CDC API database
DATABASE syscdcv1;
GRANT CONNECT TO streamkap_user;

3. Enable Change Data Capture

The Connector activates CDC at runtime, so you do not need to enable Full Row Logging on each table manually. However, the syscdcv1 database must be available on the server and the source database must be a logged database. Verify the syscdcv1 database exists:
-- Run as user 'informix' (DBA)
DATABASE sysmaster;
SELECT name FROM sysdatabases WHERE name = 'syscdcv1';
If syscdcv1 is missing, create it by running the installer script shipped with Informix:
# Run on the Informix server host as the 'informix' user
dbaccess sysadmin $INFORMIXDIR/etc/syscdcv1.sql
Verify your source database is logged:
-- Replace {databaseName} placeholder
DATABASE sysmaster;
SELECT name, is_logging, is_buff_log, is_ansi
  FROM sysdatabases
 WHERE name = '{databaseName}';
If is_logging is 0, enable logging on the database:
# Replace {databaseName} placeholder
# Requires a level-0 archive to exist before logging can be enabled
ontape -s -L 0
ondblog buf {databaseName}

4. Enable Snapshots

To backfill your data, the Connector needs to be able to perform snapshots (see Snapshots & Backfilling for more information). To enable this process, you must create a table for the Connector to use.
The signal table must be named streamkap_signal; it will not be recognised under any other name.
-- Run as a DBA (e.g. the 'informix' user) so the table can be created on behalf of streamkap_user.
-- Replace {databaseName} placeholder.
DATABASE {databaseName};

-- Create the signal table owned by streamkap_user.
-- Using the explicit schema prefix guarantees the table is owned by streamkap_user
-- regardless of which user runs the script, and makes the fully-qualified name you
-- enter in the Streamkap UI unambiguous: streamkap_user.streamkap_signal.
CREATE TABLE streamkap_user.streamkap_signal (
  id   VARCHAR(255) NOT NULL PRIMARY KEY,
  type VARCHAR(32)  NOT NULL,
  data VARCHAR(2000)
);
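For illustration only, snapshot requests are signal rows written into this table. Streamkap inserts them on your behalf when you trigger a snapshot from the UI; the id, type, and JSON payload below are assumptions based on Debezium-style signalling, not values you need to reproduce:

```sql
-- Illustrative only: you normally never insert signal rows yourself.
-- The id, type, and JSON payload shown here are assumed, Debezium-style values.
INSERT INTO streamkap_user.streamkap_signal (id, type, data)
VALUES ('ad-hoc-snapshot-1', 'execute-snapshot',
        '{"data-collections": ["informix.customers"]}');
```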

5. Heartbeats

Connectors use “offsets”—like bookmarks—to track their position in the database’s log or change stream. When no changes occur for long periods, these offsets may become outdated, and the Connector might lose its place or stop capturing changes. Heartbeats ensure the Connector stays active and continues capturing changes. There are two layers of heartbeat protection:

Layer 1: Connector heartbeats (enabled by default)

The Connector periodically emits heartbeat messages to an internal topic, even when no actual data changes are detected. This keeps offsets fresh and prevents staleness. No configuration is necessary for this layer; it is automatically enabled. We recommend keeping this layer enabled for all deployments.
Layer 2: Heartbeat table in the source database

Why we recommend configuring Layer 2
While Layer 2 is crucial for low-traffic or intermittent databases, we recommend configuring it for all deployments. It provides additional resilience and helps prevent issues during periods of inactivity.
You can configure regular updates to a dedicated heartbeat table in the source database. This simulates activity, ensuring change events are generated consistently, maintaining log progress and providing additional resilience. How this layer is configured depends on the connection type (if supported by the Source):
  • Read-write connections (when Read only is No during Streamkap Setup): The Connector updates the heartbeat table directly.
  • Read-only connections (when Read only is Yes during Streamkap Setup): A scheduled job on the primary database updates the heartbeat table, and these changes replicate to the read replica for the Connector to consume.
This layer requires you to set up a heartbeat table—and for read-only connections, a scheduled job (e.g., pg_cron for PostgreSQL, event_scheduler for MySQL)—on your source database.
For read-write connections, no scheduled job is required; the Connector writes to the heartbeat table directly. Create the table as follows.
-- Run as a DBA (e.g. the 'informix' user) so the table is created on behalf of streamkap_user.
-- Replace {databaseName} placeholder.
DATABASE {databaseName};

-- Create the heartbeat table owned by streamkap_user.
CREATE TABLE streamkap_user.streamkap_heartbeat (
  id          INT          NOT NULL PRIMARY KEY,
  text        VARCHAR(255),
  last_update DATETIME YEAR TO FRACTION(3) DEFAULT CURRENT YEAR TO FRACTION(3)
);

-- Insert the first row that the Connector will update on each heartbeat tick.
INSERT INTO streamkap_user.streamkap_heartbeat (id, text) VALUES (1, 'test_heartbeat');
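For read-only connections, a scheduled job on the primary must touch the heartbeat row instead. On Informix this can be done with the dbScheduler; the following is a sketch that assumes the sysadmin database and its ph_task table are available, and the task name, frequency, and column values shown are illustrative and may vary between Informix versions:

```sql
-- Illustrative sketch: schedule a once-per-minute heartbeat update via the
-- Informix dbScheduler (sysadmin:ph_task). Adjust names and frequency as needed.
-- Replace {databaseName} placeholder.
DATABASE sysadmin;

INSERT INTO ph_task (tk_name, tk_type, tk_group, tk_description,
                     tk_execute, tk_start_time, tk_stop_time, tk_frequency)
VALUES ('streamkap_heartbeat', 'TASK', 'USER',
        'Touch the Streamkap heartbeat row so CDC offsets stay fresh',
        'UPDATE {databaseName}:streamkap_user.streamkap_heartbeat SET last_update = CURRENT YEAR TO FRACTION(3) WHERE id = 1',
        DATETIME(00:00:00) HOUR TO SECOND, NULL,
        INTERVAL (1) MINUTE TO MINUTE);
```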

Streamkap Setup

Follow these steps to configure your new connector:

1. Create the Source

2. Connection Settings

  • Name: Enter a name for your connector.
  • Hostname: IP address or hostname of the Informix database server.
  • Port: Default is 9088 (the Informix SQLI listener port).
  • Connect via SSH Tunnel: The Connector will connect to an SSH server in your network which has access to your database. This is necessary if the Connector cannot connect directly to your database.
  • Username: Username to access the database. By default, Streamkap scripts use streamkap_user.
  • Password: Password to access the database.
  • Source Database: The name of the Informix database from which to stream the changes.
  • Heartbeats:
    • Heartbeat Table Schema: Streamkap will use a table in this schema to manage heartbeats. For Informix this is typically the owner of the streamkap_heartbeat table (e.g. streamkap_user). See Heartbeats for setup instructions.
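For reference, a direct JDBC connection built from these settings typically takes the shape below. The INFORMIXSERVER value is an assumption (the Connector derives its connection details internally); it is shown only to clarify what Hostname, Port and Source Database map to:

```
jdbc:informix-sqli://{hostname}:9088/{databaseName}:INFORMIXSERVER={informixServerName};user=streamkap_user;password={password}
```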

3. Snapshot Settings

  • Signal Table (Schema.Table): The Connector will use this table to manage snapshots. You can specify either just the schema/owner name (e.g., streamkap_user) or the full path in schema.table format (e.g., streamkap_user.streamkap_signal). See Enable Snapshots for setup instructions.

4. Advanced Parameters

  • Represent binary data as: Specifies how data in binary columns (e.g. BYTE, BLOB) should be interpreted. The destination for this data can affect which option you choose. Default is bytes.
  • CDC Engine Buffer Size (bytes) (Default 65536) — Size of the read buffer used by the CDC engine. Increase only if you see CDC-side back-pressure on very high-volume tables.
  • CDC Engine Timeout (seconds) (Default 5) — How long the CDC engine will wait on a blocking read before breaking out to check for shutdown signals. The default is correct for most deployments.
  • Capture Only Captured Databases DDL (Default false) — Whether the Connector records schema structures for all databases on the server or only the one you’ve configured. Enabling this can improve performance and reduce startup time when you have many databases. See Schema History Optimization for details.
  • Capture Only Captured Tables DDL (Default false) — Whether the Connector records schema structures for all tables in the configured database or only the ones it is capturing. Enabling this can improve performance when the database has many tables. See Schema History Optimization for details.
Click Next.

5. Schema and Table Capture

  • Add Schemas/Tables: Specify the schema(s) and table(s) for capture. Enter each entry as schema.table (for example informix.customers). Streamkap automatically qualifies each entry with the Source Database you configured above, so you do not need to repeat the database name in the UI.
    • You can bulk upload here. The format is a simple list of schemas and tables, with each entry on a new row. Save as a .csv file without a header.
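If you generate the bulk-upload file from a script, the expected shape is simply one schema.table entry per line with no header row. A minimal Python sketch (the table names are hypothetical):

```python
# Write a Streamkap bulk-upload file: one schema.table entry per row, no header.
tables = ["informix.customers", "informix.orders", "informix.invoices"]  # hypothetical

with open("capture_list.csv", "w") as f:
    for entry in tables:
        f.write(entry + "\n")
```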
CDC only captures base tables, not Views
Change Data Capture reads Informix’s logical log, which only records changes to physical base tables. Database Views are query-time computations with no physical storage — they generate no log records.
What you cannot capture: views, synonyms, temporary tables, external tables, virtual tables, or system catalog tables (sys*).
Solution: Specify only the underlying base tables that feed your views. You can recreate the view logic in your destination or transformation layer.
Click Save.

Troubleshooting

There can be a number of reasons; the most common are misconfiguration of CDC and missing privileges.

1. Confirm syscdcv1 is present and reachable
DATABASE sysmaster;
SELECT name FROM sysdatabases WHERE name = 'syscdcv1';
If syscdcv1 is missing, re-run dbaccess sysadmin $INFORMIXDIR/etc/syscdcv1.sql on the Informix host as the informix user.

2. Confirm the source database is logged
-- Replace {databaseName} placeholder
DATABASE sysmaster;
SELECT name, is_logging, is_buff_log
  FROM sysdatabases
 WHERE name = '{databaseName}';
If is_logging is 0, the database is unlogged and no change events can be captured. Enable logging (see Enable Change Data Capture).

3. Confirm the Streamkap user has access
-- Replace {databaseName} placeholder
DATABASE {databaseName};
SELECT username, usertype FROM sysusers WHERE username = 'streamkap_user';
The user must have C (Connect) access to both the source database and syscdcv1, plus SELECT on every table you want to capture.

If you’re still stuck after these checks, please reach out to us.
Informix refuses to change a database’s logging mode until a full archive has been taken. Run a level-0 archive first and then retry the ondblog command.
# Run on the Informix server host as the 'informix' user
ontape -s -L 0

# Once the archive completes, enable buffered logging on the database
ondblog buf {databaseName}
If you don’t need to retain the archive, you can target /dev/null (or Windows equivalent) for the tape device in your $INFORMIXDIR/etc/onconfig (TAPEDEV) before running ontape -s -L 0.
Informix’s CDC API emits a metadata record to the change stream whenever a captured table’s structure changes (for example, after an ALTER TABLE ... ADD COLUMN). The Connector processes these records inline — it re-reads the current table definition from the Informix catalog, writes the updated structure to the schema-history topic, and subsequent change events for that table use the new schema automatically. No Connector restart is required.

If you still don’t see new columns after a DDL change:
  1. Confirm the DDL ran against the captured base table (not a view or synonym — CDC only fires for base tables).
  2. Confirm the Connector is streaming (not paused or in error). The metadata record is only processed during active streaming.
  3. Wait for at least one DML event on the altered table after the DDL. The metadata record is emitted as part of the ongoing CDC stream; if the table is completely idle, downstream consumers may not immediately observe the updated schema.
If the Connector is stuck emitting events against an outdated schema despite the above, please reach out to us.