Prerequisites
Standby databases
An Oracle database can be configured with either a physical or a logical standby database for recovery after a production failure. At this time, Streamkap does not support standby databases.
- Oracle 12c or above, Standard or Enterprise Edition
- Hostname and Port
- (single-tenant architecture): Database name
- (multi-tenant architecture): Container database name (CDB) and pluggable database name (PDB)
- The Oracle database master user credentials or equivalent
Oracle Setup
The Connector ingests changes from your database by reading Oracle’s redo logs and archive logs. It is important that the redo logs are large enough, and that the archive logs are retained long enough, to ensure all changes are captured.

1. Grant Database Access
- Configure one of the Connection Options to ensure Streamkap can reach your database.
2. Enable Archive Logs
Reboot required
When ARCHIVELOG mode is enabled, your Oracle database will be taken offline.

To enable ARCHIVELOG mode, run this script:
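A minimal sketch of the usual procedure, run as SYSDBA in SQL*Plus; the recovery file destination path and size are illustrative assumptions, so adjust them for your environment:

```sql
-- Size and locate the recovery area (values here are examples only)
ALTER SYSTEM SET db_recovery_file_dest_size = 10G;
ALTER SYSTEM SET db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' SCOPE=spfile;

-- Restart into MOUNT state, enable ARCHIVELOG mode, then reopen the database
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```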
To verify that ARCHIVELOG mode has been enabled, run this query:
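Either of the following works; the SQL*Plus command prints a "Database log mode" line, and the query returns ARCHIVELOG when enabled:

```sql
-- In SQL*Plus
ARCHIVE LOG LIST

-- From any SQL client
SELECT LOG_MODE FROM V$DATABASE;
```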
If Database log mode is Archive Mode, then ARCHIVELOG mode is enabled.
3. Enable LogMiner
For the Connector to query the redo and archive logs, it depends on the Oracle LogMiner utility. For LogMiner to work, supplemental logging must be enabled. If SUPPLEMENTAL_LOG_DATA_MIN is YES, it is already enabled.
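A minimal sketch of enabling and verifying supplemental logging (in a multi-tenant architecture, run this against the root container):

```sql
-- Enable minimal (database-level) supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Verify: SUPPLEMENTAL_LOG_DATA_MIN should return YES
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
```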
Resize Redo Logs
An Oracle instance typically starts with three online redo log files of 50MB each. That is both too few and too small; more log files are necessary, and the logs should be resized to at least 1024MB, especially for production databases. Before making any changes, run this query to check the current log file sizes:
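The original query is not reproduced in this copy, but a query against V$LOG along these lines returns the sizes and statuses shown in the example output below:

```sql
SELECT GROUP#, BYTES / 1024 / 1024 AS SIZE_MB, STATUS
FROM V$LOG
ORDER BY GROUP#;
```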
| GROUP# | SIZE_MB | STATUS |
|---|---|---|
| 1 | 50 | INACTIVE |
| 2 | 50 | INACTIVE |
| 3 | 50 | CURRENT |
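Next, check where each group’s log file lives and which groups are safe to touch. A join of V$LOG and V$LOGFILE along these lines produces the output below:

```sql
SELECT l.GROUP#, f.MEMBER AS LOCATION, l.STATUS
FROM V$LOG l
JOIN V$LOGFILE f ON f.GROUP# = l.GROUP#
ORDER BY l.GROUP#;
```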
| GROUP# | LOCATION | STATUS |
|---|---|---|
| 1 | /opt/oracle/oradata/ORCLCDB/redo01.log | ACTIVE |
| 2 | /opt/oracle/oradata/ORCLCDB/redo02.log | INACTIVE |
| 3 | /opt/oracle/oradata/ORCLCDB/redo03.log | UNUSED |
Groups with a status of INACTIVE or UNUSED can be dropped and recreated at the new size:
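A sketch for a single group, using the paths from the example above; repeat for each INACTIVE or UNUSED group (REUSE overwrites the existing file on disk):

```sql
-- Drop an INACTIVE or UNUSED group, then recreate it at 1024MB
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE ADD LOGFILE GROUP 2 ('/opt/oracle/oradata/ORCLCDB/redo02.log') SIZE 1024M REUSE;
```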
Next, perform a log switch to release the ACTIVE log so we can drop it:
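The exact statement is not shown in this copy; a typical sequence is a log switch followed by a checkpoint, which encourages Oracle to mark the old group INACTIVE:

```sql
-- Rotate onto a new log group, then checkpoint so the old group can go INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
```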
This moves the ACTIVE group to INACTIVE. The switch could take several minutes, so be patient and recheck the status periodically. Once the status reaches INACTIVE, replace the last old log file and confirm all logs are resized:
| GROUP# | SIZE_MB | STATUS |
|---|---|---|
| 1 | 1024 | CURRENT |
| 2 | 1024 | UNUSED |
| 3 | 1024 | ACTIVE |
| … | … | … |
Group numbering and log file sizing
The {group_number} should follow sequentially. There is no benefit to organising log groups in sequences such as 10, 20, 30 or mixed sizing such as 128MB, 1024MB, 512MB. For example, if the highest group number is 3 and its log file is /opt/oracle/oradata/ORCLCDB/redo03.log of size 1024MB, the next log file added should be group 4 with log file /opt/oracle/oradata/ORCLCDB/redo04.log of size 1024MB (see the sketch after the table below).

| GROUP# | SIZE_MB | STATUS |
|---|---|---|
| 1 | 1024 | CURRENT |
| 2 | 1024 | UNUSED |
| 3 | 1024 | ACTIVE |
| 4 | 1024 | INACTIVE |
| 5 | 1024 | INACTIVE |
| … | … | … |
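Groups 4 and 5 in the example above could be added with statements like these; the file paths simply continue the sequential naming convention, so adjust them to your environment:

```sql
ALTER DATABASE ADD LOGFILE GROUP 4 ('/opt/oracle/oradata/ORCLCDB/redo04.log') SIZE 1024M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/opt/oracle/oradata/ORCLCDB/redo05.log') SIZE 1024M;
```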
4. Create Database User
Depending on your database architecture, choose the correct script below to create a database user with the privileges the Connector needs.
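The official scripts are not reproduced in this copy. As a rough sketch for a single-tenant database, the user and grant list below reflect the privileges LogMiner-based connectors commonly require; treat every name and grant as an assumption and prefer the script Streamkap provides. In a multi-tenant architecture, the user is typically created as a common user (e.g. C##STREAMKAP_USER) in the root container.

```sql
-- Illustrative only; the Streamkap-provided script is authoritative
CREATE USER STREAMKAP_USER IDENTIFIED BY "your-secure-password";

GRANT CREATE SESSION TO STREAMKAP_USER;
GRANT SELECT ANY TABLE TO STREAMKAP_USER;
GRANT SELECT ANY TRANSACTION TO STREAMKAP_USER;
GRANT SELECT_CATALOG_ROLE TO STREAMKAP_USER;
GRANT EXECUTE_CATALOG_ROLE TO STREAMKAP_USER;
GRANT FLASHBACK ANY TABLE TO STREAMKAP_USER;
GRANT LOGMINING TO STREAMKAP_USER;

-- Views the connector reads to track redo and archive log state
GRANT SELECT ON V_$DATABASE TO STREAMKAP_USER;
GRANT SELECT ON V_$LOG TO STREAMKAP_USER;
GRANT SELECT ON V_$LOGFILE TO STREAMKAP_USER;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO STREAMKAP_USER;
GRANT SELECT ON V_$ARCHIVED_LOG TO STREAMKAP_USER;
```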
5. Enable Snapshots

To backfill your data, the Connector needs to be able to perform Snapshots (see Snapshots & Backfilling for more information). To enable this process, a table must be created for the Connector to use. Depending on your database architecture, choose the correct script below.
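A minimal sketch of the signal table, assuming a STREAMKAP schema and the column layout common to Debezium-style signal tables; only the table name STREAMKAP_SIGNAL is fixed, the rest is illustrative:

```sql
CREATE TABLE STREAMKAP.STREAMKAP_SIGNAL (
  id   VARCHAR2(42) PRIMARY KEY,  -- unique id for each signal
  type VARCHAR2(32) NOT NULL,     -- signal type, e.g. an ad hoc snapshot request
  data VARCHAR2(2048)             -- signal payload
);

GRANT SELECT, INSERT, UPDATE, DELETE ON STREAMKAP.STREAMKAP_SIGNAL TO STREAMKAP_USER;
```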
Please create the signal table with the name STREAMKAP_SIGNAL. It will not be recognised if given another name.

6. Heartbeats
Heartbeats ensure the Connector stays active and continues capturing changes, which is especially important for low-traffic or intermittent databases. Connectors use “offsets” (like bookmarks) to track their position in the database’s log or change stream. When no changes occur for long periods, these offsets may become outdated, and the Connector might lose its place or stop capturing changes. There are two layers of heartbeat protection:

Layer 1: Connector heartbeats (enabled by default)
The Connector periodically emits heartbeat messages to an internal topic, even when no actual data changes are detected. This keeps offsets fresh and prevents staleness. No configuration is necessary for this layer; it is automatically enabled.

Layer 2: Source database heartbeats (requires configuration)
For read-write connections (when Read only is No during Streamkap Setup, if applicable), you can configure the Connector to update a dedicated heartbeat table in the source database at regular intervals. This simulates activity, ensuring change events are generated consistently, maintaining log progress and preventing staleness. It’s especially useful for databases like PostgreSQL, where a lack of regular changes can cause the write-ahead log (WAL) to grow rapidly during inactivity. This layer requires you to set up a heartbeat table in your source database.

Configure source database heartbeats
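The setup script is not included in this copy; a minimal sketch, assuming a STREAMKAP schema and a single-row table the Connector updates on a schedule (every name here is illustrative):

```sql
-- Hypothetical heartbeat table; the Streamkap-provided script may differ
CREATE TABLE STREAMKAP.STREAMKAP_HEARTBEAT (
  id             NUMBER PRIMARY KEY,
  last_heartbeat TIMESTAMP
);

-- Seed one row for the Connector to update on each heartbeat
INSERT INTO STREAMKAP.STREAMKAP_HEARTBEAT (id, last_heartbeat) VALUES (1, CURRENT_TIMESTAMP);
COMMIT;

GRANT SELECT, INSERT, UPDATE ON STREAMKAP.STREAMKAP_HEARTBEAT TO STREAMKAP_USER;
```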
Streamkap Setup
Follow these steps to configure your new connector:

1. Create the Source
- Navigate to Add Connectors.
- Choose Oracle.
2. Connection Settings
- Name: Enter a name for your connector.
- Hostname: Specify the database endpoint.
- Port: Default is 1521.
- Connect via SSH Tunnel: The Connector will connect to an SSH server in your network which has access to your database. This is necessary if the Connector cannot connect directly to your database.
  - See SSH Tunnel for setup instructions.
- Username (case sensitive): Username to access the database. By default, Streamkap scripts use STREAMKAP_USER.
- Password: Password to access the database.
- Database: The database name (single-tenant architecture) or container database name (multi-tenant architecture).
- Pluggable Database (optional, multi-tenant architecture only): The pluggable database name.
- Heartbeats: Crucial for low and intermittent traffic databases. Enabled by default.
  - Heartbeat Table Schema: Streamkap will use a table in this schema to manage heartbeats. Usually this is the same as the Signal Table Schema. See Heartbeats for setup instructions.
3. Snapshot Settings
- Signal Table Schema: Streamkap will use a table in this schema to manage snapshots. See Enable Snapshots for more information.
4. Advanced Parameters
- Represent binary data as: Specifies how the data for binary columns should be interpreted. Your destination for this data can impact which option you choose. Default is bytes.
- Capture Only Captured Databases DDL: Specifies whether the connector records schema structures from all logical databases in the database instance or only captured databases. Enabling this when you have many databases in your instance can improve performance and avoid timeouts. Default is false. See Schema History Optimization for details.
- Capture Only Captured Tables DDL: Specifies whether the connector records schema structures from all logical tables in the captured schemas or databases, or only captured tables. Enabling this when you have many tables can improve performance and avoid timeouts. Default is false. See Schema History Optimization for details.
5. Schema and Table Capture
- Add Schemas/Tables: Specify the schema(s) and table(s) for capture.
  - You can bulk upload here. The format is a simple list of schemas and tables, with each entry on a new row. Save as a .csv file without a header.
Have questions? See the Oracle Source FAQ for answers to common questions about Oracle sources, troubleshooting, and best practices.