Nikhil Patel
09/05/2025, 4:14 PM
loaders:
  - name: target-postgres
    variant: meltanolabs
    config:
      # ... connection details ...
      flattening_enabled: false
      flattening_max_depth: 0
      add_record_metadata: true
Current result (unwanted):
-- Creates 270+ individual columns
CREATE TABLE raw.user (
  Id TEXT,
  Username TEXT,
  LastName TEXT,
  FirstName TEXT,
  -- ... 260+ more columns
);
Desired result:
-- Single JSONB column for flexibility
CREATE TABLE raw.user (
  Id TEXT PRIMARY KEY,
  data JSONB,
  _sdc_extracted_at TIMESTAMP,
  -- ... other Singer metadata columns
);
What I've Tried
1. Flattening configuration: Set flattening_enabled: false and flattening_max_depth: 0 - the target still creates individual columns.
2. Schema transformation: Modified the catalog.json to declare all fields as "object" type - tap-salesforce ignores this and sends strongly-typed schemas anyway.
3. Source investigation: Found that target-postgres uses type-specific handlers per field, and tap-salesforce sends strongly-typed schemas, so every typed property ends up as its own column.
Questions
1. Is there a target-postgres configuration I'm missing that forces all fields to be stored as JSONB regardless of their declared types?
2. Are there any Singer transformations or middlewares that can convert typed field schemas to generic objects before they reach the target? (There's a rough sketch of what I'm imagining below.)
3. Has anyone successfully achieved JSONB storage with Salesforce data while using the standard target-postgres? If so, what's your approach?
4. Alternative targets: Should I be using a different PostgreSQL target that's designed for schema-flexible JSONB storage?
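To make question 2 concrete, here's the rough, untested sketch of what I'm imagining: running the meltano-map-transform mapper between the tap and the target so each record is collapsed into a single object-typed data property, which I think (but haven't confirmed) the meltanolabs target-postgres would store as JSONB. The mapping name, the User stream key, and the data: record expression are my guesses, and I'm not sure the stream-map expression context actually lets you assign the whole record like this, so treat it as a starting point rather than something I know works:

plugins:
  mappers:
  - name: meltano-map-transform
    variant: meltano
    pip_url: meltano-map-transform
    mappings:
    - name: salesforce-to-jsonb   # placeholder mapping name
      config:
        stream_maps:
          User:                   # whatever the tap-salesforce stream is actually named
            Id: Id                # keep the key as its own column
            data: record          # guess: whole record as one object-typed property
            __else__: null        # drop every other top-level property

The pipeline would then be something like meltano run tap-salesforce salesforce-to-jsonb target-postgres. If stream maps can't emit an object-typed property this way, the fallback I'd try is serializing the record to a string and casting it in dbt, though I'm not sure the expression context supports that either.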
Use Case
I need JSONB storage because Salesforce schemas change frequently (custom fields get added and removed), and I want my raw data layer to handle these changes gracefully without pipeline failures. The plan is to do the typed transformations downstream with dbt.
Any insights, workarounds, or alternative approaches would be greatly appreciated!
Thanks in advance for your help! 🙏

Holly Evans
09/05/2025, 4:44 PM

Holly Evans
09/05/2025, 5:18 PM