# troubleshooting
c
Hello here, I'm coming back with my previous issue, adding a bit more context: I have developed two taps so far and both of them have a common issue: characters such as '*é*' and '*è*' are not written correctly in the database. They are encoded as '*\u00e9*' and '*\u00e8*'. The issue comes from the `messages.py` file: the `format_message` function does not let these characters through unescaped. It works, however, when I add `ensure_ascii=False` to the `json.dumps` call. But even with that option, the target I'm using (target-postgres) writes characters such as '\u00e9' into my database. Do you know how I can work around that issue?
Here is an example of the output provided by my tap:
{"type": "RECORD", "stream": "regions", "record": {"zips": ["4000", "4020", "4030", "4031", "4040", "4050", "4100", "4120", "4130", "4340", "4420", "4430", "4450", "4460", "4600", "4607", "4610", "4620", "4630", "4650", "4670", "4680", "4690", "4870", "4877", "4880", "4890"], "id": 58634, "name": "Li\u00e8ge", "synchronization_date": "2021-09-02 11:48:47.360308"}}
When I add `ensure_ascii=False`, I end up with the value of `name` being `Liège`. But target-postgres will keep the value as `Li\u00e8ge` and save it as such in the database.
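For reference, a minimal sketch of the two `json.dumps` behaviours using only the standard library (the `record` dict here is just an illustration, not tap output):

```python
import json

record = {"name": "Liège"}

# Default: non-ASCII characters are escaped in the output.
print(json.dumps(record))                      # {"name": "Li\u00e8ge"}

# With ensure_ascii=False, the character is emitted as-is (UTF-8).
print(json.dumps(record, ensure_ascii=False))  # {"name": "Liège"}
```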
v
Seems like a target bug to me.
`\u00e8ge` is valid JSON.
Try running with target-csv and see what you get!
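A quick way to check that claim (a minimal sketch, standard library only): both spellings decode to the identical Python string, so a consumer that parses its input with `json.loads` should store `Liège` either way.

```python
import json

# '\u00e8' is a JSON escape for 'è'; both inputs are the same string.
assert json.loads('"Li\\u00e8ge"') == "Liège"
assert json.loads('"Liège"') == "Liège"
```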
c
@visch Thanks for your recommendation. I have the same feeling. I'll keep you posted in a second.
@visch Thanks for your idea. Actually there is a bug in the Meltano variant of the Postgres target! @aaronsteers
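For anyone hitting the same thing, here is a hypothetical sketch (not the actual target-postgres code) of the read loop a Singer target is expected to run. Since `json.loads` decodes `\u` escapes, the value reaching the database layer should already be `Liège` regardless of how the tap serialized it:

```python
import json
import sys

# Hypothetical Singer target read loop, for illustration only:
# json.loads decodes \u escapes, so the escaped and unescaped
# forms of 'Liège' arrive here as the same Python string.
for line in sys.stdin:
    message = json.loads(line)
    if message.get("type") == "RECORD":
        name = message["record"].get("name")
        print(name)  # 'Liège', whether or not the tap escaped it
```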