# troubleshooting
r
Hi, I am having the following error when trying to deploy a new version of meltano.yml (changing a couple of schedules, nothing more).
airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the LocalExecutor
The config looks right and we’re using a PostgreSQL database, not a SQLite one.
        - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
          valueFrom:
            secretKeyRef:
              name: airflow
              key: db_connection
        - name: AIRFLOW__CORE__DAGBAG_IMPORT_TIMEOUT
          value: "120"
        - name: AIRFLOW__CORE__EXECUTOR
          value: "LocalExecutor"
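For what it's worth, Airflow raises that exception based purely on the scheme of the resolved `sql_alchemy_conn` value, so if the secret isn't actually mounted in the pod, Airflow silently falls back to its default SQLite URL and the check fails even though the config above looks correct. A simplified sketch of the check (this is an illustration, not Airflow's actual code):

```python
from urllib.parse import urlparse

def check_executor_compat(sql_alchemy_conn: str, executor: str) -> None:
    # Simplified version of Airflow's startup validation: a SQLite
    # backend only supports the SequentialExecutor.
    scheme = urlparse(sql_alchemy_conn).scheme
    if scheme.startswith("sqlite") and executor != "SequentialExecutor":
        raise ValueError(f"error: cannot use sqlite with the {executor}")

# A Postgres DSN passes the check; an unset env var that falls back to
# Airflow's default SQLite URL would not.
check_executor_compat("postgresql://airflow:pw@db:5432/airflow", "LocalExecutor")
```

So it's worth exec-ing into the pod and printing the env var Airflow actually sees, rather than trusting the manifest.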
Now, trying to revert to the last stable Meltano installation, I unfortunately get another error: the database appears to be corrupted.
meltano-tap alembic.script.revision.ResolutionError: No such revision or branch '6828cc5b1a4f'
meltano-tap Cannot upgrade the system database. It might be corrupted or was created before database migrations where introduced (v0.34.0)
meltano-tap (psycopg2.errors.UndefinedTable) relation "job" does not exist
meltano-tap LINE 2: FROM job
Can someone please help me out debugging this? The Meltano version we’re on is
meltano, version 1.104.0
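For context on the ResolutionError: Alembic records the current migration revision in an `alembic_version` table inside the system database. Once a newer Meltano has stamped a revision there, a rolled-back install can no longer resolve it, because that revision's migration script isn't shipped with the older package. A rough sketch of the lookup (revision IDs other than the one from the log are hypothetical):

```python
def resolve_revision(db_revision: str, shipped_revisions: set[str]) -> str:
    # Mimics Alembic's resolution step: the revision recorded in the
    # alembic_version table must exist among the migration scripts
    # shipped with the installed package, or the upgrade aborts.
    if db_revision not in shipped_revisions:
        raise LookupError(f"No such revision or branch {db_revision!r}")
    return db_revision

# Migration scripts shipped with the older install (hypothetical IDs):
old_install = {"rev_aaa", "rev_bbb"}
# The system db was already stamped with '6828cc5b1a4f' (from the log)
# by a newer Meltano, so the rolled-back install cannot resolve it.
```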
I did try to perform a clean re-install of airflow from the pod, running
meltano install orchestrator airflow --clean
and, while this seems to work, if I try to list all existing schedules, I get the following error:
Cannot upgrade the system database. It might be corrupted or was created before database migrations where introduced (v0.34.0)
(psycopg2.errors.UndefinedTable) relation "job" does not exist
LINE 2: FROM job
             ^

[SQL: SELECT job.state AS job_state, job.id AS job_id_1, job.job_id AS job_job_id, job.run_id AS job_run_id, job.started_at AS job_started_at, job.last_heartbeat_at AS job_last_heartbeat_at, job.ended_at AS job_ended_at, job.payload AS job_payload, job.payload_flags AS job_payload_flags, job.trigger AS job_trigger
FROM job
WHERE job.state = %(param_1)s AND (job.last_heartbeat_at IS NOT NULL AND job.last_heartbeat_at < %(last_heartbeat_at_1)s OR job.last_heartbeat_at IS NULL AND job.started_at < %(started_at_1)s)]
[parameters: {'param_1': 'RUNNING', 'last_heartbeat_at_1': datetime.datetime(2022, 12, 19, 11, 45, 39, 860908), 'started_at_1': datetime.datetime(2022, 12, 18, 11, 50, 39, 860908)}]
(Background on this error at: https://sqlalche.me/e/14/f405)
Looks like the airflow db that is created during this clean install is somehow “different” from the version Meltano is looking for, because a field called job.job_id no longer exists (in fact, in the newly created database it is just called id).
c
👋 Hi Rigerta, for the second error, I suspect that the db that's causing issues is Meltano's backend system db rather than Airflow's db (it matches this error that we throw when Meltano fails to apply system db migrations during upgrade). Within the environment where this is running, could you double-check the output of
meltano --version
? One possible source of this issue is that the Meltano version was somehow upgraded, so it attempts to upgrade the system database and that db migration then fails.
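To make that comparison concrete: if the image or environment pulled in a newer Meltano than the project was deployed with (1.104.0 per the thread), the CLI will attempt a system-database migration on startup. A minimal sketch of the comparison (the running version below is a hypothetical example, not a known value from this thread):

```python
def parse_version(v: str) -> tuple[int, ...]:
    # Naive numeric-only parser; sufficient for Meltano's x.y.z scheme.
    return tuple(int(part) for part in v.split("."))

project_version = parse_version("1.104.0")  # version from the thread
running_version = parse_version("1.105.0")  # hypothetical pod version
# A newer CLI than the project expects will try to migrate the system db:
needs_migration = running_version > project_version
```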