# troubleshooting
I am currently deploying Meltano pipelines as Docker containers, running with the UI on Kubernetes.

- Deployment method: Kubernetes
- Runtime: Docker containers
- Source: MySQL
- Target: Snowflake
- Mode: incremental loads

Even after a job has run and loaded a large amount of data into Snowflake, none of that is reflected in the container output. I run jobs via the Meltano UI.

My `meltano.yml`:
```yaml
version: 1
send_anonymous_usage_stats: false
project_id: 7f7551be-a52e-4cbb-9a57-511c77012aa2
plugins:
  extractors:
  - name: acadia_db
    inherit_from: tap-mysql
    variant: transferwise
    pip_url: pipelinewise-tap-mysql
    select:
    - acadia-jhi_user.*
    - acadia-program.*
    - acadia-form.*
    metadata:
      'acadia-jhi_user':
        replication-key: last_modified_date
        replication-method: INCREMENTAL
      'acadia-program':
        replication-key: updated_on
        replication-method: INCREMENTAL
      'acadia-form':
        replication-key: updated_on
        replication-method: INCREMENTAL

  loaders:
  - name: data-warehouse
    inherit_from: target-snowflake
    variant: transferwise
    pip_url: pipelinewise-target-snowflake
    config:
      parallelism: 4
  transformers:
  - name: dbt
    pip_url: 'dbt-core~=1.0.0 dbt-snowflake~=1.0.0'
  files:
  - name: dbt
    pip_url: git+https://gitlab.com/meltano/files-dbt.git@config-version-2
    update:
      transform/profile/profiles.yml: false
schedules:
- name: daily-user-form-consent
  extractor: acadia_db
  loader: data-warehouse
  transform: run
  interval: '@daily'
  start_date: 2022-02-02 20:00:00.028109
```
Below is the container log output; the E/L logs are not printed here:

```
{"app":"vault-env","level":"info","msg":"initial Vault token arrived","time":"2022-02-10T125124Z"}
{"app":"vault-env","level":"info","msg":"spawning process: [./entrypoint.sh]","time":"2022-02-10T125124Z"}
{"app":"vault-env","level":"info","msg":"in daemon mode...","time":"2022-02-10T125124Z"}
{"app":"vault-env","level":"info","msg":"renewed Vault token","time":"2022-02-10T125124Z","ttl":2764800000000000}
{"app":"vault-env","level":"info","msg":"received signal: urgent I/O condition","time":"2022-02-10T125124Z"}
{"app":"vault-env","level":"info","msg":"received signal: urgent I/O condition","time":"2022-02-10T125124Z"}
{"app":"vault-env","level":"info","msg":"received signal: urgent I/O condition","time":"2022-02-10T125124Z"}
*****************************************************************************************
Using Meltano version :: meltano, version 1.94.0
*****************************************************************************************

Meltano pipeline starting via Command = [meltano ui]
2022-02-10T125128.133517Z [info ] Auto-compiling models in '/project/model'
```
Ken has a Helm chart that you should probably look at for inspiration if you're using K8s. Since you already have everything working and are using Meltano as the orchestrator, a quick-and-dirty solution would be to point your log scrapers at
`.meltano/run/elt/*`
I can't remember offhand the exact directory for the log files, but they are in there somewhere. Another approach, without changing the orchestrator, would be to alter the logger.yaml file and route the data you care about to a specific place. 🤷 Lots of options
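A minimal sketch of the log-scraper approach, assuming the Meltano 1.x layout where each run writes its output under `.meltano/run/elt/`; the `/project` root and the exact directory nesting are assumptions here, so verify the real paths inside your container first:

```shell
# Sketch: dump the tail of every per-run ELT log so it lands on the
# container's stdout (and therefore in `kubectl logs`).
# PROJECT_ROOT and the .meltano/run/elt/*/*/elt.log layout are assumptions.
PROJECT_ROOT="${PROJECT_ROOT:-/project}"
for log in "$PROJECT_ROOT"/.meltano/run/elt/*/*/elt.log; do
  [ -f "$log" ] || continue   # glob didn't match: no runs yet
  echo "=== $log ==="
  tail -n 50 "$log"
done
```

Running this periodically, or pointing a sidecar's `tail -F` at the same glob, lets your existing log collector pick up the E/L output without changing the orchestrator.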