# announcements
m
About the Explore option in the Meltano UI: basically, can I also explore data from a custom extractor/target?
d
Yes, but you'll need both transform and model plugins for the data source in question. When you install one of the "known" (not custom) extractors through the UI, you'll see these extra plugins added automatically as well, but for a custom data source you'd have to build and add them yourself: https://www.meltano.com/tutorials/create-custom-transforms-and-models.html As for the loader, only Postgres, SQLite and Snowflake are currently supported, but it should be possible to extend that to other DBs
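(A minimal sketch of what that could look like with the Meltano CLI, assuming a hypothetical custom source called `tap-my-source`; the actual steps for building the transform and model plugins are covered in the tutorial linked above:)

```shell
# Add the custom extractor; Meltano prompts for its pip package and settings.
meltano add --custom extractor tap-my-source

# After building the dbt transforms and .m5o model files per the tutorial,
# register them as custom plugins so the UI can pick them up.
meltano add --custom transform tap-my-source
meltano add --custom model model-my-source
```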
m
But where is the job data saved? In my case I'm saving the data to a CSV file on S3
How does Meltano know where to get the data for the dashboard?
d
@meir.shamay Ah okay, Meltano's Analyze (Explore) UI will only work with data stored in a database: Postgres, Snowflake, or SQLite. If you're using a different target format, Meltano UI can't be used to explore it or create dashboards
I intend to make that more clear: https://gitlab.com/meltano/meltano/-/issues/2163
m
Ok, so I need to create a model for the data and also save it to one of the datastores you mentioned?
d
If you want to use Meltano for analysis, yes (https://meltano.com/docs/analysis.html)
m
I mean, I need to also save the data to Postgres, for example, and set up a new connection to that data source?
d
Yes, exactly 🙂 You'll want to install `target-postgres`, set up a pipeline to use it, add a model plugin (and optionally a transform plugin), and you can use Explore
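(A sketch of those steps with the Meltano CLI; `tap-my-source` is a placeholder for your extractor, and the setting names vary between `target-postgres` variants:)

```shell
# Install and configure the Postgres loader.
meltano add loader target-postgres
meltano config target-postgres set host localhost
meltano config target-postgres set dbname warehouse

# Run a pipeline that loads into Postgres so Explore can query it.
meltano elt tap-my-source target-postgres
```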
But you can also just keep using Meltano UI without the Analysis functionality, depending on your needs
m
Yes, sure, I want to use it with the analysis option too.
What do you mean by install target-postgres? So my pipeline will have two targets?
Or do I need to create another pipeline that downloads the data from S3 and puts it into Postgres?
d
The latter
They'd be separate pipelines from Meltano's perspective, and your data would be downloaded from S3 separately for each
One extractor with multiple loaders is not currently supported, although it is an interesting idea 🙂
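(A sketch of the two separate pipelines, assuming the PipelineWise `tap-s3-csv` extractor to pull the CSV files back out of S3; plugin names are illustrative:)

```shell
# Pipeline 1: the existing job, extracting from the source
# and writing CSV files to S3.
meltano elt tap-my-source target-s3-csv

# Pipeline 2: read those CSV files from S3 and load them
# into Postgres so the Analyze UI can query them.
meltano elt tap-s3-csv target-postgres
```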
m
Ok. I know that on Airflow I can create tasks, where the first task can be the `meltano elt` run, and the second one sends the data to Postgres
d
Would you still be using `meltano elt` with `target-postgres` to load the data into Postgres, then?
m
I can
So I can run two elt tasks that will do that, yes?
d
Yep!
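(With Meltano's Airflow orchestrator, that could look something like the sketch below; each schedule becomes its own generated DAG, so an explicit dependency between the two tasks would need a hand-written DAG. The schedule names and interval are placeholders:)

```shell
# Install the Airflow orchestrator; Meltano generates a DAG per schedule.
meltano add orchestrator airflow

# Schedule both ELT pipelines.
meltano schedule source-to-s3 tap-my-source target-s3-csv @daily
meltano schedule s3-to-postgres tap-s3-csv target-postgres @daily
```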
r
Hi @douwe_maan Is the "One extractor with multiple loaders" feature on the roadmap? The use case we have is extracting from MySQL; our current target is Redshift. Now we want to maintain a parallel operation with S3 as a target while we finish our migration. Thanks.
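(Until that's supported, the separate-pipelines approach described above would apply here too: two schedules sharing the same extractor, with the MySQL data extracted once per pipeline. Names are illustrative:)

```shell
# Two independent pipelines, each extracting from MySQL on its own.
meltano schedule mysql-to-redshift tap-mysql target-redshift @daily
meltano schedule mysql-to-s3 tap-mysql target-s3-csv @daily
```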