If I schedule a large number of pipelines (let's say 300) to all kick off at 8:00 AM, will the scheduler attempt to start them all at the exact same moment? If so, will this cause errors as the scheduler attempts to start too many parallel processes and runs out of resources?
Hi @Ramesh! Prophecy Automate pods only handle orchestration, metadata, and job submission, so all of the actual work (Spark, SQL, BigQuery, Snowflake) happens on your data warehouse compute, not on Prophecy pods.
Each scheduled job results in a lightweight HTTP API call to Databricks or Snowflake, so you wouldn’t have 300 heavy pipelines running inside Prophecy.
The scheduler will evaluate all 300 pipelines at the same scheduled time, but Automate will not execute all of them concurrently in a way that overloads the system. For more information on the architecture of Prophecy Automate, check out this help doc page.
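To make the behavior above concrete, here is a minimal sketch of the pattern being described: 300 schedules fire at once, but each one is only a lightweight submission call handed off through a bounded worker pool, so concurrency stays capped. The `MAX_WORKERS` value and the `submit_job` function are illustrative assumptions, not documented Prophecy internals.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 16  # assumed submission concurrency; not a documented Prophecy value

peak = 0      # highest number of submissions in flight at once
current = 0   # submissions in flight right now
lock = threading.Lock()

def submit_job(job_id: int) -> int:
    """Stand-in for the lightweight HTTP call that hands a job to the warehouse."""
    global peak, current
    with lock:
        current += 1
        peak = max(peak, current)
    # ... a real implementation would POST to the Databricks/Snowflake jobs API here ...
    with lock:
        current -= 1
    return job_id

# All 300 "pipelines" become eligible at the same moment,
# but the pool throttles how many submission calls run concurrently.
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    results = list(pool.map(submit_job, range(300)))

assert len(results) == 300   # every scheduled pipeline was submitted
assert peak <= MAX_WORKERS   # concurrency never exceeded the pool size
```

The point of the sketch is the shape of the workload: since each submission is a quick API call rather than a heavy process, even 300 simultaneous triggers amount to a short burst of throttled HTTP requests.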