Demo · fictional data for showcase purposes · Aimlitics

Pipelines

The data refresh jobs that keep your dashboards current. Most run on a schedule; a few are triggered manually from the admin pages when you make a change that needs to flow through.

Incremental refresh

Pulls the latest Veeva updates and new sales files, then refreshes the data the dashboards read. Runs on a schedule.

global
Last run: 1 day ago · succeeded
Last success: 1 day ago

Managed by Aimlitics ops. Schedule in Fabric workspace.
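"Incremental" here typically means a watermark filter: only rows modified since the last successful run are pulled. A minimal sketch of that idea in plain Python (field names are invented for illustration; the pipeline's actual logic is not shown on this page):

```python
from datetime import datetime

def incremental_pull(records, last_watermark):
    """Return records modified after the previous run's watermark,
    plus the new watermark to persist for the next run."""
    fresh = [r for r in records if r["modified_date"] > last_watermark]
    new_watermark = max((r["modified_date"] for r in fresh), default=last_watermark)
    return fresh, new_watermark

records = [
    {"id": "a", "modified_date": datetime(2026, 4, 26)},
    {"id": "b", "modified_date": datetime(2026, 4, 28)},
]
fresh, watermark = incremental_pull(records, datetime(2026, 4, 27))
# only record "b" is newer than the watermark
```

If a run fails, the watermark is not advanced, so the next scheduled run re-covers the gap.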

Weekly full refresh

Full Veeva re-pull and complete data rebuild. Catches anything the incremental refresh missed (deletes, late updates).

global
Last run:
Last success:

Managed by Aimlitics ops. Schedule in Fabric workspace.
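A full re-pull catches deletes because the new snapshot simply lacks the deleted rows, which an insert/update-only incremental feed can never report. Comparing key sets makes this concrete (toy sketch, not the actual rebuild code):

```python
def diff_snapshots(previous_ids, current_ids):
    """Compare two full-snapshot key sets to find what an
    incremental (insert/update-only) feed would have missed."""
    deleted = previous_ids - current_ids   # rows gone from the source
    added = current_ids - previous_ids     # rows the incremental may also see
    return deleted, added

deleted, added = diff_snapshots({"h1", "h2", "h3"}, {"h1", "h3", "h4"})
# "h2" was deleted upstream; "h4" is new
```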

Delta maintenance

Background storage maintenance — compacts small files and prunes old versions to keep queries fast.

global
Last run:
Last success:

Managed by Aimlitics ops. Schedule in Fabric workspace.
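On Delta tables this kind of maintenance is typically `OPTIMIZE` (compact small files) plus `VACUUM` (prune old versions). Conceptually, compaction bin-packs many small files into fewer files near a target size so queries open fewer objects; a toy sketch of the packing step (illustrative only, not the ops job):

```python
def pack_files(file_sizes_mb, target_mb=128):
    """Greedily group small files into compaction bins of roughly target size."""
    bins, current, current_size = [], [], 0
    for size in sorted(file_sizes_mb, reverse=True):
        if current and current_size + size > target_mb:
            bins.append(current)       # close this bin, start a new one
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        bins.append(current)
    return bins

bins = pack_files([100, 40, 30, 20, 10, 5], target_mb=128)
# six small files collapse into two output files
```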

Mapping propagate

Pushes Veeva mapping changes through to sales attribution. Triggered from the Mappings page after you save a mapping.

tenant
Last run:
Last success:

Trigger from /admin/mappings.
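Propagation is essentially a re-keying pass: the saved mapping is applied to sales records so attribution reflects the new assignments. A hypothetical sketch (field names invented for illustration):

```python
def apply_mapping(sales_rows, territory_mapping):
    """Re-attribute sales rows using the latest saved territory mapping.
    Rows with no mapping entry keep their existing territory."""
    return [
        {**row, "territory_id": territory_mapping.get(row["account_id"], row["territory_id"])}
        for row in sales_rows
    ]

rows = [
    {"account_id": "acc1", "territory_id": "T-old"},
    {"account_id": "acc2", "territory_id": "T-keep"},
]
updated = apply_mapping(rows, {"acc1": "T-new"})
# acc1 moves to T-new; acc2 is untouched
```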

Recent runs

Last 39 runs across all pipelines visible to you.

Pipeline · Scope · Status · Started · Duration · By · Detail
Incremental refresh · global · succeeded · 1 day ago · 15m 15s · schedule
OK in 916.1s across 30 steps
{
  "veeva_incremental_ingest": {
    "status": "ok",
    "duration_s": 97.6,
    "exit_value": ""
  },
  "sftp_ingest": {
    "status": "ok",
    "duration_s": 15.7,
    "exit_value": ""
  },
  "email_ingest": {
    "status": "ok",
    "duration_s": 27.7,
    "exit_value": ""
  },
  "config_sync": {
    "status": "ok",
    "duration_s": 102.8,
    "exit_value": ""
  },
  "goals_sync": {
    "status": "ok",
    "duration_s": 21.8,
    "exit_value": ""
  },
  "silver_picklist_build": {
    "status": "ok",
    "duration_s": 27.7,
    "exit_value": ""
  },
  "silver_hcp_build": {
    "status": "ok",
    "duration_s": 42.8,
    "exit_value": ""
  },
  "silver_hco_build": {
    "status": "ok",
    "duration_s": 41.5,
    "exit_value": ""
  },
  "silver_user_build": {
    "status": "ok",
    "duration_s": 27.7,
    "exit_value": ""
  },
  "silver_territory_build": {
    "status": "ok",
    "duration_s": 24.7,
    "exit_value": ""
  },
  "silver_account_territory_build": {
    "status": "ok",
    "duration_s": 33.7,
    "exit_value": ""
  },
  "silver_user_territory_build": {
    "status": "ok",
    "duration_s": 24.7,
    "exit_value": ""
  },
  "silver_call_build": {
    "status": "ok",
    "duration_s": 30.7,
    "exit_value": ""
  },
  "silver_sale_build": {
    "status": "ok",
    "duration_s": 27.8,
    "exit_value": ""
  },
  "silver_account_xref_build": {
    "status": "ok",
    "duration_s": 24.7,
    "exit_value": ""
  },
  "silver_hcp_attribute_build": {
    "status": "ok",
    "duration_s": 24.7,
    "exit_value": ""
  },
  "silver_hco_attribute_build": {
    "status": "ok",
    "duration_s": 18.8,
    "exit_value": "no_config"
  },
  "gold_dim_date_build": {
    "status": "ok",
    "duration_s": 30.8,
    "exit_value": ""
  },
  "gold_dim_hcp_build": {
    "status": "ok",
    "duration_s": 25,
    "exit_value": ""
  },
  "gold_dim_hco_build": {
    "status": "ok",
    "duration_s": 18.7,
    "exit_value": ""
  },
  "gold_dim_user_build": {
    "status": "ok",
    "duration_s": 18.7,
    "exit_value": ""
  },
  "gold_dim_account_build": {
    "status": "ok",
    "duration_s": 18.7,
    "exit_value": ""
  },
  "gold_dim_territory_build": {
    "status": "ok",
    "duration_s": 21.6,
    "exit_value": ""
  },
  "gold_bridge_account_territory_build": {
    "status": "ok",
    "duration_s": 21.6,
    "exit_value": ""
  },
  "gold_fact_call_build": {
    "status": "ok",
    "duration_s": 27.7,
    "exit_value": ""
  },
  "gold_fact_sale_build": {
    "status": "ok",
    "duration_s": 27.7,
    "exit_value": ""
  },
  "gold_dim_hcp_attribute_build": {
    "status": "ok",
    "duration_s": 24.7,
    "exit_value": ""
  },
  "gold_dim_hco_attribute_build": {
    "status": "ok",
    "duration_s": 18.6,
    "exit_value": ""
  },
  "gold_dim_hcp_score_wide_build": {
    "status": "ok",
    "duration_s": 21.6,
    "exit_value": ""
  },
  "gold_hcp_target_score_build": {
    "status": "ok",
    "duration_s": 24.6,
    "exit_value": ""
  }
}
Incremental refresh · global · failed · 1 day ago · 6m 4s · schedule
FAILED after 365.4s
Py4JJavaError: An error occurred while calling o5298.throwExceptionIfHave.
: com.microsoft.spark.notebook.msutils.NotebookExecutionException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `silver_table` cannot be resolved. Did you mean one of the following? [`enabled`, `feed_name`, `id`, `tenant_id`, `updated_at`].; line 2 pos 33;
'Project [tenant_id#136585, feed_name#136586, 'silver_table, 'column_mapping, 'sample_headers, 'mapping_status]
+- Filter (enabled#136589 = true)
   +- SubqueryAlias spark_catalog.chimcobldhq2ajjfehim4rrfddpj89bkd1p6utb7d1m6irj5btm62qr5d1nnasr54lhmurj6d5jg.tenant_email_drop
      +- Relation spark_catalog.chimcobldhq2ajjfehim4rrfddpj89bkd1p6utb7d1m6irj5btm62qr5d1nnasr54lhmurj6d5jg.tenant_email_drop[id#136584,tenant_id#136585,feed_name#136586,source_address#136587,subject_pattern#136588,enabled#136589,updated_at#136590] parquet

---------------------------------------------------------------------------
AnalysisException                         Traceback (most recent call last)
Cell In[3], line 126
    120 print(f"Tenants to scan: {[t['slug'] for t in tenants] or '(none)'}")
    122 # %%
    123 # Configured email drops, keyed by (tenant_id, feed_name). Only drops
    124 # with mapping_status='configured' get processed; others get logged as
    125 # skipped so admin sees them in the ops log.
--> 126 drop_rows = spark.sql("""
    127     SELECT tenant_id, feed_name, silver_table, column_mapping,
    128            sample_headers, mapping_status
    129     FROM config.tenant_email_drop
    130     WHERE enabled = true
    131 """).collect()
    133 drops_by_key: dict[tuple[str, str], dict] = {}
    134 for r in drop_rows:
File /opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py:1631, in SparkSession.sql(self, sqlQuery, args, **kwargs)
   1627         assert self._jvm is not None
   1628         litArgs = self._jvm.PythonUtils.toArray(
   1629             [_to_java_column(lit(v)) for v in (args or [])]
   1
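The failure above is an UNRESOLVED_COLUMN error: the query asks for `silver_table`, a column the `config.tenant_email_drop` relation does not have (the available columns are listed in the error). One defensive pattern is validating requested columns against the schema before building the query, so the failure is a readable message rather than a deep Spark analysis error; a hypothetical sketch:

```python
def build_select(table, requested, schema_columns):
    """Build a SELECT only from columns that actually exist,
    failing fast instead of surfacing a Spark AnalysisException."""
    missing = [c for c in requested if c not in schema_columns]
    if missing:
        raise ValueError(
            f"{table} has no column(s) {missing}; available: {sorted(schema_columns)}"
        )
    return f"SELECT {', '.join(requested)} FROM {table}"

schema = {"id", "tenant_id", "feed_name", "enabled", "updated_at"}
sql = build_select("config.tenant_email_drop", ["tenant_id", "feed_name"], schema)
# requesting 'silver_table' would raise before Spark ever sees the query
```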
Incremental refresh · tenant · succeeded · 1 day ago · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2 days ago · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 3 days ago · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 4 days ago · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · global · succeeded · 5 days ago · 21m 40s · schedule
OK in 1301.1s across 29 steps
{
  "veeva_incremental_ingest": {
    "status": "ok",
    "duration_s": 524.3,
    "exit_value": ""
  },
  "sftp_ingest": {
    "status": "ok",
    "duration_s": 15.9,
    "exit_value": ""
  },
  "config_sync": {
    "status": "ok",
    "duration_s": 63.7,
    "exit_value": ""
  },
  "goals_sync": {
    "status": "ok",
    "duration_s": 25,
    "exit_value": ""
  },
  "silver_picklist_build": {
    "status": "ok",
    "duration_s": 27.7,
    "exit_value": ""
  },
  "silver_hcp_build": {
    "status": "ok",
    "duration_s": 34,
    "exit_value": ""
  },
  "silver_hco_build": {
    "status": "ok",
    "duration_s": 39.7,
    "exit_value": ""
  },
  "silver_user_build": {
    "status": "ok",
    "duration_s": 30.8,
    "exit_value": ""
  },
  "silver_territory_build": {
    "status": "ok",
    "duration_s": 24.6,
    "exit_value": ""
  },
  "silver_account_territory_build": {
    "status": "ok",
    "duration_s": 33.8,
    "exit_value": ""
  },
  "silver_user_territory_build": {
    "status": "ok",
    "duration_s": 28.9,
    "exit_value": ""
  },
  "silver_call_build": {
    "status": "ok",
    "duration_s": 34,
    "exit_value": ""
  },
  "silver_sale_build": {
    "status": "ok",
    "duration_s": 27.7,
    "exit_value": ""
  },
  "silver_account_xref_build": {
    "status": "ok",
    "duration_s": 27.8,
    "exit_value": ""
  },
  "silver_hcp_attribute_build": {
    "status": "ok",
    "duration_s": 27.7,
    "exit_value": ""
  },
  "silver_hco_attribute_build": {
    "status": "ok",
    "duration_s": 18.9,
    "exit_value": "no_config"
  },
  "gold_dim_date_build": {
    "status": "ok",
    "duration_s": 21.7,
    "exit_value": ""
  },
  "gold_dim_hcp_build": {
    "status": "ok",
    "duration_s": 24.6,
    "exit_value": ""
  },
  "gold_dim_hco_build": {
    "status": "ok",
    "duration_s": 18.7,
    "exit_value": ""
  },
  "gold_dim_user_build": {
    "status": "ok",
    "duration_s": 18.7,
    "exit_value": ""
  },
  "gold_dim_account_build": {
    "status": "ok",
    "duration_s": 18.7,
    "exit_value": ""
  },
  "gold_dim_territory_build": {
    "status": "ok",
    "duration_s": 30.9,
    "exit_value": ""
  },
  "gold_bridge_account_territory_build": {
    "status": "ok",
    "duration_s": 27.8,
    "exit_value": ""
  },
  "gold_fact_call_build": {
    "status": "ok",
    "duration_s": 27.8,
    "exit_value": ""
  },
  "gold_fact_sale_build": {
    "status": "ok",
    "duration_s": 30.9,
    "exit_value": ""
  },
  "gold_dim_hcp_attribute_build": {
    "status": "ok",
    "duration_s": 24.8,
    "exit_value": ""
  },
  "gold_dim_hco_attribute_build": {
    "status": "ok",
    "duration_s": 21.9,
    "exit_value": ""
  },
  "gold_dim_hcp_score_wide_build": {
    "status": "ok",
    "duration_s": 21.6,
    "exit_value": ""
  },
  "gold_hcp_target_score_build": {
    "status": "ok",
    "duration_s": 27.6,
    "exit_value": ""
  }
}
Incremental refresh · tenant · succeeded · 5 days ago · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · global · failed · 5 days ago · 14m 4s · schedule
FAILED after 844.9s
Py4JJavaError: An error occurred while calling o5505.throwExceptionIfHave.
: com.microsoft.spark.notebook.msutils.NotebookExecutionException: [SCALAR_SUBQUERY_IS_IN_GROUP_BY_OR_AGGREGATE_FUNCTION] The correlated scalar subquery '"scalarsubquery(tenant_id)"' is neither present in GROUP BY, nor in an aggregate function. Add it to GROUP BY using ordinal position or wrap it in `first()` (or `first_value`) if you don't care which value you get.; line 6 pos 4;
Sort [tenant_id#128478 ASC NULLS FIRST], true
+- Aggregate [tenant_id#128478], [tenant_id#128478, count(1) AS gold_rows#128470L, count(distinct hcp_key#128477) AS hcps_with_attrs#128471L, scalar-subquery#128472 [tenant_id#128478] AS silver_rows#128473L, round(((100.0 * cast(count(1) as decimal(20,0))) / cast(nullif(scalar-subquery#128474 [tenant_id#128478], 0) as decimal(20,0))), 1) AS pct_silver_resolved_to_dim#128475]
   :  :- Aggregate [count(1) AS count(1)#128502L]
   :  :  +- Filter (tenant_id#128491 = outer(tenant_id#128478))
   :  :     +- SubqueryAlias s
   :  :        +- SubqueryAlias spark_catalog.chimcobldhq2ajjfehim4rrfddpj89bkd1p6utb7d1m6irj5btm62qr5d1nnasr54lpmir3mclp0.hcp_attribute
   :  :           +- Relation spark_catalog.chimcobldhq2ajjfehim4rrfddpj89bkd1p6utb7d1m6irj5btm62qr5d1nnasr54lpmir3mclp0.hcp_attribute[tenant_id#128491,hcp_id#128492,attribute_name#128493,attribute_value#128494,attribute_type#128495,source_system#128496,source_label#128497,scope_tag#128498,valid_as_of#128499,silver_built_at#128500] parquet
   :  :- Aggregate [count(1) AS count(1)#128515L]
   :  :  +- Filter (tenant_id#128504 = outer(tenant_id#128478))
   :  :     +- SubqueryAlias s
   :  :        +- SubqueryAlias spark_catalog.chimcobldhq2ajjfehim4rrfddpj89bkd1p6utb7d1m6irj5btm62qr5d1nnasr54lpmir3mclp0.hcp_attribute
   :  :           +- Relation spark_catalog.chimcobldhq2ajjfehim4rrfddpj89bkd1p6utb7d1m6irj5btm62qr5d1nnasr54lpmir3mclp0.hcp_attribute[tenant_id#128504,hcp_id#128505,attribute_name#128506,attribute_value#128507,
Incremental refresh · global · failed · 5 days ago · 12m 45s · schedule
FAILED after 765.8s
Py4JJavaError: An error occurred while calling o5406.throwExceptionIfHave.
: com.microsoft.spark.notebook.msutils.NotebookExecutionException: 
[PARSE_SYNTAX_ERROR] Syntax error at or near '('.(line 41, pos 7)

== SQL ==
(
WITH ranked AS (
  SELECT *,
    ROW_NUMBER() OVER (
      PARTITION BY id
      ORDER BY modified_date__v DESC NULLS LAST, _ingested_at DESC
    ) AS _rn
  FROM bronze_acme_pharma.veeva_obj_account__v
  WHERE ispersonaccount__v = 'true'
),
deduped AS (
  SELECT * FROM ranked WHERE _rn = 1
)
SELECT
  '3b422d2b-d883-4d75-981d-5cd77c6c932d' AS tenant_id,
  CAST(deduped.`id` AS STRING) AS hcp_id,
  attribute_name,
  attribute_value,
  attribute_type,
  'veeva' AS source_system,
  source_label,
  scope_tag,
  CAST(NULL AS DATE) AS valid_as_of,
  current_timestamp() AS silver_built_at
FROM deduped
LATERAL VIEW stack(14,
      'cervical_cancer_patients_under_40', CAST(deduped.`fen_2024_cervical_cancer_patients_039__c` AS STRING), 'volume', 'komodo', 'cervical_cancer',
      'cisplatin_patients_under_40', CAST(deduped.`fen_2024_cisplatin_patients_age_039__c` AS STRING), 'volume', 'komodo', 'cisplatin',
      'cisplatin_patients_all', CAST(deduped.`fen_2024_cisplatin_patients_all_ages__c` AS STRING), 'volume', 'komodo', 'cisplatin',
      'head_neck_cancer_patients_under_40', CAST(deduped.`fen_2024_head_neck_cancer_patients_039__c` AS STRING), 'volume', 'komodo', 'head_neck_cancer',
      'head_neck_cancer_patients_all', CAST(deduped.`fen_2024_head_neck_cancer_patients_all__c` AS STRING), 'volume', 'komodo', 'head_neck_cancer',
      'other_cancer_patients_under_40', CAST(deduped.`fen_2024_other_cancer_patients_039__c` AS STRING), 'volume', 'komodo', 'other_cancer',
      'other_cancer_patients_all', CAST(deduped.`fen_2024_other_cancer_patients_all_ages__c` AS STRING), 'volume', 'komodo', 'other_cancer',
      'ovarian_cancer_patients_under_40', CAST(deduped.`fen_2024_ovarian_cancer_patients_039__c` AS STRING), 'volume', 'komodo', 'ovarian_cancer',
      '
Incremental refresh · global · failed · 5 days ago · 15m 46s · schedule
FAILED after 947.2s
Py4JJavaError: An error occurred while calling o5280.throwExceptionIfHave.
: com.microsoft.spark.notebook.msutils.NotebookExecutionException: Timeout when exe cell - 10 in notebook veeva_incremental_ingest, code length = 788 and it costs 900.001s. You can set timeout parameter to mitigate the issue. Please check the doc https://go.microsoft.com/fwlink/?linkid=2152237#notebook-utilities for details.You can check driver log or snapshot for detailed error info! See how to check logs: https://go.microsoft.com/fwlink/?linkid=2157243 .
	at com.microsoft.spark.notebook.workflow.JobSessionClient.runCell(JobSessionClient.scala:456)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.$anonfun$run$4(JobSessionClient.scala:191)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.$anonfun$run$4$adapted(JobSessionClient.scala:174)
	at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.run(JobSessionClient.scala:174)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl._run(MSNotebookUtilsImpl.scala:162)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl.runWithRetry(MSNotebookUtilsImpl.scala:338)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl.$anonfun$buildDAG$2(MSNotebookUtilsImpl.scala:516)
	at com.microsoft.spark.notebook.common.SimpleDAG.$anonfun$executeJob$1(SimpleDAG.scala:311)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
	at scala.util.Success.$anonfun$map$1(Try.scala:255)
	at scala.util.Success.map(Try.scala:213)
	at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(P
Incremental refresh · global · failed · 5 days ago · 16m 39s · schedule
FAILED after 999.6s
Py4JJavaError: An error occurred while calling o5280.throwExceptionIfHave.
: com.microsoft.spark.notebook.msutils.NotebookExecutionException: Timeout when exe cell - 10 in notebook veeva_incremental_ingest, code length = 788 and it costs 900.001s. You can set timeout parameter to mitigate the issue. Please check the doc https://go.microsoft.com/fwlink/?linkid=2152237#notebook-utilities for details.You can check driver log or snapshot for detailed error info! See how to check logs: https://go.microsoft.com/fwlink/?linkid=2157243 .
	at com.microsoft.spark.notebook.workflow.JobSessionClient.runCell(JobSessionClient.scala:456)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.$anonfun$run$4(JobSessionClient.scala:191)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.$anonfun$run$4$adapted(JobSessionClient.scala:174)
	at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.run(JobSessionClient.scala:174)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl._run(MSNotebookUtilsImpl.scala:162)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl.runWithRetry(MSNotebookUtilsImpl.scala:338)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl.$anonfun$buildDAG$2(MSNotebookUtilsImpl.scala:516)
	at com.microsoft.spark.notebook.common.SimpleDAG.$anonfun$executeJob$1(SimpleDAG.scala:311)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
	at scala.util.Success.$anonfun$map$1(Try.scala:255)
	at scala.util.Success.map(Try.scala:213)
	at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(P
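The two timeout failures above are the same cell in `veeva_incremental_ingest` exceeding the 900 s per-cell budget; the error text itself suggests raising the timeout parameter. Independent of that setting, a generic retry-with-backoff wrapper illustrates how schedulers often absorb transient timeouts (plain Python sketch, not the Fabric API):

```python
import time

def run_with_retry(step, attempts=3, backoff_s=0.0):
    """Run a pipeline step, retrying transient timeouts with a
    growing backoff; re-raise the last error once attempts run out."""
    last_err = None
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except TimeoutError as err:
            last_err = err
            time.sleep(backoff_s * attempt)
    raise last_err

calls = []
def flaky_step():
    calls.append(1)
    if len(calls) < 3:
        raise TimeoutError("cell exceeded budget")
    return "ok"

result = run_with_retry(flaky_step)
# succeeds on the third attempt
```

Retries only help when the timeout is transient (slow source, cold cluster); a cell that deterministically needs more than the budget requires a larger timeout or a smaller unit of work.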
Incremental refresh · tenant · succeeded · 6 days ago · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-27 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · global · succeeded · 2026-04-27 · 21m 34s · schedule
OK in 1294.9s across 23 steps
{
  "veeva_incremental_ingest": {
    "status": "ok",
    "duration_s": 710.8,
    "exit_value": ""
  },
  "sftp_ingest": {
    "status": "ok",
    "duration_s": 15.9,
    "exit_value": ""
  },
  "config_sync": {
    "status": "ok",
    "duration_s": 54.8,
    "exit_value": ""
  },
  "goals_sync": {
    "status": "ok",
    "duration_s": 21.6,
    "exit_value": ""
  },
  "silver_picklist_build": {
    "status": "ok",
    "duration_s": 24.7,
    "exit_value": ""
  },
  "silver_hcp_build": {
    "status": "ok",
    "duration_s": 33.8,
    "exit_value": ""
  },
  "silver_hco_build": {
    "status": "ok",
    "duration_s": 36.7,
    "exit_value": ""
  },
  "silver_user_build": {
    "status": "ok",
    "duration_s": 27.6,
    "exit_value": ""
  },
  "silver_territory_build": {
    "status": "ok",
    "duration_s": 21.7,
    "exit_value": ""
  },
  "silver_account_territory_build": {
    "status": "ok",
    "duration_s": 33.7,
    "exit_value": ""
  },
  "silver_user_territory_build": {
    "status": "ok",
    "duration_s": 24.9,
    "exit_value": ""
  },
  "silver_call_build": {
    "status": "ok",
    "duration_s": 28.8,
    "exit_value": ""
  },
  "silver_sale_build": {
    "status": "ok",
    "duration_s": 25.1,
    "exit_value": ""
  },
  "silver_account_xref_build": {
    "status": "ok",
    "duration_s": 24.8,
    "exit_value": ""
  },
  "gold_dim_date_build": {
    "status": "ok",
    "duration_s": 21.7,
    "exit_value": ""
  },
  "gold_dim_hcp_build": {
    "status": "ok",
    "duration_s": 22.2,
    "exit_value": ""
  },
  "gold_dim_hco_build": {
    "status": "ok",
    "duration_s": 18.7,
    "exit_value": ""
  },
  "gold_dim_user_build": {
    "status": "ok",
    "duration_s": 21.7,
    "exit_value": ""
  },
  "gold_dim_account_build": {
    "status": "ok",
    "duration_s": 18.8,
    "exit_value": ""
  },
  "gold_dim_territory_build": {
    "status": "ok",
    "duration_s": 21.7,
    "exit_value": ""
  },
  "gold_bridge_account_territory_build": {
    "status": "ok",
    "duration_s": 25.9,
    "exit_value": ""
  },
  "gold_fact_call_build": {
    "status": "ok",
    "duration_s": 27.8,
    "exit_value": ""
  },
  "gold_fact_sale_build": {
    "status": "ok",
    "duration_s": 30.6,
    "exit_value": ""
  }
}
Incremental refresh · global · failed · 2026-04-27 · 16m 44s · schedule
FAILED after 1003.9s
Py4JJavaError: An error occurred while calling o5211.throwExceptionIfHave.
: com.microsoft.spark.notebook.msutils.NotebookExecutionException: Timeout when exe cell - 10 in notebook veeva_incremental_ingest, code length = 788 and it costs 900.001s. You can set timeout parameter to mitigate the issue. Please check the doc https://go.microsoft.com/fwlink/?linkid=2152237#notebook-utilities for details.You can check driver log or snapshot for detailed error info! See how to check logs: https://go.microsoft.com/fwlink/?linkid=2157243 .
	at com.microsoft.spark.notebook.workflow.JobSessionClient.runCell(JobSessionClient.scala:450)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.$anonfun$run$4(JobSessionClient.scala:191)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.$anonfun$run$4$adapted(JobSessionClient.scala:174)
	at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
	at com.microsoft.spark.notebook.workflow.JobSessionClient.run(JobSessionClient.scala:174)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl._run(MSNotebookUtilsImpl.scala:162)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl.runWithRetry(MSNotebookUtilsImpl.scala:338)
	at com.microsoft.spark.notebook.msutils.impl.MSNotebookUtilsImpl.$anonfun$buildDAG$2(MSNotebookUtilsImpl.scala:516)
	at com.microsoft.spark.notebook.common.SimpleDAG.$anonfun$executeJob$1(SimpleDAG.scala:311)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
	at scala.util.Success.$anonfun$map$1(Try.scala:255)
	at scala.util.Success.map(Try.scala:213)
	at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(P
Incremental refresh · tenant · succeeded · 2026-04-26 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-25 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-24 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-23 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-22 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-21 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-20 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-19 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-18 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-17 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-16 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-15 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-14 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-13 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-12 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-11 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-10 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · failed · 2026-04-09 · 13m 42s · schedule · step silver_sale_build failed
Incremental refresh · tenant · succeeded · 2026-04-08 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-07 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-06 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-05 · 13m 42s · schedule · OK in 822s across 5 steps
Incremental refresh · tenant · succeeded · 2026-04-04 · 13m 42s · schedule · OK in 822s across 5 steps