diff --git a/metadata-integration/java/spark-lineage-legacy/README.md b/metadata-integration/java/spark-lineage-legacy/README.md
index d0120d4c6a3f1..c163fe485c74c 100644
--- a/metadata-integration/java/spark-lineage-legacy/README.md
+++ b/metadata-integration/java/spark-lineage-legacy/README.md
@@ -96,9 +96,10 @@ The Spark agent can be configured using Databricks Cluster [Spark configuration]
 - Open Databricks Cluster configuration page. Click the **Advanced Options** toggle. Click the **Spark** tab. Add below configurations under `Spark Config`.

   ```text
-  spark.extraListeners datahub.spark.DatahubSparkListener
-  spark.datahub.rest.server http://localhost:8080
-  spark.datahub.databricks.cluster cluster-name
+  spark.extraListeners datahub.spark.DatahubSparkListener
+  spark.datahub.rest.server http://localhost:8080
+  spark.datahub.stage_metadata_coalescing true
+  spark.datahub.databricks.cluster cluster-name
   ```

 - Click the **Init Scripts** tab. Set cluster init script as `dbfs:/datahub/init.sh`.