Lineage system tables reference
These system tables are in Public Preview. To access them, the schema must be enabled in your `system` catalog. For more information, see Enable system table schemas.
This page includes a reference for the two lineage system tables. These system tables build on Unity Catalog's data lineage feature, allowing you to programmatically query lineage data to fuel decision making and reports.
Both lineage tables represent a subset of all read/write events, as it is not always possible to capture lineage. Records are only emitted when lineage can be inferred.
Table lineage table
The table lineage system table includes a record for each read or write event on a Unity Catalog table or path. This includes, but is not limited to, job runs, notebook runs, and dashboards updated with the read or write event.
Table path: This system table is located at `system.access.table_lineage`.
Table lineage schema
The table lineage system table uses the following schema.
Column name | Data type | Description | Example |
---|---|---|---|
`account_id` | string | The ID of the Databricks account. | |
`metastore_id` | string | The ID of the Unity Catalog metastore. | |
`source_table_full_name` | string | Three-part name to identify the source table. | |
`source_table_catalog` | string | The catalog of the source table. | |
`source_table_schema` | string | The schema of the source table. | |
`source_table_name` | string | The name of the source table. | |
`source_path` | string | Location in cloud storage of the source table, or the path if it's reading from cloud storage directly. | |
`source_type` | string | The type of the source. The value is `TABLE`, `PATH`, `VIEW`, or `STREAMING_TABLE`. | |
`target_table_full_name` | string | Three-part name to identify the target table. | |
`target_table_catalog` | string | The catalog of the target table. | |
`target_table_schema` | string | The schema of the target table. | |
`target_table_name` | string | The name of the target table. | |
`target_path` | string | Location in cloud storage of the target table. | |
`target_type` | string | The type of the target. The value is `TABLE`, `PATH`, `VIEW`, or `STREAMING_TABLE`. | |
`created_by` | string | The user who generated this lineage. This can be a Databricks username, a Databricks service principal ID, "System-User", or `null` if the user information cannot be captured. | |
`event_time` | timestamp | The timestamp when the lineage was generated. Timezone information is recorded at the end of the value, with `+00:00` representing UTC. | |
`event_date` | date | The date when the lineage was generated. This is a partitioned column. | |
`record_id` | string | The unique ID of the lineage record. This value is auto-generated and cannot be joined with any tables. | |
`event_id` | string | The unique ID for a single lineage event. Multiple rows may share the same `event_id` when a single event has multiple sources or targets. | |
`statement_id` | string | The unique ID of the query statement that generated the lineage event. This is a foreign key you can use to join with the query history system table. The value is only set when the query is run from a SQL warehouse. | |
`entity_metadata` | struct | Metadata about the entity responsible for the lineage event. | See Entity metadata reference |
The columns `entity_type`, `entity_run_id`, and `entity_id` have been deprecated. For a complete picture of the entities involved in a lineage record, use the `entity_metadata` column.
The columns entity_type
, entity_run_id
, and entity_id
have been deprecated. For a complete picture of the entities involved in the lineage record, use the entity_metadata
column.
Column lineage table
The column lineage table does not include events that do not have a source. For example, if you insert into a column using explicit values, it is not captured. If you read a column, it is captured whether or not you write the output. Column lineage is not supported for DLT.
Table path: This system table is located at `system.access.column_lineage`.
Column lineage schema
The column lineage system table uses the following schema:
Column name | Data type | Description | Example |
---|---|---|---|
`account_id` | string | The ID of the Databricks account. | |
`metastore_id` | string | The ID of the Unity Catalog metastore. | |
`workspace_id` | string | The ID of the workspace. | |
`source_table_full_name` | string | Three-part name to identify the source table. | |
`source_table_catalog` | string | The catalog of the source table. | |
`source_table_schema` | string | The schema of the source table. | |
`source_table_name` | string | The name of the source table. | |
`source_path` | string | Location in cloud storage of the source table, or the path if it's reading from cloud storage directly. | |
`source_type` | string | The type of the source. The value is `TABLE`, `PATH`, `VIEW`, or `STREAMING_TABLE`. | |
`source_column_name` | string | The name of the source column. | |
`target_table_full_name` | string | Three-part name to identify the target table. | |
`target_table_catalog` | string | The catalog of the target table. | |
`target_table_schema` | string | The schema of the target table. | |
`target_table_name` | string | The name of the target table. | |
`target_path` | string | Location in cloud storage of the target table. | |
`target_type` | string | The type of the target. The value is `TABLE`, `PATH`, `VIEW`, or `STREAMING_TABLE`. | |
`target_column_name` | string | The name of the target column. | |
`created_by` | string | The user who generated this lineage. This can be a Databricks username, a Databricks service principal ID, "System-User", or `null` if the user information cannot be captured. | |
`event_time` | timestamp | The timestamp when the lineage was generated. Timezone information is recorded at the end of the value, with `+00:00` representing UTC. | |
`event_date` | date | The date when the lineage was generated. This is a partitioned column. | |
`record_id` | string | The unique ID of the lineage record. This value is auto-generated and cannot be joined with any tables. | |
`event_id` | string | The unique ID for a single lineage event. Multiple rows may share the same `event_id` when a single event has multiple sources or targets. | |
`statement_id` | string | The unique ID of the query statement that generated the lineage event. This is a foreign key you can use to join with the query history system table. The value is only set when the query is run from a SQL warehouse. | |
`entity_metadata` | struct | Metadata about the entity responsible for the lineage event. | See Entity metadata reference |
The columns `entity_type`, `entity_run_id`, and `entity_id` have been deprecated. For a complete picture of the entities involved in a lineage record, use the `entity_metadata` column.
Reading lineage system tables
- To determine whether an event was a read or a write, check the source type and the target type:
  - Read-only: the source type is not null and the target type is null.
  - Write-only: the target type is not null and the source type is null.
  - Read and write: both the source type and the target type are not null.
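The classification above can be sketched as a small helper function. This is illustrative only: the function name and the `read`/`write`/`read_write` labels are not part of the system tables, and `None` stands in for a SQL `null` in the `source_type` and `target_type` columns.

```python
def classify_lineage_event(source_type, target_type):
    """Classify a lineage record as a read, a write, or both.

    source_type and target_type mirror the source_type and target_type
    columns of the lineage tables; None models a SQL null.
    """
    if source_type is not None and target_type is None:
        return "read"          # read-only event
    if source_type is None and target_type is not None:
        return "write"         # write-only event
    if source_type is not None and target_type is not None:
        return "read_write"    # event that both reads and writes
    return "unknown"           # neither side captured
```

For example, a record with `source_type = 'TABLE'` and a null `target_type` classifies as a read-only event.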
Entity metadata reference
The `entity_metadata` struct has the following schema:

job_info:
  job_id: "string"
  job_run_id: "string"
dashboard_id: "string"
legacy_dashboard_id: "string"
notebook_id: "string"
sql_query_id: "string"
dlt_pipeline_info:
  dlt_pipeline_id: "string"
  dlt_update_id: "string"
Databricks records lineage events from DLT, notebooks, jobs, Databricks SQL queries, and dashboards. Events from other entities aren't supported.
Multiple values can be populated depending on the event type. For example, a job running a notebook task would populate both `job_info` and `notebook_id`.
If all the values in `entity_metadata` are `null`, no Databricks entity was involved in the event. For example, the record could be the result of a JDBC query or of a user clicking the Sample Data tab in the Databricks UI.
Lineage system table example
As an example of how lineage is recorded in system tables, here is an example query followed by the lineage records the query creates:
```sql
CREATE OR REPLACE TABLE car_features
AS SELECT *, in1 + in2 AS premium_feature_set
FROM car_features_exterior
JOIN car_features_interior
USING (id, model);
```
The records in `system.access.table_lineage` would look like the following. Only an illustrative subset of columns is shown, and the `main.default` catalog and schema are assumed for the example tables:

source_table_full_name | target_table_full_name | source_type | target_type |
---|---|---|---|
main.default.car_features_exterior | main.default.car_features | TABLE | TABLE |
main.default.car_features_interior | main.default.car_features | TABLE | TABLE |
The records in `system.access.column_lineage` would look like the following. Again, only an illustrative subset of columns is shown, assuming `in1` comes from `car_features_exterior` and `in2` from `car_features_interior`:

source_table_full_name | source_column_name | target_table_full_name | target_column_name |
---|---|---|---|
main.default.car_features_exterior | in1 | main.default.car_features | premium_feature_set |
main.default.car_features_interior | in2 | main.default.car_features | premium_feature_set |
Not all lineage columns are shown in the example above. For the full schema, see the lineage schemas earlier on this page.
Troubleshooting external table queries
When you reference an external table using its cloud storage path, the associated lineage record includes only the path name, not the table name. For example, the lineage record for the following query would include the path name and not the table name:

```sql
SELECT * FROM delta.`s3://mybucket/table1`;
```
If you are trying to query lineage records for an external table referenced by path, you must filter the query using `source_path` or `target_path` instead of `source_table_full_name` or `target_table_full_name`. For example, the following query pulls all lineage records for an external table:
```sql
SELECT *
FROM system.access.table_lineage
WHERE
  source_path = "s3://mybucket/table1" OR
  target_path = "s3://mybucket/table1";
```
Example: Retrieve lineage records based on external table name
If you don't want to manually retrieve the cloud storage path to find lineage, you can use the following function to get lineage data using the table name. You can also replace `system.access.table_lineage` with `system.access.column_lineage` in the function if you want to query column lineage.
```python
def getLineageForTable(table_name):
    # Resolve the table's cloud storage location so that lineage records
    # that reference the table by path are matched as well.
    table_path = spark.sql(f"DESCRIBE DETAIL {table_name}").select("location").head()[0]
    df = spark.read.table("system.access.table_lineage")
    # Match records where the table appears on either side, by name or by path.
    return df.where(
        (df.source_table_full_name == table_name)
        | (df.target_table_full_name == table_name)
        | (df.source_path == table_path)
        | (df.target_path == table_path)
    )
```
Then use the following command to call the function and display lineage records for the external table:
```python
display(getLineageForTable("table_name"))
```