Profiling
Examining and analyzing data to ensure its quality, consistency, and integrity.
Data Profiling
Before diving into data analytics within a Data Catalog, it's critical to ensure your data has been properly profiled. Data profiling is not just a step but a prerequisite. Without a valid data profile, proceeding to data identification tasks is premature.
What is Data Profiling?
Data profiling involves a detailed examination of your dataset, focusing on specific data objects. During this process, you collect valuable statistics and summaries that provide insights into the nature of your data. The beauty of data profiling is its immediacy. As soon as data from a column, table, or schema is processed, the results are ready for review.
When a data source schema is initially ingested, you're limited to the database metadata. Data profiling enriches this by uncovering additional layers of information, making it an indispensable tool in the data analyst's toolkit.
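As a rough illustration of this immediacy (a generic sketch using pandas, not the Data Catalog's own profiling engine), summary statistics for a column are available as soon as its values have been read:

```python
import pandas as pd

# Hypothetical sample of one column; in practice the values would come
# from a table in a JDBC source or a file ingested by the catalog.
first_names = pd.Series(["Ken", "Terri", "Roberto", "Rob", None, "Gail"],
                        name="FirstName")

# Summaries are ready for review the moment the column is processed.
print(first_names.describe())    # count, distinct values, most frequent value
print(first_names.isna().sum())  # missing (NULL) values
```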

If data profiling has not been done, Data Catalog highlights the Data Profiling tile as 'Required'.
Click the Data Profiling tile.

The Profiling page opens with the following additional options to configure data profiling.

Structured files
.csv, .tsv, .psv
Structured files with consistent field delimiters. You can configure header row detection and delimiter type during profiling.
Compressed files
.gz, .snappy, .deflate, .bz2, .lzo, .lz4
Unstructured documents
.pdf, .doc, .docx, .txt, .rtf
Profiling extracts document metadata and textual content. Includes string detection, summarization, and duplicate detection.
Semi-structured files
.parquet, .json, .avro, .orc
These formats store structured or nested data, often in a columnar layout. Profiling includes schema detection, field types, null values, and value frequency analysis (see the sketch below).
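To make the semi-structured case concrete, here is a minimal sketch (assuming the pyarrow package and a hypothetical person.parquet file; this is not the product's own profiling code) of schema detection, field types, null counts, and value frequency analysis:

```python
import pyarrow.parquet as pq

# Hypothetical file path; substitute a real Parquet data resource.
table = pq.read_table("person.parquet")

# Schema detection: field names and types stored with the columnar data.
print(table.schema)

df = table.to_pandas()
for column in df.columns:
    print(column,
          "nulls:", int(df[column].isna().sum()),
          "distinct:", int(df[column].nunique(dropna=True)))
    # Value frequency analysis: the most common values in the column.
    print(df[column].value_counts().head(5))
```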
Data Profiling
In the Data Profiling process, Data Catalog examines structured data within JDBC data sources and gathers statistics about the data. It profiles the data using algorithms that compute detailed properties, including field-level data quality metrics, data statistics, and data patterns.
When configuring data profiling, it is best practice to use the default settings, which are suitable for most situations. With the default settings, profiling is limited to 500,000 rows.
Extract samples
Extracts the sample data during profiling and displays it in the summary tab.
Skip Recent (days)
Skips profiling for recently profiled tables. For example, if the days field is set to 7, any table profiled within the last 7 days is skipped.
Sample Type
Specifies the sampling method for profiling (illustrated in the sketch after this list):
Sample Clause: Profiles sample data based on a percentage of rows or a fixed number of rows.
First N Rows: Profiles the first N rows of the data resource.
Every Nth Row: Profiles every Nth row of the data resource.
Filter: Profiles data using a custom SQL WHERE clause, which targets specific subsets of data based on user-defined conditions.
Where Clause: The SQL condition used for data selection when Filter is enabled. For example, country = 'USA'.
Clear: Resets the sampling configuration.
Split Job by Columns
Splits profiling jobs by columns, allowing parallel processing for wide tables.
Columns Per Job
When splitting by columns is enabled, specifies the number of columns included in each job.
Number of Tables Per Job
Specifies the number of tables included in a single profiling job.
Persist Threads
Defines the number of threads used for persisting profiling results to improve performance.
Persist File Threads
Sets the number of threads for persisting profiling data into files for large datasets.
Profile Threads
Indicates the number of threads allocated for profiling tasks, enabling parallel task execution.
The Sample Clause > Rows option is only supported for Microsoft SQL Server and Snowflake data sources.
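The sampling methods map to simple selection rules. The sketch below illustrates them generically with pandas on a hypothetical table; the actual queries Data Catalog issues depend on the source database:

```python
import pandas as pd

# Hypothetical table standing in for a profiled JDBC data resource.
df = pd.DataFrame({"id": range(1, 1001),
                   "country": ["USA", "Canada", "Mexico", "USA"] * 250})

sample_pct   = df.sample(frac=0.30, random_state=1)   # Sample Clause: 30 percent of rows
first_n      = df.head(500)                           # First N Rows: first 500 rows
every_nth    = df.iloc[::10]                          # Every Nth Row: every 10th row
where_filter = df.query("country == 'USA'")           # Filter: WHERE country = 'USA'

for name, sample in [("percentage", sample_pct), ("first_n", first_n),
                     ("every_nth", every_nth), ("filter", where_filter)]:
    print(name, len(sample))
```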
Enable: Extract Samples.
Set Sample Type: Percentage (30)

Keep the other default values.
Profiling will use reservoir sampling on 30% of the total records (see the sketch after these steps).
Click: 'Start'.
You can view the status of the profiling process on the Workers page.
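Reservoir sampling keeps a uniformly random subset of a stream of rows without needing the total row count up front. The sketch below is a minimal, generic version of the technique (not the product's internal implementation), sized at 30 percent of a 1,000-row stream to match the configuration above:

```python
import random

def reservoir_sample(rows, k):
    """Keep a uniform random sample of k rows from a stream of rows."""
    reservoir = []
    for i, row in enumerate(rows):
        if i < k:
            reservoir.append(row)          # fill the reservoir first
        else:
            j = random.randint(0, i)       # random index in [0, i]
            if j < k:
                reservoir[j] = row         # replace with probability k / (i + 1)
    return reservoir

# Hypothetical stream of 1,000 records; 30 percent -> a 300-row sample.
sample = reservoir_sample(range(1000), k=300)
print(len(sample))  # 300
```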

Profiled Data
If you select a data source, you can view available items such as schemas for structured data and folders for file systems. You can also see the number of tables and columns, including associated tags. For schemas, additional information such as row count and the last profiled date and time is available.
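For readers curious where such counts come from, the sketch below derives table and column counts per schema from a database's own information_schema views; the connection string is hypothetical, and this is a generic illustration rather than the Data Catalog API:

```python
from sqlalchemy import create_engine, text

# Hypothetical connection string; point it at the profiled data source.
engine = create_engine("mssql+pyodbc://user:password@AdventureWorksDSN")

counts_sql = text("""
    SELECT table_schema,
           COUNT(DISTINCT table_name) AS tables,
           COUNT(*)                   AS columns
    FROM information_schema.columns
    GROUP BY table_schema
""")

with engine.connect() as conn:
    for schema, tables, columns in conn.execute(counts_sql):
        print(schema, tables, columns)
```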
Navigate to: Person Schema
Enter a description for each schema:
HumanResources
Contains tables related to employee information, departmental structures, payroll history, and job candidates for the fictional company's workforce management.
Sensitivity: HIGH
Person
Stores information about human beings involved with Adventure Works, including employees, customer contacts, and vendor contacts, along with their addresses, phone numbers, and email information.
Sensitivity: HIGH
Production
Manages data related to products sold or used in manufacturing, including product details, bills of materials, work orders, and manufacturing processes.
Sensitivity: MEDIUM
Purchasing
Handles vendor relationships, purchase orders, product sourcing, and supplier management for the company's procurement operations.
Sensitivity: MEDIUM
Sales
Contains all sales-related data, including customer information, sales orders, sales territories, sales representatives, credit cards, and revenue tracking.
Sensitivity: HIGH
Data Discovery also determined the level of Sensitivity. Set the level for each schema as listed above.
Data Discovery also calculated a Confidence Score. Set the number of stars for each schema.

Click: Details tab.

Let's dig a bit deeper and see what other stats are available.
Expand the Person schema > Person table.
Select: Summary tab.
Enter a description (only 70 tables left):
The Person.Person table stores fundamental personal information for all human beings involved with Adventure Works, including employees, customer contacts, and vendor contacts, serving as the central repository for individual identity data such as names, titles, and demographic information that links to other business entities throughout the database.
Verify the Data Lineage.

Click on: Details


In the Data Canvas, select: Person > Person.
Expand the Person table > FirstName.

Stdev Length
Standard deviation of the character lengths of the values in the column, showing how much value lengths vary around the average length. A small Stdev Length means most values are roughly the same length.
Blanks
Empty cells that contain no characters (not even a space). Count of cells with literally nothing in them.
Avg Length
Average number of characters in the values. For example, if you have "Cat" (3), "Dog" (3), "Bird" (4), the average length is 3.33.
Bytes
Total storage size of the data in bytes. Shows how much memory/disk space the data occupies.
Min Length
Shortest value in the column by character count. If your shortest entry is "NY", that's 2 characters.
Nulls
Missing or undefined values (database NULL). Different from blanks - these are explicitly marked as "no data."
Rows
Total number of records/entries in the dataset.
Cardinality
Number of unique/distinct values. If a column has 1000 rows but only 5 different values, cardinality = 5. High cardinality = many unique values; low cardinality = few unique values.
Max Length
Longest value in the column by character count. If your longest entry is "Massachusetts" that's 13 characters.
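To make these metrics concrete, the sketch below computes each of them for a single column with pandas; this is a generic illustration of the definitions, not the Data Catalog's own profiling engine:

```python
import pandas as pd

# Hypothetical column containing a blank value ("") and a NULL (None).
values = pd.Series(["Ken", "Terri", "Roberto", "", None, "Massachusetts"])

lengths = values.dropna().str.len()   # character counts of non-NULL values

stats = {
    "rows":         len(values),                              # total records
    "nulls":        int(values.isna().sum()),                 # database NULLs
    "blanks":       int((values == "").sum()),                # empty strings
    "cardinality":  int(values.nunique(dropna=True)),         # distinct values
    "min_length":   int(lengths.min()),
    "max_length":   int(lengths.max()),
    "avg_length":   round(float(lengths.mean()), 2),
    "stdev_length": round(float(lengths.std()), 2),           # spread of value lengths
    "bytes":        int(values.dropna().str.encode("utf-8").str.len().sum()),
}
print(stats)
```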
