Test Failures
This video explains how Elementary handles dbt test failures and anomalies. Every dbt test failure is recorded and uploaded, allowing users to receive alerts and perform triage directly from the UI. Users can view a sample of faulty rows, examine the test query, and analyze failed data.
In addition to that, you can also dive right into the dbt test failures. We treat dbt tests as first-class citizens, so each and every failure of a dbt test is recorded and uploaded here, and you can get alerts for them and do your triage from the UI as well.
As you can see, you get a sample of the faulty rows. You can also see the query of the test, copy it to your data warehouse, and run a thorough analysis on the failed rows, slicing and dicing them against the entire failed dataset. Here we also show you a glimpse of a few failures.
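For context, the query you copy is simply the SQL that dbt compiled for the failing test, where every returned row counts as a failure. A minimal sketch of what a compiled not_null test might look like, with hypothetical schema, table, and column names:

```sql
-- Hypothetical compiled query for a dbt not_null test.
-- Schema, table, and column names are illustrative, not from the video.
-- Every row this query returns is a faulty row; the test fails if any come back.
select *
from analytics.orders           -- assumed schema.table
where customer_id is null       -- the column under test
```

Running this directly in the warehouse lets you join the faulty rows back to the full dataset for deeper analysis.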
If you have tables with PII, or tables for which you don't want to collect samples, you can disable sampling. We also support configuring retention periods for samples, so if, for example, you want a sample to be deleted within a week, that's also possible. In short, you can configure whether you are okay with us showing you the samples or not, and it's all part of the configuration process of the tests.
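As a rough illustration, this kind of sampling behavior is typically controlled through the Elementary package's vars in dbt_project.yml. The variable names below are assumptions for illustration, not confirmed option names, so check the Elementary docs for your version:

```yaml
# dbt_project.yml -- hedged sketch; both variable names are assumptions,
# so verify them against the Elementary docs for your package version.
vars:
  # Assumed knob for how many faulty rows to store per test result;
  # setting it to 0 would disable sample collection, e.g. for PII tables.
  test_sample_row_count: 0

  # Assumed retention knob: delete stored samples after 7 days.
  sample_retention_days: 7
```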
In addition to that, you can also see anomalies in the content itself. For example, here you can see that we calculated a metric called zero count on a column named revenue, and there is a clear pattern in the number of zero counts for this column, probably a weekly seasonality or something like that.
Then there is a spike, and it was detected as an outlier. This is another type of anomaly that is opt-in: you need to tell us that you want us to calculate the metric on this column, and then we start monitoring it and calculating the zero count metric on this revenue column. That's in contrast to the automated anomalies, which rely on metadata and don't require any compute or cost on your end, because we only sync metadata into our platform and run all of those validations on it.
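For reference, here is a sketch of how this kind of opt-in monitor is configured with the Elementary dbt package's column_anomalies test. The model name and timestamp column are hypothetical, and the seasonality parameter is included on the assumption that your Elementary version supports day-of-week seasonality:

```yaml
# schema.yml -- opt-in column anomaly monitoring with the Elementary dbt package.
models:
  - name: orders                          # hypothetical model name
    columns:
      - name: revenue
        tests:
          - elementary.column_anomalies:
              column_anomalies:
                - zero_count              # the metric shown in the video
              timestamp_column: updated_at   # assumed timestamp column
              seasonality: day_of_week    # account for weekly patterns (assumed support)
```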
It's opt-in because you need to tell us that you want the zero count calculated, and the calculation is done by the dbt package. So the dbt package calculates the zero count on this column, and it does require some compute on your end. We do it efficiently, though: we always limit how far back we scan, and the calculation is incremental, so we don't recalculate metrics that were already calculated.
So we do take compute and cost into consideration, and it's a top priority for us; we usually don't see any issues with it during POCs and implementations. So that's an example of an anomaly that runs on the data itself, and not on metadata like table updates and things like that.
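Extending the earlier sketch, the lookback window and incremental behavior are typically bounded with parameters like days_back and backfill_days. These parameter names follow the Elementary package's documentation, but treat the exact values and defaults here as assumptions:

```yaml
# schema.yml -- sketch of bounding the compute of an anomaly test; verify
# parameter names and defaults against your Elementary package version.
models:
  - name: orders                          # hypothetical model
    columns:
      - name: revenue
        tests:
          - elementary.column_anomalies:
              column_anomalies:
                - zero_count
              timestamp_column: updated_at   # assumed column
              days_back: 14        # scan at most the last 14 days of data
              backfill_days: 2     # incrementally recompute only the trailing 2 days
```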