What is Elementary?

The data observability platform built for data & analytics engineers and trusted by 5,000+ data professionals. Prevent, detect, and resolve data quality issues in your dbt pipelines and beyond.

Automated monitors

Freshness, volume and schema changes for all production tables

  • Out-of-the-box monitors activated automatically, without manual configuration.
  • Monitors leverage metadata, such as the information schema and query history, to keep monitoring compute costs low.
  • Automated adjustments based on frequency of updates, seasonality, and trends.
Anomaly detection

Add monitors to detect unexpected changes

  • Add tests as you develop in code, or from the Elementary UI.
  • Detect anomalies in nullness, distribution, dimensions, completeness, and more.
  • Highly configurable for better accuracy: seasonality, where expressions, sensitivity and more.
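For illustration, a monitor like this might be declared in a dbt `schema.yml` using the Elementary dbt package. Test names such as `elementary.volume_anomalies` and parameters such as `anomaly_sensitivity`, `seasonality`, and `where_expression` follow the package's conventions, but the exact options available may vary by version (model and column names here are hypothetical):

```yaml
models:
  - name: orders
    tests:
      # Table-level monitor: alert on unexpected changes in row volume,
      # adjusted for weekly seasonality.
      - elementary.volume_anomalies:
          timestamp_column: created_at
          anomaly_sensitivity: 3
          seasonality: day_of_week
    columns:
      - name: status
        tests:
          # Column-level monitor: track null counts on a filtered subset.
          - elementary.column_anomalies:
              column_anomalies:
                - null_count
                - null_percent
              where_expression: "country = 'US'"
```

Because the configuration lives in the same YAML as the rest of the dbt project, it can be reviewed and versioned like any other code change.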
Data tests

One solution for all your data tests: dbt, Elementary & custom

  • No need to reconfigure or duplicate logic, your existing tests are part of your Elementary coverage.
  • Leverage the dbt ecosystem with tests from packages like dbt-expectations, dbt-utils, and more.
  • Extend coverage with your custom tests.
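To sketch what this looks like in practice, a single `schema.yml` can mix built-in dbt tests, tests from packages like dbt-expectations, and Elementary monitors side by side. The model and column names below are illustrative:

```yaml
models:
  - name: customers
    columns:
      - name: customer_id
        tests:
          # Built-in dbt tests, already part of your coverage as-is.
          - unique
          - not_null
      - name: email
        tests:
          # A test from the dbt-expectations package.
          - dbt_expectations.expect_column_values_to_match_regex:
              regex: "^[^@]+@[^@]+$"
      - name: signup_date
        tests:
          # An Elementary anomaly monitor alongside the tests above.
          - elementary.column_anomalies:
              column_anomalies:
                - null_percent
```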
End-to-end lineage

Column-level lineage, automated across your stack

  • Column-level lineage from your code, data warehouse, sources and BI tools.
  • Elementary’s lineage is enriched with test results, to show incidents across the DAG.
  • Quickly understand the origin of issues, and which assets are impacted.
Alerts

Actionable alerts to different systems, channels and recipients

  • Route alerts to the right recipients and owners to avoid alert fatigue.
  • Alert on failures of Elementary monitors, dbt tests, model runs, and source freshness issues.
  • Enrich your alerts with additional properties and custom formats.
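One way routing metadata like this can be expressed is through `meta` properties on a model in dbt code. The field names below are illustrative rather than a definitive schema; consult the Elementary documentation for the keys supported by your version:

```yaml
models:
  - name: orders
    meta:
      # Illustrative routing fields: who owns the model and where
      # its alerts should be delivered.
      owner: "@data-eng-oncall"
      channel: finance-data-alerts
      subscribers: ["@analyst-team"]
```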
Code-first

Configuration as Code built for data & analytics engineers

  • All configurations are managed in your dbt code, enabling version control, code review, and CI/CD.
  • Observability configuration becomes part of the development process.
  • Escape vendor lock-in and onboarding hassle: utilize existing configurations and own new additions.
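As a minimal sketch of the code-first setup, the Elementary dbt package is installed like any other dependency, so observability ships with the project and flows through version control, code review, and CI/CD. The version pin below is for illustration only:

```yaml
# packages.yml: Elementary installed as a regular dbt package dependency.
packages:
  - package: elementary-data/elementary
    version: 0.16.1  # illustrative pin; use the latest compatible release

# dbt_project.yml (excerpt): Elementary's models typically write to a
# dedicated schema alongside your project's models.
models:
  elementary:
    +schema: elementary
```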
Data quality dashboard

Understand and communicate overall data health

  • Get an overview of your data health in a user-friendly dashboard.
  • Filters and drill-down options to allow users to focus on specific data subsets and investigate issues.
  • Quality dimensions scores (coming soon).
Data catalog

For your assets, maintained in code

  • Explore data sets, including underlying code, overall health, dependencies, ownership, and descriptions.
  • All descriptions, tags, and owners are managed and maintained in your code.
  • Easily navigate from catalog to test results, lineage, and runs history.
Performance & cost

Models run duration history and performance trends

  • Analyze model execution time and run results over time.
  • Detect deteriorating models and performance bottlenecks.
  • Identify opportunities to reduce cost.
Data CI/CD

Prevent data quality issues at the pull request

  • Prevent breaking changes from making it to production.
  • Run tests and preview the impact of your PR on the pipeline.
  • Enforce policies to ensure high data quality standards.
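A common pattern for gating pull requests is a CI job that builds and tests only the models changed by the PR, deferring to production artifacts for everything else. The workflow below is a hedged sketch of that pattern using GitHub Actions and dbt's slim CI selection, not a prescribed Elementary setup; adapter, paths, and credentials handling are assumptions:

```yaml
# .github/workflows/data-ci.yml (illustrative): run tests on every PR
# so breaking changes are caught before they reach production.
name: data-ci
on: pull_request
jobs:
  dbt-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake  # assumed adapter
      - run: dbt deps
      # Build and test only modified models, deferring unchanged
      # upstream models to the production manifest.
      - run: dbt build --select state:modified+ --defer --state prod-artifacts/
```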
Integrations

Works with your favorite tools

  • Communication tools: Slack, Microsoft Teams, Opsgenie, and PagerDuty
  • BI tools: Tableau, Looker, and more
  • Data warehouses: Snowflake, BigQuery, Redshift, Databricks, and Postgres
  • Code repositories: GitHub and GitLab
Secure by design: no data access