Shift Right to Better Shift Left

Shift testing left. Shift security left. Shift [insert discipline here] left. This is the mantra. 

Unfortunately, in data engineering, this mantra tells only half of the story. Yes, you need to test early. Yes, you need to make sure your pipeline is secure. And yes, you need visibility all the way from the source. But shifting the effort left doesn’t guarantee any of this. It guarantees only one thing: that data engineers will be burdened with the manual (and often mundane) task of implementing all this testing, which means lower engineering velocity.

Instead, to actually achieve a shift left in visibility, coverage, and detection, you need to shift your effort right. This lets you seamlessly gain full visibility into your data pipelines, shift your detection left, automate your testing processes, free up data engineers to build core business logic, and speed up velocity.

Shift Left Effort Means More Work

Shifting left is held up as the answer to pipeline reliability and data quality because it emphasizes testing early and often in the data lifecycle.

But what does that mean? 

It means data developers have to write tests. Nobody wants to write tests: it’s a manual, mundane, tedious task. Tools exist to help write tests, but it is still down to the individual developer to write the correct tests for their data and pipeline logic.

This makes shifting effort left a time-consuming process, especially as the complexity of the data pipeline grows; developers end up spending a significant portion of their time writing and maintaining tests.

Maybe this wouldn’t be so bad if the tests worked. But there are two fundamental problems with this approach:

  1. It doesn’t scale. Even the most diligent developers don’t have the capacity to test every single data asset, every single dimension, and every single scenario, leading to gaps in test coverage. Manual tests are, therefore, very limited in scope. When each developer writes their own tests, you also lack standardization regarding coverage dimensions, test structure, naming conventions, and best practices across the data platform.
  2. It’s static. In a data-led organization, there can be hundreds of tests for any asset. Since data and pipeline behavior constantly change over time, static tests and thresholds (written in the development phase) must be continually adapted (in production), which requires both manual effort and a redeployment to production (with the hope that nothing else breaks), as the sketch after this list illustrates. Moreover, every change to pipeline code requires reviewing and updating the tests, which quickly becomes a bottleneck in the development process.
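
To make this concrete, here is a minimal sketch of what such a hand-written static test often looks like, using pandas. The table, column names, and thresholds are all hypothetical; the point is that every expectation is frozen at development time and only changes with a redeployment.

```python
import pandas as pd

def test_orders_snapshot(df: pd.DataFrame) -> None:
    """Hand-written checks for a hypothetical daily orders table.
    Every threshold below is hard-coded in the development phase."""
    # Volume check: the threshold reflects what "normal" looked like
    # when the test was written; it won't adapt as volume grows.
    assert len(df) > 10_000, "row count below expected volume"
    # Nullability check: one column out of dozens that could drift.
    assert df["customer_id"].notna().all(), "customer_id contains nulls"
    # Range check: silently wrong the day the business changes pricing;
    # updating the bound means another review and another redeployment.
    assert df["order_total"].between(0, 50_000).all(), "order_total out of range"
```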

You can simply never get the coverage you need with manual testing. You end up with low adoption by developers and overall low coverage of datasets and pipelines. Developers end up in an impossible situation – they have to do more work to code tests, but the tests they need to create are ever-expanding and incomplete. Not only is it a never-ending, tiresome process, but it’s also one doomed to failure.

More Work Means Lower Velocity

What does all this lead to, apart from a miserable development team?

Bad data. We mean this in two ways.

The first is the obvious way – Garbage In, Garbage Out. With shifting left effort, you end up with bad data throughout your pipelines. 

Why? Because manually writing tests for all your data and pipelines is impossible; there are just too many data assets, transformations, infra resources, and metrics to test manually. So, by default, you have low coverage, and therefore you end up with unhealthy pipelines and poor-quality data.

Moreover, any testing you do have is reactive. Tests will only be added after a problem has occurred. If a data consumer or customer catches this problem downstream, then the damage is already done.

You also have no visibility into, or testing within, those pipelines to help root-cause issues.

This lack of visibility and comprehensive testing leads to a constant stream of data quality issues that can significantly impact the business. From incorrect analytics and reporting to faulty machine learning models and flawed decision-making, bad data undermines the very foundation of a data-driven organization.

Secondly, we mean bad data in an even more nuanced way. If developers are writing tests, they aren’t working on core business logic. If they’re fighting data quality fires or pipeline breakages, they aren’t building out the data infrastructure that Product, ML/AI, and Analytics/BI teams need. If they’re struggling with technical debt, they aren’t optimizing pipelines for efficient runs. And if they’re rewriting tests and playing catch-up, they aren’t focusing on the high-value, strategic work that drives innovation and growth.

This second result is even more deadly. You end up with an unhappy, stagnant data engineering team that isn’t driving the company forward. You get lower development velocity and worse data engineering.

But tests are critical. So, to test right, you need to shift right.

What is Shift Right?

The problems above aren’t just a matter of data developers not having the proper tooling, but of data developers not having the right capabilities:

  • Out-of-the-box metrics collection, to monitor quality, health, and performance of data & pipelines. 
  • End-to-end ubiquitous monitoring, from source to consumption. 
  • AI-generated tests, to establish complete coverage.
  • Dynamic anomaly detection at run-time, to identify shifts in data and pipeline behavior.

Shift right effort is about giving developers these capabilities, seamlessly in post-production, rather than having them invest more effort in the development phase.

Out-of-the-box metrics collection

With out-of-the-box telemetry, you gain access to hundreds of metrics. These metrics should go beyond simple data quality to include deep visibility into pipeline runs and SLAs, and into infrastructure resource consumption and performance.

By tracking these metrics, you can establish an understanding of baseline behavior and monitor trends over time. This gives developers the ability to understand how the data behaves, to proactively identify potential issues, to optimize pipeline performance, and to ensure the overall reliability and efficiency of the data operation.
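As a toy illustration of baselining, the sketch below tracks a single hypothetical run-level metric (rows_written) against a trailing 7-run average and flags runs that deviate sharply. A real telemetry layer would do this across hundreds of metrics, but the mechanics are the same.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical run-level telemetry: one row per pipeline run.
runs = pd.DataFrame({
    "run_date": pd.date_range("2024-01-01", periods=30, freq="D"),
    "rows_written": rng.normal(1_000_000, 25_000, 30).astype(int),
})
runs.loc[29, "rows_written"] = 400_000  # simulate a partial load on the last run

# Baseline = trailing 7-run mean (excluding the current run);
# flag any run that deviates more than 30% from its baseline.
baseline = runs["rows_written"].rolling(7).mean().shift(1)
deviation = (runs["rows_written"] - baseline).abs() / baseline
print(runs[deviation > 0.3])  # surfaces only the simulated partial load
```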

End-to-end ubiquitous monitoring

When you enable ubiquitous monitoring that runs seamlessly inline with every pipeline, you are no longer dependent on injecting code into each pipeline to establish visibility, connecting outside-in to every data source, or parsing audit logs after the fact. You avoid partial visibility and static views of output data; instead, you can see how data is generated, consumed, and flows across the platform, establishing end-to-end data lineage.

This allows you to pinpoint how issues are initiated upstream, unify your view of data quality and pipeline health, build the full context of a pipeline execution, and understand the potential downstream impact.
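Once lineage edges are captured, assessing downstream impact reduces to a graph traversal. Here is a minimal sketch with hypothetical asset names; a real lineage graph would be derived automatically from pipeline runs rather than declared by hand.

```python
from collections import deque

# Hypothetical lineage edges: asset -> its direct consumers.
lineage = {
    "raw.events": ["staging.events_clean"],
    "staging.events_clean": ["marts.daily_activity", "ml.feature_store"],
    "marts.daily_activity": ["bi.exec_dashboard"],
}

def downstream_impact(asset: str) -> set[str]:
    """Breadth-first search over lineage edges: everything a failure can reach."""
    impacted, queue = set(), deque([asset])
    while queue:
        for consumer in lineage.get(queue.popleft(), []):
            if consumer not in impacted:
                impacted.add(consumer)
                queue.append(consumer)
    return impacted

print(sorted(downstream_impact("raw.events")))
# ['bi.exec_dashboard', 'marts.daily_activity', 'ml.feature_store', 'staging.events_clean']
```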

AI-generated tests

Instead of relying on developers to manually write tests, auto-generated tests powered by an AI/ML engine give you full coverage immediately, in post-production. This not only reduces manual effort, but also increases each data asset's coverage (test count and dimensions) and standardizes pipeline health and data quality across the platform.
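As a simplified sketch of the idea (definity’s engine is ML-driven; this toy version only profiles recent history), expectations can be derived from observed data rather than hand-written:

```python
import pandas as pd

def generate_tests(history: pd.DataFrame) -> list[dict]:
    """Toy profiling-based test generation: derive expectations per
    column from observed history, instead of asking a developer to
    hand-write each test."""
    tests = []
    for col in history.columns:
        series = history[col]
        # Null-rate expectation, with a small tolerance above observed levels.
        tests.append({"column": col, "check": "null_rate_max",
                      "threshold": round(series.isna().mean() + 0.01, 4)})
        if pd.api.types.is_numeric_dtype(series):
            # Numeric columns get a range inferred from history.
            tests.append({"column": col, "check": "range",
                          "min": float(series.min()), "max": float(series.max())})
        else:
            # Categorical columns get an accepted-values set.
            tests.append({"column": col, "check": "accepted_values",
                          "values": sorted(series.dropna().astype(str).unique())[:50]})
    return tests
```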

Dynamic anomaly detection at run-time

Traditional rule-based approaches (defined in the development stage) often fall short in catching subtle anomalies and adapting to evolving data patterns. Shifting right to ML-based anomaly detection enables continuous learning of your data and pipeline behavior, and identification of meaningful shifts in that behavior. This lets developers catch data quality issues that static methods would miss, while minimizing false positives.
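A heavily simplified sketch of the run-time idea, using a rolling z-score (real engines use far richer models, seasonality handling, and multivariate signals):

```python
import numpy as np

class RollingAnomalyDetector:
    """Toy run-time detector: learns a metric's recent behavior and
    flags meaningful shifts, instead of using a fixed dev-time threshold."""

    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.history: list[float] = []
        self.window = window
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        recent = self.history[-self.window:]
        is_anomaly = False
        if len(recent) >= 10:  # need some history before judging
            mean, std = np.mean(recent), np.std(recent)
            if std > 0 and abs(value - mean) / std > self.z_threshold:
                is_anomaly = True
        self.history.append(value)  # the baseline adapts as behavior evolves
        return is_anomaly

# Per-run usage: feed each run's metric value; alert (or preempt the run) on True.
detector = RollingAnomalyDetector()
```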

Altogether, the increased coverage and run-time anomaly detection effectively shift detection as far left as possible, allowing for proactive resolution of pipeline and data quality problems before they impact anything downstream. Moreover, it means you can now alert both data producers and consumers on outliers at run-time, establishing de-facto data contracts. Lastly, it even unlocks the option to preempt runs, preventing propagation and downstream effects.

What You Get from Shifting Right

Shifting right means developers eliminate the pain points of manual testing. With auto-generated tests, you get full coverage and scalability.

The increased coverage protects you against data downtime. Combined with in-motion coverage (shifting detection left) rather than at-rest monitoring, it serves both data developers and consumers: not only do they not have to write tests, they don’t have to fight the fires from tests they never wrote.

Scalability comes from centralizing and automating all testing. With that, you get tests for every new data point and transformation as it is generated, without manual intervention. And because testing is centralized and automated, all tests are standardized, increasing consistency and accountability.

But these are just the technical benefits. The true benefit is that developers get more capacity. Shifting right lets data developers focus on business value. They can dedicate their time and energy to building and optimizing data pipelines, developing new features, and collaborating with consumers to drive data-driven decision-making. 

By freeing developers from the burden of manual testing, shift right empowers them to become strategic partners in the organization's data initiatives, accelerating innovation and delivering results.

Shift Right Your Effort to Shift Left Your Protection

Shifting right doesn’t eliminate shifting left; it complements it. Shifting right allows you to better understand and control your entire pipeline and its outcomes. With it, you automatically end up with high-quality data from the first data point, taking testing off the plate of developers (who hate it) and freeing them to increase the value your data organization delivers.

This is what we’re building with definity: unique instrumentation that enables out-of-the-box data pipeline observability across your entire platform, in post-production.

If your data team is stuck with manual testing effort or firefighting data issues, book a demo with us today to see how definity can help you shift your effort right and your protection left.