
Most dbt Projects Use Jinja Wrong: 3 Practical Patterns for Cleaner Models

Quick answer

Most dbt projects do not need more Jinja. The patterns that usually pay off are targeted: use a list-driven for loop for repetitive column logic, use set/endset for shared SQL snippets, and create macros only when logic is reused across multiple models.

dbt Jinja helps analytics engineers keep SQL DRY, but it can also make a project harder to read when abstraction spreads faster than the actual business logic. The goal is not to squeeze every repeated line into templating. The goal is to use Jinja where it clearly reduces maintenance work without hiding what a model does.

Video walkthrough

A walkthrough of three practical ways to implement Jinja in a dbt project: list-driven loops, reusable SQL snippets, and macros.

Watch on YouTube

Start with ref and source before adding more abstraction

The default dbt primitives already solve a big part of the readability problem. ref and source make lineage clearer, keep dependency management explicit, and remove a lot of hard-coded object naming from your models.

That means Jinja should usually be introduced after a team has already written a few models and can point to repeated logic that is genuinely painful to maintain. If the pattern is not recurring yet, plain SQL is often the clearer choice.
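As a minimal sketch of that baseline (the source, table, and model names here are hypothetical), a model built only on these primitives might look like:

```sql
-- Hypothetical staging model, stg_orders.sql, reading from a declared source:
--   select * from {{ source('shop', 'orders') }}
-- A downstream model then depends on it via ref(), so dbt tracks lineage:
select
  order_id,
  customer_id,
  status
from {{ ref('stg_orders') }}
```

If this is all a project needs, adding loops or macros on top of it is premature.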

Use a list-driven for loop for repetitive field logic

A good first use case is repeated field-level cleanup. In the video, the repeated task is wrapping multiple output columns in the same coalesce pattern. Instead of rewriting the same expression for every field, define a list once and iterate over it.

This works well when the SQL shape stays constant and only the field names change. It keeps the model shorter while preserving a predictable compiled output.

{% set coalesce_fields = ["orders", "total_number_of_items"] %}

select
  {% for field in coalesce_fields %}
  coalesce(o.{{ field }}, 0) as {{ field }}{% if not loop.last %},{% endif %}
  {% endfor %}
from orders o
  • Put the list near the top of the model so the repeated fields are visible immediately.
  • Compile the model and inspect the generated SQL before merging changes.
  • Skip the loop when only one or two fields are involved and the abstraction saves almost no maintenance.
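For reference, the loop above should render to roughly the following SQL (exact whitespace depends on how dbt lays out the template, so always check your own compiled output):

```sql
select
  coalesce(o.orders, 0) as orders,
  coalesce(o.total_number_of_items, 0) as total_number_of_items
from orders o
```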

Use set/endset when the repeated unit is a SQL snippet

Another practical pattern is reusing a filter or clause across multiple CTEs. When the same business rule appears in several places, storing the snippet once can reduce copy-paste drift.

The video shows a common mistake here as well: if you insert the snippet without the double curly braces (writing filter_values instead of {{ filter_values }}), the compiled SQL will contain the literal variable name rather than the intended clause. That is why compile-time review matters.

{% set filter_values %}
status not in ('cancelled')
{% endset %}

with orders as (
  select *
  from analytics.orders
  where {{ filter_values }}
)
  • Keep snippets small and business-specific so a reader can still understand the model in one pass.
  • Use compiled SQL as the source of truth when validating the pattern.
  • Prefer a short one-line snippet if formatting becomes noisy in the compiled output.
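To make the interpolation mistake concrete: if the braces are dropped when the snippet is inserted, the variable name survives into the compiled SQL untouched, which the warehouse will reject or misread:

```sql
-- Mistake: snippet inserted without {{ }} interpolation
with orders as (
  select *
  from analytics.orders
  where filter_values  -- compiles literally; no valid predicate reaches the warehouse
)
select * from orders
```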

Create macros only for logic you will reuse across models

Macros are the most powerful abstraction in the set because they behave like reusable functions. They make sense when a team has logic that appears in several models and needs consistent behavior everywhere.

A safe_divide macro is a good example. Instead of repeating a case statement to guard against division by zero in every model, define the behavior once and call it with different numerator and denominator inputs.

{% macro safe_divide(numerator, denominator) %}
case
  when {{ denominator }} = 0 then null
  else {{ numerator }} / {{ denominator }}
end
{% endmacro %}

select {{ safe_divide("10", "5") }} as example_ratio
  • Macros are strongest when they encode a clear convention, not when they hide business logic that should stay visible in the model.
  • Name macros after the business or transformation task they solve.
  • Test example inputs so downstream model behavior is obvious to reviewers.
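The payoff is clearer when the macro is called with column names rather than literals. As a hypothetical example (the column names total_revenue and order_count are illustrative), this call:

```sql
-- {{ safe_divide("total_revenue", "order_count") }} compiles to:
case
  when order_count = 0 then null
  else total_revenue / order_count
end
```

Every model that calls the macro gets the same division-by-zero behavior, and a change to the convention happens in one place.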

Know when Jinja is adding complexity instead of removing it

The caution at the end of the video is the real operating rule. It is easy to over-engineer a dbt project with abstraction that saves a few lines but makes models slower to review, harder to onboard into, and more confusing in the compiled output.

The best balance is keeping code DRY while preserving human readability. If a loop only replaces two repeated fields, or if a snippet forces readers to jump between files just to understand a filter, the abstraction may not be worth it.

  • Favor readability when the maintenance savings are small.
  • Document conventions for when a team should use loops, snippets, or macros.
  • Review compiled SQL during code review, not just the Jinja source.

Frequently asked questions

When should I use Jinja in dbt?

Use Jinja in dbt when the same SQL pattern clearly repeats across fields, CTEs, or models. If the abstraction removes meaningful maintenance work and the compiled SQL stays easy to understand, it is usually a good fit.

What is the difference between a dbt snippet and a macro?

A snippet created with set/endset is useful for reusing a block of SQL text inside a model. A macro is more like a reusable function that can accept arguments and be called across multiple models or packages.
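As a hypothetical side-by-side (the filter and table names are illustrative), the difference in scope looks like this:

```sql
-- set/endset: scoped to the single model file where it is defined
{% set active_filter %}status = 'active'{% endset %}
select * from {{ ref('customers') }} where {{ active_filter }}

-- macro: defined once under macros/, callable from any model in the project
-- {% macro active_filter() %}status = 'active'{% endmacro %}
-- select * from {{ ref('customers') }} where {{ active_filter() }}
```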

Why should I compile dbt models after adding Jinja?

Compiling lets you inspect the SQL that will actually run in your warehouse. That is the fastest way to catch issues like missing interpolation braces, awkward formatting, or logic that no longer reads clearly once rendered.

Can too much Jinja make a dbt project worse?

Yes. Overusing Jinja can make a dbt project harder to read, harder to onboard into, and more difficult to debug. The best dbt projects use abstraction selectively and keep business logic understandable to humans.

© 2026 DF Insights. All rights reserved.