Reusable Prompts in Snowflake: Building Templates That Actually Rock

If your first foray into AI inside Snowflake felt like a garage band session — noisy, creative, but a little out of tune — you’re not alone. Most data engineers start by hardcoding prompts in their SQL or Python scripts, quickly realizing they’ve got eighteen slightly different versions of “summarize this data” scattered across their warehouses.

That’s fine for a demo. But if you want your AI workflow to go on tour, it’s time to write some reusable hits.

The Problem: Too Many Prompts, Not Enough Harmony

You wouldn’t rebuild a data model from scratch for every dashboard. Yet that’s exactly what most teams do with AI prompts — rewriting them over and over, embedding them directly in SQL or task logic.

The result?

  • Inconsistent outputs
  • Hard-to-debug behavior
  • Zero version control
  • “What the heck does this prompt even do?” moments

The fix is simple: treat prompts like source code. Modularize, version control them, and feed them structured inputs — straight from your Snowflake data.

Step 1: Store Your Prompts in a Table

Snowflake doesn’t need to be just a data store — it can actually hold your AI patterns.

CREATE OR REPLACE TABLE ai_prompts (
    prompt_name STRING,
    prompt_text STRING,
    description STRING
);

INSERT INTO ai_prompts VALUES
('summarize_sales', 'Summarize sales trends for the past ${days} days, focusing on top regions and anomalies.', 'Summarize daily sales'),
('detect_anomalies', 'Identify any anomalies in ${metric} for the last ${days} days, explaining potential causes.', 'Detect anomalies in key metrics');

By parameterizing your prompts with ${variables}, you’ve just written your first reusable template. Think of these like the chord progressions that can drive multiple songs.
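If you fill variables in more than a couple of places, it helps to centralize the substitution. Here’s a sketch of a small SQL UDF that fills one ${variable} at a time — the function name `fill_prompt` is made up for this example, not anything built into Snowflake:

```sql
-- Hypothetical helper: substitute one ${variable} in a prompt template.
CREATE OR REPLACE FUNCTION fill_prompt(template STRING, var_name STRING, var_value STRING)
RETURNS STRING
AS
$$
    REPLACE(template, '${' || var_name || '}', var_value)
$$;

-- Usage: pull a template from the table and fill in ${days}.
SELECT fill_prompt(prompt_text, 'days', '7') AS prompt
FROM ai_prompts
WHERE prompt_name = 'summarize_sales';
```

Nested calls handle multi-variable templates, and every caller fills variables the same way instead of hand-rolling REPLACE chains.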

Step 2: Inject Data Dynamically

You can pass real data right into your prompt text before sending it to Snowflake Cortex.

Here’s an example using CORTEX.COMPLETE() — Snowflake’s built-in LLM function:

WITH filled_prompt AS (
  SELECT
      REPLACE(
          REPLACE(p.prompt_text, '${days}', '30'),
          '${metric}', 'revenue'
      ) AS prompt
  FROM ai_prompts p
  WHERE prompt_name = 'detect_anomalies'
)
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-large',
    filled_prompt.prompt
) AS response
FROM filled_prompt;

(To set options like temperature, use COMPLETE’s chat-style signature, which takes an array of role/content message objects plus an options object instead of a plain prompt string.)

“Under Pressure” – Queen’s version – plays faintly in the background as you realize your entire prompt logic just became dynamic, reproducible, and deployable.

Step 3: Add Context with Live Data

Static prompts are serviceable but dull. The real fun starts when you merge live warehouse data into the message.

WITH top_products AS (
  SELECT PRODUCT_NAME, SUM(SALES_AMOUNT) AS total_sales
  FROM SALES
  WHERE SALE_DATE >= DATEADD('day', -30, CURRENT_DATE())
  GROUP BY PRODUCT_NAME
  ORDER BY total_sales DESC
  LIMIT 5
),
filled_prompt AS (
  SELECT
    'Based on the last 30 days, summarize the top-selling products: ' ||
    LISTAGG(PRODUCT_NAME, ', ') WITHIN GROUP (ORDER BY total_sales DESC) || '.' AS prompt
  FROM top_products
)
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-large',
    prompt
) AS product_summary
FROM filled_prompt;

Now you’ve got real-time AI tied directly to query results — no middleware, no duct-taped Python, just genuinely useful output.

Step 4: Version & Govern Your Prompts

Just like code, prompts evolve. Save versions as separate rows, or better yet, maintain them in Git and load via Snowflake’s Git integration.
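
For the rows-as-versions route, one sketch (table and column names here are made up, not a Snowflake convention) is a versioned prompt table plus a QUALIFY filter to always fetch the latest revision:

```sql
-- Hypothetical versioned prompt store.
CREATE OR REPLACE TABLE ai_prompt_versions (
    prompt_name STRING,
    version INT,
    prompt_text STRING,
    changed_by STRING,
    change_note STRING,
    changed_at TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
);

-- Fetch only the latest version of each prompt.
SELECT prompt_name, prompt_text
FROM ai_prompt_versions
QUALIFY version = MAX(version) OVER (PARTITION BY prompt_name);
```

Old versions stay queryable, so rolling back a misbehaving prompt is a WHERE clause, not an archaeology project.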

Keep an audit trail: who changed it, why, and what it impacts.
Snowflake’s built-in Access History and Query Tags can help you trace prompt lineage across workloads.

ALTER SESSION SET QUERY_TAG = 'AI_PROMPT:detect_anomalies_v2';

Later, you can query it like a semi-responsible adult:

SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE QUERY_TAG LIKE '%detect_anomalies%';

Step 5: Benchmark Your Prompts

You can measure quality (subjectively) or consistency (objectively).

For example, run your prompt over 10 sample datasets, store the responses, and use SNOWFLAKE.CORTEX.EMBED_TEXT_768() to vectorize and compare their similarity — just to see if your AI behaves consistently.
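
A minimal sketch of that consistency check — `prompt_responses` is a hypothetical table you’d populate by running the same prompt repeatedly:

```sql
-- Hypothetical table of repeated runs of one prompt.
CREATE OR REPLACE TABLE prompt_responses (
    run_id INT,
    prompt_name STRING,
    response STRING
);

-- Embed each response, then compare every pair.
WITH embedded AS (
  SELECT run_id,
         SNOWFLAKE.CORTEX.EMBED_TEXT_768('e5-base-v2', response) AS emb
  FROM prompt_responses
  WHERE prompt_name = 'detect_anomalies'
)
SELECT a.run_id AS run_a,
       b.run_id AS run_b,
       VECTOR_COSINE_SIMILARITY(a.emb, b.emb) AS similarity
FROM embedded a
JOIN embedded b ON a.run_id < b.run_id
ORDER BY similarity;
```

Low-similarity pairs bubble to the top — those are the runs where your prompt went off-script.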

Because when your LLM starts free-styling like Prince in a guitar solo, you’ll want to know why.

Closing Verse

Prompt reuse isn’t just an optimization; it’s a professional move. It’s how you go from “I have an AI demo” to “We’ve productized our intelligence layer.”

Treat your prompts like you treat SQL views — modular, tested, and versioned.

And next time your Snowflake LLM pipeline nails an answer on the first try, go ahead and hum a little Journey:

🎵 “Don’t stop believin’, hold on to that prompt feelin’…” 🎵