Goodbye Manual Prompts, Hello DSPy

Today I learned about a smarter way to deal with the headache of prompts in production. Drew Breunig’s talk at the Databricks Data + AI Summit is hands down the clearest explanation I’ve seen of why traditional prompting doesn’t scale well. He compares it to regex gone wild: what starts as a neat solution quickly becomes a brittle mess of instructions, examples, hacks, and model quirks buried inside giant text blocks that no one wants to touch. A single “good” prompt can have so many moving parts that it becomes practically unreadable.

DSPy takes a very different approach. Instead of hand-crafting and maintaining prompts, you define the task in a structured way and let the framework generate and optimise the prompts for you. You describe what goes in and what should come out, pick a strategy (like simple prediction, chain-of-thought, or tool use), and DSPy handles the formatting, parsing, and system prompt details behind the scenes. Because the task is decoupled from any specific model, switching to a better or cheaper model later is as easy as swapping it out and re-optimising.

This feels like a glimpse of where prompt engineering is heading: less manual tinkering, more structured task definitions and automated optimisation. I’ll definitely be trying DSPy out soon.

https://www.youtube.com/watch?v=I9ZtkgYZnOw

Subscribe to The AI Engineering Brief

No spam, no sharing with third parties. Only you and me.