
What does evaluating an addiction treatment programme really mean in practice?

There's a moment that happens in treatment rooms everywhere. A client walks in for their twelfth session, or their twentieth, and something has shifted. Maybe it's in how they hold themselves, or the way they speak about their week. Maybe it's nothing you can name at all. Just a quiet sense that change is happening. And then comes the question we should all be asking, but often don't: How do we know? And if we know, can we say why?


This is where evaluation lives, not in spreadsheets and tick-boxes alone, but in the space between intention and outcome, between what we hoped would happen and what actually did.


Programme evaluation, at its heart, is the systematic collection of information about the design, implementation, and outcomes of our work. It's how we make judgements about what we're doing, improve our effectiveness, and inform decisions about what comes next.


But here's what twenty years in this field has taught me: evaluation is more than methodology. It's an attitude, a way of being curious about our work without being defensive, of asking difficult questions because we care deeply about the answers.


The culture of asking


I've seen services transform not because they implemented a new model or secured better funding, but because they cultivated what I call a "healthy culture of evaluation": an environment where questioning becomes second nature, where looking closely at outcomes isn't threatening but energizing. This culture doesn't emerge from policy documents. It grows from a collective commitment to asking: Are we doing what we think we're doing? Is it working? For whom? And if not, what needs to change?


This attitude requires courage. It means not being afraid when the answers challenge our assumptions. It means accepting that our most cherished programme components might not be the ones making the difference. And it means continually gathering information not to justify what we're already doing, but to genuinely improve it.


The three horizons of evaluation


Evaluation isn't a one-time event that happens at the end of a quarter, the end of a year, or the end of a funding cycle when commissioners come knocking. It's a continuous conversation with our work, and that conversation changes depending on where we stand in relation to the intervention itself.


Before: The Architecture of Hope


Before we begin, evaluation asks us to look backward and forward simultaneously. What can we learn from others who've walked this path? How do we expect this intervention to work in the messy complexity of real lives? What are our assumptions, and are they valid?


This is where we design the roll-out to maximize learning, where we pilot and test before committing resources and, more importantly, before committing clients to something untested. I've seen too many well-meaning interventions launched on faith and good intentions, only to realize six months in that the theory of change was built on sand. The questions we ask before implementation become the foundation for everything that follows. They help us identify and reduce uncertainty, not eliminate it entirely, but hold it consciously as we move forward.


During: The Living System


Once we're underway, evaluation becomes diagnostic. Is the intervention working as intended? Is it reaching the people it was designed for, or are we inadvertently serving a different population entirely? What unintended consequences are emerging, because there are always unintended consequences, and how do we respond?


This is evaluation as iterative improvement, as real-time adjustment. It's where we notice that the group work scheduled for Tuesday afternoons never fills because that's when the bus schedule changes, or that the psychoeducation sessions resonate deeply with some clients but alienate others. This ongoing attention to implementation and emerging outcomes allows us to be responsive, to continually improve rather than rigidly adhere to a plan that isn't serving the people in front of us.


After: The Reckoning and the Harvest


Eventually, we step back and ask the bigger questions. Did it work? By how much? At what cost? What have we learned about both the design and the implementation? And perhaps most importantly for recovery work: Are the changes sustained?


This is where we harvest the learning, where we contribute to the collective knowledge of our field. Not every intervention succeeds in the way we hoped, but every intervention that's properly evaluated teaches us something valuable. The question is whether we're willing to look closely enough to see it.


What makes evaluation good?


There are no universal criteria for defining a good evaluation. It's context-dependent, intervention-specific, and tied to the questions that matter most to the people involved.


But some principles hold steady:


A good evaluation is useful: designed to meet the needs of multiple stakeholders and producing insights that actually inform decisions. It helps us determine not just whether clients are "better" at the end, but whether our programme contributed to that improvement, and if so, which components mattered most. This kind of usefulness requires intellectual honesty about limitations. We must communicate clearly what our evaluation can and cannot tell us, so findings are used responsibly rather than over-claimed.


A good evaluation is credible: achieving a degree of objectivity through independent evaluators, independent peer review, and steering committees that can quality-assure the design and execution of the work. Credibility isn't about perfection, but about transparency and rigor that allow others to trust what we've found.


A good evaluation is feasible: realistic given the time, resources, and expertise available. I've seen ambitious evaluation plans collapse under their own weight, generating more burden than insight. Better to do a modest evaluation well than an elaborate one poorly.


A good evaluation is robust: well-designed with appropriate methods, and well-executed with attention to validity and reliability. This doesn't mean every evaluation needs a randomized controlled trial. It means matching the approach to the question and doing it with care.


And a good evaluation is proportionate: scaled to the risk, impact, and learning needs of the intervention itself. Not all interventions will require the same level of scrutiny or have the same learning needs. A well-established, low-risk intervention might need only light-touch monitoring to ensure it's delivered as intended. A high-stakes innovation with significant resources attached needs deeper scrutiny.


Who benefits?


Ultimately, clients are the beneficiaries of good evaluation. When we evaluate well, we learn how to serve better. We identify what works and amplify it. We spot what harms and stop it. We understand not just outcomes but experiences, not just whether people complete treatment but whether treatment was worth completing.

An effective evaluation reflects treatment results and, where appropriate, client satisfaction, treating both as part of a genuine inquiry into whether services are meeting the needs of the people they're designed to serve.


The invitation


Evaluation, done well, is an act of honesty and hope. It says: we believe we can learn, we can improve, we can do better by the people who trust us with their recovery. It's a commitment to staying humble and curious, to building services based on evidence rather than habit, to contributing to a field that grows wiser with each cycle of inquiry.


The question isn't whether we have the resources to evaluate. The question is whether we can afford not to. Whether we can, in good conscience, continue the work without knowing whether it's truly serving the people who need us most.


So, I invite you to consider: What questions are you avoiding? What assumptions remain unexamined? And what might become possible if you brought the attitude of evaluation, that blend of rigor and curiosity, of care and courage, into your daily work?


The answers might surprise you. They might challenge you. But they will certainly make you better at what you do. And in this field, that matters more than we can measure.

 
 
 
