February 5, 2026
MERL as an Early Warning System: What Your Data Should Help You Fix Next Week

Monitoring, Evaluation, Research, and Learning (MERL) often gets treated as a rearview mirror: it explains what happened once a program ends. That’s important for reporting. But it’s not enough for inclusion, because exclusion typically happens during delivery, when timing, formats, or support structures quietly make participation harder for certain founders.

In the latest session of the ANDE Asia Access and Opportunity Learning Lab, Tom Sebastian (Seedstars) described an alternative approach: treating MERL as an early warning system. The goal isn’t only to measure outcomes. It’s to detect friction in real time, then change the program design before founders disengage.

From reporting to prevention

If inclusion is systems design, MERL is one of the fastest ways to locate where the system is failing.

A predictive MERL approach asks:

  • Where are people dropping off?
  • What barriers correlate with disengagement?
  • Which program elements are working differently for different groups?
  • What can we change this week to reduce friction?

This is less about producing a perfect endline report and more about making weekly adjustments that keep founders in the room.

What Seedstars changed once MERL revealed the pattern

Tom shared two examples of “format and timing” issues that initially looked like performance or motivation problems.

1) Retention dips linked to caregiving peaks
Seedstars observed women dropping out between weeks 4 and 6. MERL data suggested the dip aligned with caregiving pressure points. They adjusted program timing and added a confidence module. Retention rebounded—suggesting the issue wasn’t capability or commitment, but competing demands on time.

2) Underperformance that turned out to be a design issue
In disability cohorts, disabled founders appeared to underperform on certain assignments. MERL helped isolate the problem: the format of the assignment was creating an accessibility barrier. When Seedstars changed how information was gathered, performance improved quickly. The skills were there; the system wasn’t reading them.

The lesson: when founders struggle, the first question shouldn’t be “What’s wrong with them?” It should be “What in our design is making this harder than it needs to be?”

What “predictive” MERL looks like in practice

A predictive MERL system is lightweight enough to run weekly and specific enough to drive action. Seedstars described tools such as:

  • Weekly pulse surveys to identify fatigue, confusion, or access issues before they become dropouts
  • Dashboards that disaggregate participation and performance (by demographic, and importantly, by barrier type)
  • Rapid feedback loops where program teams decide what to change and track whether it worked

A practical way to think about it: MERL should help you identify where your program is leaking potential—and what plug to use.
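
As a rough illustration of what that disaggregation can look like, here is a minimal sketch in Python. It assumes a hypothetical weekly export with participant_id, week, completed, and barrier_type columns; the file name, column names, and the 15-point threshold are invented for illustration, not drawn from Seedstars’ actual setup.

    import pandas as pd

    # Hypothetical weekly export: one row per participant per week.
    # Assumed columns: participant_id, week, completed (0/1), barrier_type
    # (e.g. "connectivity", "time", "access", "network", "confidence").
    df = pd.read_csv("weekly_participation.csv")

    # Completion rate by week and barrier type.
    rates = (
        df.groupby(["week", "barrier_type"])["completed"]
          .mean()
          .unstack("barrier_type")
    )

    # Gap between each barrier group and the cohort-wide rate for that week.
    cohort_rate = df.groupby("week")["completed"].mean()
    gaps = rates.sub(cohort_rate, axis=0).stack()

    # Flag combinations that fall well below the cohort; the 15-point
    # threshold is an arbitrary example, not a recommendation.
    print(gaps[gaps < -0.15].sort_values())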

Shift from identity-only reporting to barrier-aware insight

Seedstars emphasized barrier-based segmentation as a complement to demographics.

Demographics help you understand who is in your program. Barrier tracking helps you understand what conditions participants need to succeed.

Examples of barriers you can track without overcomplicating your data:

  • Connectivity constraints (devices, bandwidth)
  • Time constraints (caregiving, multiple jobs)
  • Access constraints (language, disability accommodations)
  • Network constraints (mentor/investor access)
  • Confidence constraints (pricing, negotiation, public speaking formats)

When you track barriers, your program improvements become more targeted—and often improve the experience for everyone.

Practical MERL moves you can implement quickly

If you want MERL to guide action, you need a few repeatable habits.

1) Ask questions you can act on next week
Use pulse surveys that point to fixable causes, not vague satisfaction ratings. Examples:

  • “What prevented you from completing this week’s task?” (select one)
  • “Which format would work better next week: live / recorded / asynchronous?”
  • “Was anything inaccessible in today’s session?” (yes/no + optional details)
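
If the answers are single-select, a simple tally is usually enough to turn them into next week’s adjustments. A minimal sketch, with made-up responses and an arbitrary 20% threshold:

    from collections import Counter

    # Hypothetical single-select answers to "What prevented you from
    # completing this week's task?" (values invented for illustration).
    responses = [
        "caregiving conflict", "unclear instructions", "connectivity",
        "nothing", "caregiving conflict", "connectivity", "nothing",
    ]

    counts = Counter(r for r in responses if r != "nothing")
    total = len(responses)

    # Surface any blocker reported by more than 20% of respondents
    # so the team can pick one fix for next week.
    for blocker, n in counts.most_common():
        if n / total > 0.20:
            print(f"{blocker}: {n}/{total} respondents")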

2) Watch the “middle weeks” closely
Many programs lose founders after the initial momentum fades. Identify your typical dropout window and instrument it: attendance, assignment completion, confidence signals, and qualitative feedback.
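
One lightweight way to find that window is to chart week-over-week retention and look for the steepest drop. A minimal sketch, assuming a hypothetical attendance export (the file and column names are illustrative):

    import pandas as pd

    # Hypothetical attendance export: participant_id, week, attended (0/1).
    df = pd.read_csv("attendance.csv")

    # Share of the original cohort still attending each week.
    cohort_size = df["participant_id"].nunique()
    active = df[df["attended"] == 1].groupby("week")["participant_id"].nunique()
    retention = active / cohort_size

    # The most negative week-over-week change marks the likely dropout window.
    drop = retention.diff()
    print(retention.round(2))
    print(f"Steepest drop: week {drop.idxmin()} ({drop.min():.0%})")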

3) Treat the dashboard as a decision tool
A dashboard should not just be a display. It should trigger decisions:

  • What do we change next week?
  • Who needs outreach?
  • Which format adjustments will reduce friction?

4) Document what you changed—and what happened
If MERL is a learning system, your changes are the experiment. Keep a simple log:

  • change made
  • hypothesis (what barrier it addresses)
  • indicator you expect to move
  • result after 1–2 weeks
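
A spreadsheet is fine for this. If your team keeps program data in code, here is a minimal sketch of one possible layout for the log; the field names and example entry are illustrative, not a prescribed format:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ChangeLogEntry:
        # One row per program adjustment; field names are illustrative.
        change: str        # what was changed
        hypothesis: str    # which barrier the change is meant to address
        indicator: str     # the metric expected to move
        logged_on: date = field(default_factory=date.today)
        result: str = ""   # filled in after 1-2 weeks

    log = [
        ChangeLogEntry(
            change="Moved live sessions out of evening caregiving hours",
            hypothesis="Evening sessions clash with caregiving peaks",
            indicator="Attendance in weeks 4-6",
        ),
    ]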

The question to end every MERL meeting with

If your MERL is working, it should repeatedly answer one question:

“What should we fix next?”

Inclusion isn’t a statement of intent. It’s a set of design choices—validated (or challenged) by data. When MERL becomes predictive, it turns measurement into a practical tool for building fairer programs in real time.