Find the questions AI struggles with most, then teach it exactly those. Spend your effort where it matters.
When you give AI examples to learn from, which examples should you pick? Most people choose randomly or grab whatever's handy. But some examples are too easy — AI already knows how to handle them. The real value is in showing AI the kinds of problems that trip it up.
Active Prompting finds those tricky problems automatically. It asks AI to answer each candidate question several times, then looks at how much the answers vary. High disagreement means AI is confused — and confused is exactly where your examples will have the most impact.
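The disagreement measurement can be sketched in a few lines. This is a minimal illustration, not the paper's exact uncertainty metric: it assumes you have already collected several sampled answers per question (e.g. from repeated model calls at temperature > 0), and it scores confusion as the fraction of samples that differ from the most common answer.

```python
from collections import Counter

def disagreement(answers):
    """Fraction of samples that differ from the majority answer.
    0.0 = fully consistent (confident); closer to 1.0 = confused."""
    top_count = Counter(answers).most_common(1)[0][1]
    return 1 - top_count / len(answers)

# Hypothetical sampled answers for two questions. In practice each
# list would come from several independent model calls per question.
easy = ["4", "4", "4", "4", "4"]       # consistent -> confident
hard = ["12", "15", "12", "9", "14"]   # varied -> confused

print(disagreement(easy))  # 0.0
print(disagreement(hard))  # 0.6
```

Other uncertainty measures (entropy, for instance) work too; the key idea is just that spread across samples signals a question worth teaching.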
This composition combines:
- Show by Example
- Self-Consistency

It uses Self-Consistency's multiple-sampling technique to measure uncertainty, then feeds the hardest examples into Show by Example for maximum learning impact.
Imagine you have 50 math questions and can only afford to write detailed solutions for 4 of them to use as examples. Which 4 do you pick?
Green = consistent answers (AI is confident). Red = different answers each time (AI is confused).
Not all examples are created equal. Showing AI how to solve "2 + 2" teaches it nothing — it already knows that. Showing AI how to solve a tricky word problem it keeps getting wrong? That's where the learning happens.
Active Prompting is like a teacher who gives a diagnostic quiz first, finds out which topics the students struggle with, and then focuses the lesson on exactly those topics. It's the same amount of teaching effort, but aimed where it matters most.
1. Test AI on many questions to find where it's confused.
2. Write detailed examples for those hard questions.
3. Use them as your few-shot demonstrations.

Targeted examples beat random examples every time.
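The whole selection loop fits in a short function. This is a sketch under assumptions: `sample_fn` is a stand-in for whatever call samples one answer from your model, and the canned answers below are fabricated purely to make the example runnable.

```python
import itertools
from collections import Counter

def select_hard_questions(questions, sample_fn, k=4, samples=5):
    """Rank questions by answer disagreement; return the k most
    uncertain ones -- the candidates worth writing solutions for."""
    scored = []
    for q in questions:
        answers = [sample_fn(q) for _ in range(samples)]
        top_count = Counter(answers).most_common(1)[0][1]
        scored.append((1 - top_count / len(answers), q))
    scored.sort(reverse=True)  # most disagreement first
    return [q for _, q in scored[:k]]

# Fake model: canned answer streams per question, for illustration only.
canned = {
    "2 + 2":          itertools.cycle(["4"]),
    "trains problem": itertools.cycle(["12", "15", "9", "12", "14"]),
    "apples problem": itertools.cycle(["7", "7", "7", "7", "8"]),
}
sample = lambda q: next(canned[q])

hard = select_hard_questions(list(canned), sample, k=1, samples=5)
print(hard)  # ['trains problem'] -- the question with the most varied answers
```

In a real run you would replace `sample_fn` with a model API call and then hand-write chain-of-thought solutions for the questions it returns.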
Active Prompting combines Show by Example with the sampling idea from Self-Consistency, but uses it for a completely different purpose. Self-Consistency samples multiple answers to find the right one. Active Prompting samples multiple answers to find where AI is weakest, then teaches it there.
It's complementary with Self-Consistency too — you can use Active Prompting to select better examples, then use Self-Consistency at inference time for even more accuracy. The two address different parts of the problem: better examples going in, and better answer selection coming out.
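At inference time, the Self-Consistency half of the pairing is just a majority vote over sampled answers. A minimal sketch, again assuming the answer strings have already been sampled from the model:

```python
from collections import Counter

def majority_vote(answers):
    """Self-Consistency at inference: sample several reasoning chains
    (using your actively-selected few-shot examples in the prompt),
    then keep the most common final answer."""
    return Counter(answers).most_common(1)[0][0]

print(majority_vote(["17", "17", "21", "17", "12"]))  # 17
```

Active Prompting improves the examples going into the prompt; this vote improves the answer coming out.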