Measure

Auto review

Track engagement, gather feedback, identify follow-up opportunities

Hats: 2
Review Agents: 1
Review: Auto
Unit Type: Measurement
Inputs: Publish

Dependencies

Publish → distribution-log

Hat Sequence

1. Analyst

Focus: Track engagement metrics across all distribution channels, compare actuals against targets, and identify what drove success or underperformance. Surface patterns across content formats and audience segments.

Produces: Engagement metrics dashboard with channel-level breakdown, audience segment analysis, and performance variance commentary.

Reads: Distribution log and original campaign goals via the unit's ## References section.

Anti-patterns (RFC 2119):

  • The agent MUST NOT report vanity metrics (impressions, likes) without connecting them to meaningful outcomes
  • The agent MUST NOT attribute causation where only correlation exists
  • The agent MUST NOT compare metrics across channels without normalizing for platform differences
  • The agent MUST NOT ignore underperforming channels without analyzing why
  • The agent MUST distinguish between reach (who saw it) and engagement (who acted on it)
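The Analyst's core task above — comparing actuals against targets with per-channel variance commentary — could be sketched as follows. This is a minimal illustration, assuming a simple channel/actual/target schema; the channel names, numbers, and field names are hypothetical, not part of this unit definition.

```python
# Hypothetical sketch: per-channel variance of actual engagement vs. target.
# Schema ({"actual": ..., "target": ...}) and channel names are assumptions.

def variance_report(channels):
    """Return per-channel variance (%) of actuals against targets."""
    report = {}
    for name, data in channels.items():
        actual, target = data["actual"], data["target"]
        # Positive = overperformed target, negative = underperformed.
        report[name] = round((actual - target) / target * 100, 1)
    return report

channels = {
    "newsletter": {"actual": 4200, "target": 4000},
    "dev_forum": {"actual": 950, "target": 1500},
}
print(variance_report(channels))  # {'newsletter': 5.0, 'dev_forum': -36.7}
```

Reporting the variance rather than raw counts is what keeps the output aligned with the anti-pattern above about vanity metrics: every number is anchored to a defined target.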
2. Feedback Synthesizer

Focus: Gather and categorize developer feedback from comments, community discussions, and direct responses. Synthesize qualitative signals into actionable themes and identify follow-up content opportunities.

Produces: Feedback synthesis with categorized themes, sentiment analysis, notable quotes, and prioritized follow-up recommendations.

Reads: Community manager's engagement notes and analyst's metrics via the unit's ## References section.

Anti-patterns (RFC 2119):

  • The agent MUST NOT cherry-pick only positive feedback while ignoring criticism
  • The agent MUST NOT over-index on a single loud voice instead of identifying patterns
  • The agent MUST NOT categorize feedback without preserving representative quotes
  • The agent MUST NOT recommend follow-ups without connecting them to specific feedback themes
  • The agent MUST flag feedback that reveals misunderstandings the content should have prevented
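The theme-categorization step — grouping raw feedback while preserving representative quotes, as the anti-patterns above require — could look roughly like this. The theme keywords and sample feedback are illustrative assumptions; a real synthesizer would use richer classification.

```python
from collections import defaultdict

# Hypothetical sketch: bucket feedback into themes by keyword match while
# keeping verbatim quotes per theme. Keywords and samples are assumptions.

THEME_KEYWORDS = {
    "docs_gap": ["unclear", "confusing", "missing docs"],
    "feature_request": ["wish", "would love", "please add"],
}

def synthesize(feedback_items):
    themes = defaultdict(list)
    for item in feedback_items:
        text = item.lower()
        for theme, keywords in THEME_KEYWORDS.items():
            if any(k in text for k in keywords):
                themes[theme].append(item)  # preserve the verbatim quote
    # Rank themes by how many feedback items they cover, not by loudness.
    return sorted(themes.items(), key=lambda t: len(t[1]), reverse=True)

feedback = [
    "The setup section is confusing",
    "Would love a streaming example",
    "Auth flow was unclear to me",
]
for theme, quotes in synthesize(feedback):
    print(theme, len(quotes), repr(quotes[0]))
```

Ranking by item count rather than by any single response is one way to avoid over-indexing on a single loud voice, and carrying the original strings through keeps representative quotes attached to each theme.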

Review Agents

ROI

Mandate: The agent MUST verify the impact analysis is data-driven and produces actionable recommendations.

Check:

  • The agent MUST verify that metrics compare actuals against defined targets, not just report raw numbers
  • The agent MUST verify that channel analysis identifies specific drivers of success or underperformance
  • The agent MUST verify that feedback synthesis is backed by representative developer quotes, not paraphrased assumptions
  • The agent MUST verify that follow-up recommendations are prioritized by projected impact and connected to specific findings
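The checks above are stated as review mandates; a minimal sketch of how they might be mechanized as structural validation of the impact report is shown below. The report schema (field names like "target", "quotes", "source_theme") is a hypothetical assumption for illustration.

```python
# Hypothetical sketch: structural validation mirroring the ROI review checks.
# The report schema below is an assumption, not defined by this unit.

def review(report):
    problems = []
    for channel, m in report["metrics"].items():
        if "target" not in m or "actual" not in m:
            problems.append(f"{channel}: raw numbers without a defined target")
    for theme in report["feedback_themes"]:
        if not theme.get("quotes"):
            problems.append(f"{theme['name']}: no representative quotes")
    for rec in report["follow_ups"]:
        if "impact" not in rec or "source_theme" not in rec:
            problems.append(f"{rec['title']}: not tied to impact or findings")
    return problems

report = {
    "metrics": {"newsletter": {"actual": 4200, "target": 4000}},
    "feedback_themes": [{"name": "docs_gap", "quotes": ["setup is confusing"]}],
    "follow_ups": [{"title": "Setup guide", "impact": "high",
                    "source_theme": "docs_gap"}],
}
print(review(report))  # [] -> all checks pass
```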


Criteria Guidance

Good criteria examples:

  • "Impact report compares actual engagement metrics against targets with variance analysis per channel"
  • "Feedback synthesis categorizes developer responses into actionable themes with sentiment analysis"
  • "Follow-up recommendations are prioritized by potential impact and effort"

Bad criteria examples:

  • "Metrics are tracked"
  • "Feedback is gathered"
  • "Report is written"

Completion Signal (RFC 2119)

Impact report MUST exist with engagement metrics vs. targets, channel-level breakdown, and audience segment analysis. Analyst MUST have identified top-performing content with specific drivers of success. Feedback synthesizer MUST have categorized community feedback into themes and produced prioritized follow-up recommendations with projected reach.