Putting Working Group Scorecards to Use: Turning Feedback into Meaningful Change

You have the scorecards. Now what? This guide walks through how to read coalition health data, identify what patterns mean, and translate what you're seeing into targeted support and structural change.

Overview

Data, no matter the source, is only useful if it changes what people do. The Coalition Health Diagnostic Tool was designed to make collaboration visible. Its real value is not in the scorecards themselves, but in how leaders and backbone staff use those insights to decide where to focus attention, how to support working groups, and when to intervene. What follows is a practical way to interpret coalition health data and use it to guide support, alignment, and change. If you haven’t already read the grounding piece, Coalition Health Diagnostic & Governance Framework, it provides helpful context for what comes next.

Two Views, Two Purposes

The tool produces two kinds of insight: (1) Individual Coalition Scorecards, which help each group understand its own strengths, tensions, and opportunities for growth, and (2) the Cross-Coalition Summary, which allows leadership and staff to see patterns across the system and decide where facilitation, redesign, or additional support is needed.

Individual Coalition Scorecard
  • Focus: Learning and alignment
  • Who uses it: Working group chairs, members, and staff
  • What it shows: How collaboration is experienced within the group
  • Key questions: Where are we aligned? Where do we have gaps?
  • Primary value: Creates shared language and entry points for reflection

Cross-Coalition Summary
  • Focus: Governance and strategy
  • Who uses it: Leadership and backbone staff
  • What it shows: Patterns across the system as a whole
  • Key questions: Where does the system need support? What needs to change structurally?
  • Primary value: Distinguishes one-off challenges from structural ones

The most important thing to remember before looking at examples: these are not report cards. They are tools for understanding where support is needed and where the system is working. The goal is never to rank groups against each other or assign blame when scores are low. It is to surface patterns, ask better questions, and direct resources and attention where they will have the most impact. When data is used this way, it builds trust rather than eroding it. Groups become more willing to be honest about what is and is not working, because they have seen that honesty leads to support, not consequences.

What does this look like in practice? Here are two examples showing how the same data looks from each vantage point — one at the working group level, one at the system level.

Individual Group Scorecards: focus on learning & alignment
A working group focused on reproductive justice might see strong scores for meeting structure and facilitation, but lower scores for accountability and data use. Chairs feel the meetings are productive, but members are unsure what they are responsible for between sessions.
  • Pattern Identified: Mismatched views between members and chairs.
  • Solution: Using the scorecard, the backbone team works with the chair to clarify decision points, create a simple action tracker, and send follow-up notes so commitments are visible.
  • Outcome: Within a few months, participation stabilizes and members report greater confidence in how their work connects to the group's goals.
Data leads to support, not punishment!
System-Level Summary: focus on governance & strategy
Across multiple working groups, leadership sees consistently low scores in the Define Result domain. No single group is "failing," but many are struggling to clearly articulate what they are working toward or how their activities connect to shared outcomes.
  • Pattern Identified: A system-wide lack of clarity about goals and results.
  • Solution: The backbone organization supports groups in defining shared outcomes, aligns goal-setting across the network, and provides tools to connect activities to results.
  • Outcome: Backbone staff implemented a Results-Based Accountability framework. With this shift, groups gained a clearer sense of purpose, could better describe what success looks like, and were better able to coordinate their work around shared goals.
Data guides organizational learning, not blame!

These two views serve different purposes and should not be used in the same way. Confusing the two can lead to defensiveness or inaction.

How to Read a Working Group Scorecard

Remember, a working group scorecard is not a report card. It is a way of seeing how collaboration is actually experienced.

Each row represents a different vantage point within the system. Chairs, members, and backbone staff are working toward the same goals, but they do not always experience the work the same way. When those perspectives diverge, the difference is often more important than the score itself. Three common patterns to look out for are described below.

1. Chairs rate domains higher than members

This typically indicates the chairs believe meetings are productive, while members are less clear about the decisions made and how those decisions are translated into action. It points to the need for clearer communication, stronger facilitation, and more explicit follow-through.

This is not necessarily a failure of group leadership. Here is what each role can do:

  • Staff — offer coaching and practical tools such as more detailed prep materials, a concrete decision-making structure like Fist to Five, and follow-up notes after meetings
  • Chairs — be open to feedback and listen to staff suggestions
  • Leadership — ensure chairs have the time and support to lead effectively
2. Members rate domains higher than staff

This usually means the group feels energized and connected with each other, but not yet aligned with the backbone's broader strategy, data practices, or expectations. When you see this pattern, it signals that staff and chairs need to work together to translate excitement and momentum into coordinated and durable action.

  • Staff — help translate activity into measurable results and shared reporting
  • Chairs — clarify priorities so effort is aligned with system-level goals
  • Members — continue doing the work, but be more explicit about how it connects to shared outcomes
3. Scores are low across all perspectives

When chairs, members, and staff all rate a domain poorly, the issue is rarely about effort. It is about structure. The group may lack a clear purpose, effective leadership, or the right mix of partners.

  • Staff — bring options to the table, including sunsetting the group or rebuilding it
  • Chairs — be explicit about what is not working, explore constraints, and advocate for additional resources
  • Members — share openly where work is breaking down and what support is needed to contribute meaningfully

These patterns are not verdicts on whether a group is strong or weak. They offer a way to understand where responsibility, structure, and support need to be adjusted. When coalition health is read this way, it becomes a way to care for the system and the people within it.
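For teams that keep scorecard responses in a spreadsheet, the perspective comparisons described above can be spotted with a small script. Below is a minimal sketch, assuming a hypothetical CSV with columns group, domain, role, and score; the column names, role labels, and thresholds are illustrative choices, not part of the diagnostic tool itself.

```python
# Minimal sketch of the "compare perspectives" reading described above.
# Assumes a hypothetical CSV of scorecard responses with columns:
#   group, domain, role (chair / member / staff), score (1-5).
# The column names, role labels, GAP, and LOW values are illustrative only.
import pandas as pd

GAP = 0.75   # how far apart two perspectives must be to count as a mismatch
LOW = 2.5    # at or below this, a domain is treated as low for every role

responses = pd.read_csv("scorecard_responses.csv")

# Average score for each group / domain / role combination.
averages = (
    responses
    .groupby(["group", "domain", "role"])["score"]
    .mean()
    .unstack("role")  # one column per perspective: chair, member, staff
)

for (group, domain), row in averages.iterrows():
    chair, member, staff = row.get("chair"), row.get("member"), row.get("staff")
    if pd.isna([chair, member, staff]).any():
        continue  # skip domains where a perspective did not respond
    if chair - member >= GAP:
        print(f"{group} / {domain}: chairs rate higher than members (pattern 1)")
    if member - staff >= GAP:
        print(f"{group} / {domain}: members rate higher than staff (pattern 2)")
    if max(chair, member, staff) <= LOW:
        print(f"{group} / {domain}: low across all perspectives (pattern 3)")
```

The script only flags candidates for conversation; the interpretation still belongs to the chairs, members, and staff who live the work.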

How to Read the Cross-Coalition Summary

The cross-coalition summary answers a different question: Where is the system healthy, and where is it under strain?

When reviewing the cross-coalition summary, leadership should look for three things: (1) clusters of low scores in the same domain across several groups, (2) recurring misalignment between chairs, members, and staff, and (3) signals that point to backbone-wide changes rather than group-level fixes.

1. Multiple groups score low in the same domain

Instead of indicating a problem with any one group, this pattern points to how the backbone organization defines success, supports measurement, or holds everyone accountable to expectations around data sharing. Responses might include clearer expectations (like a working group participation agreement with members), improvements to data infrastructure, or additional funding to help partners meet reporting requirements.

2. Groups show mixed signals within a domain

This signals disagreement within a group about how well it is meeting expectations in that domain. Leadership can look for places where there is strong alignment and highlight those coalitions as sources of learning for the network. This might take the form of a peer-led panel highlighting effective practices or small stipends for chairs to mentor others. Remember, the goal is not to compare groups, but to recognize that while the backbone staff can provide structure and resources, working group members are experts in their own experience. Often, the most meaningful insights are communicated best through peers.

With this interpretation, the cross-coalition summary becomes a tool for stewarding the system rather than managing individual groups. It helps leadership see where the structure of the network needs to change, where additional support is required, and where learning already exists inside the community. When patterns guide decisions, the work shifts from reacting to problems to intentionally building the conditions that allow collaboration to thrive.
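If the same response data is pooled across groups, the first thing leadership looks for, clusters of low scores in the same domain, is a small extension of the earlier sketch. Again, the LOW threshold and the "more than half of groups" rule are illustrative assumptions, not prescriptions from the tool.

```python
# Minimal sketch of the cross-coalition reading: which domains score low
# in many groups at once? Uses the same hypothetical CSV as the sketch above;
# LOW and the "half of groups" rule are illustrative assumptions.
import pandas as pd

LOW = 2.5

responses = pd.read_csv("scorecard_responses.csv")

# Average score per group and domain, pooled across all roles.
group_domain = (
    responses.groupby(["group", "domain"])["score"].mean().unstack("domain")
)

total_groups = len(group_domain)
for domain in group_domain.columns:
    low_groups = int((group_domain[domain] <= LOW).sum())
    if low_groups > total_groups / 2:
        print(f"{domain}: low in {low_groups} of {total_groups} groups -> "
              "likely a system-level issue, not a group-level one")
```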

Turning Patterns into Action

The most important takeaway is this:

Instead of looking for what's broken, look for where support would make the biggest difference.

Don’t ask which groups are failing. Instead ask what kind of support the network needs right now. Coalition data is most powerful when it guides how attention, time, and resources are allocated. It helps leadership see where facilitation would be most helpful, which chairs would benefit from coaching, where goals need more clarity, when leadership structures should be adjusted, and how data and accountability practices can be strengthened across the network.

Because the data is triangulated across chairs, members, and staff, these decisions are grounded in patterns rather than individual perspectives. Leaders are not reacting to a single story, but responding to what the system is showing them.

Protecting Trust While Using Data

Data only works when people trust how it is used. This means working group scorecards are used for reflection and learning, not ranking. System-level summaries are used for strategy and support, not blame. There needs to be transparency about what the data will and will not be used for.

This kind of trust also requires attention to power, equity, and community voice. Resources like Why Am I Always Being Researched? from Chicago Beyond, Urban Institute's Elevate Data for Equity project, and the Data Equity Framework from We All Count offer practical guidance for designing data systems that are transparent, reciprocal, and grounded in shared responsibility rather than extraction (all linked below).

When groups see that honest feedback leads to engaging facilitation, clearer expectations, and real support rather than punishment, they become more open over time. That openness is what makes learning, adaptation, and improvement possible.

Why This Matters

Complex systems and networks rarely struggle because people do not care. They struggle because no one can clearly see how the network is actually functioning. By combining scorecards, triangulated perspectives, and continuous feedback loops, this data makes collaboration visible in a way that supports better decisions and more human-centered leadership. It creates a shared understanding, surfaces where the work needs support, and helps leaders care for both people and the structures that make collective work possible. This is how collaboration becomes something leaders can steward with care and intention, and something people can rely on rather than just hope for.



Want to Dive Deeper?

If any of this resonated and you want to keep going, here's where I'd send you next.


Fist to Five Consensus-Building Tool explained by Civic Canopy.
I have used this method in hundreds of meetings! If your group is used to majority rules in decision-making, this can be a rough transition, but stick with it! This method really highlights the art of making the implicit explicit and how conversation can really strengthen the decisions made.

Masterful Meetings: Meeting Facilitation Training offered by Leadership Strategies.
I completed this training a few years ago, and it opened my eyes to how, as the facilitator, the energy you bring into the space can really make or break the meeting.

Why Am I Always Being Researched? by Chicago Beyond.
I keep the pocket version of this report on my desk as a quick reference and reminder of what's really important. It is a powerful reframing toward respect, reciprocity, and shared ownership of knowledge.

The Data Equity Framework from We All Count.
I love this framework because it reminds me data is not just numbers. It is a series of choices about what gets collected, who is represented, and how information is used. It is a great tool for surfacing power and accountability in all data work.

Principles for Advancing Equitable Data Practice by Marcus Gaddy and Kassie Scott.
Part of Urban Institute's Elevate Data for Equity project, this piece starts in familiar territory: protections from the Institutional Review Board (IRB) and the Belmont Report. It then expands those concepts to include questions of power and equity. I find myself coming back to it whenever I'm thinking about designing data systems and tools that are technically solid, but also ethical and human-centered.

