How Leaders Build Clarity Around Evidence, Artifacts, and Observation for Specialized Staff
- Josh Morgan

One of the fastest ways to create frustration in evaluation is to leave people guessing about what counts as evidence. That problem gets even bigger with specialized staff. In many school systems, educators in specialized roles do important work that does not fit neatly into one visible lesson. Their impact may show up through direct services, consultation, progress monitoring, documentation, family communication, collaboration with teams, coaching, or systems problem-solving. If leaders do not build clarity around evidence, artifacts, and observation, evaluation quickly becomes confusing, inconsistent, and overly dependent on what happens to be easiest to see.
That is not fair to staff, and it is not good leadership.
Observation alone is not enough
Traditional evaluation systems often lean heavily on observation. That makes sense in many classroom roles, where instruction can be seen in a defined block of time. But specialized roles often require a broader lens.
Some of the work is visible. Leaders may observe direct services, small-group support, consultation meetings, training sessions, collaborative planning, or family-facing conversations. But some of the most important work happens off-stage. It happens in preparation, analysis, follow-up, documentation, coordination, and decision-making that shapes student access and support.
That is why observation alone is not enough.
When leaders rely too heavily on what they happen to see in a moment, they can miss the larger picture of practice. They may undervalue the quality of planning, the use of data, the consistency of follow-through, or the systems thinking behind the work. Strong evaluation requires leaders to understand that some roles produce impact in ways that are both visible and invisible.
Artifacts should not feel like random paperwork
One reason evaluation becomes frustrating is that artifacts are sometimes treated like a scavenger hunt.
Staff start collecting documents because they feel they should have something to upload, submit, or show. Leaders ask for evidence, but the connection between the artifact and the actual work is not always clear. As a result, people end up with folders full of material that proves activity without really showing practice.
That is not the purpose of artifacts.
Artifacts should help tell the story of the work. They should show how an educator plans, responds, collaborates, monitors, communicates, and adjusts in ways that support students and teams. Good artifacts are not random extras. They are natural traces of thoughtful practice.
That means leaders need to help staff move from this question:
What can I put in my folder?
to this one:
What evidence best reflects how I do this work well?
That is a very different conversation.
Clarity starts when leaders name the actual work
The strongest evaluation conversations start with role clarity.
Before leaders can identify strong evidence, they need to understand what the role actually asks of the educator.
For specialized staff, that often includes several layers of work happening at once:
direct service to students
support across multiple settings
consultation with staff
communication with families
progress monitoring and data review
planning and documentation
problem-solving with teams
adjustment of services and supports over time
Once leaders understand the role more clearly, evidence becomes easier to define. A communication log is not just a log. It may show family partnership, consultation, responsiveness, and follow-through. Progress monitoring is not just a chart. It may show instructional decision-making, reflection, and service adjustment. A meeting agenda or notes document may show collaborative planning, leadership, and problem-solving.
The point is not to collect more. The point is to connect evidence more meaningfully to the work.

Leaders should help staff see the difference between evidence and volume
More evidence does not always mean better evidence.
That is one of the most important lessons in this work.
Sometimes staff feel pressure to over-document because they are unsure what matters most. That can create unnecessary stress and clutter. On the other hand, some educators under-document meaningful work because they assume only formal observations count.
Leaders need to help staff understand the difference between volume and relevance.
A smaller set of strong, aligned artifacts is usually more useful than a large pile of disconnected documents. What matters is whether the evidence reflects the core responsibilities of the role, shows thoughtful practice, and helps support a meaningful conversation about impact and growth.
This is where leadership matters. Good leaders reduce noise. They do not increase it.
Observation and artifacts should work together
The best evaluation systems do not force leaders to choose between observation and artifacts. They use both.
Observation helps leaders see practice in action. It captures interaction, communication, responsiveness, facilitation, instruction, and professional presence in real time.
Artifacts help leaders see the thinking behind the work. They reveal planning, data use, collaboration, follow-up, reflection, systems design, and consistency over time.
Used together, they create a fuller picture.
That fuller picture matters because specialized roles often cannot be understood well through one lens alone. A leader might observe a consultation meeting and then review follow-up notes, planning documents, communication records, or progress monitoring to understand whether the work was coherent and effective over time. That kind of evaluation is not softer. It is actually more rigorous because it is more accurate.
Good leaders make evidence usable, not mysterious
One of the best things leaders can do is make evidence feel usable.
That means:
giving examples of meaningful artifacts
helping staff understand what different evidence types can show
clarifying how observations and artifacts connect
revisiting expectations over time
creating space for questions
using coaching conversations to build confidence and precision
This is where team structures like PLCs can help. Staff should not have to decode the entire evaluation system alone. They need opportunities to compare examples, discuss gray areas, and build a shared understanding of what strong evidence looks like in their roles.
When leaders do that well, evaluation becomes clearer and more credible.
Evidence should strengthen coaching, not just ratings
This may be the most important leadership point of all.
Evidence should not exist only to justify a score.
It should make coaching stronger.
The point of reviewing artifacts and observations is not just to confirm whether something happened. It is to help leaders and staff talk more clearly about practice.
Strong evidence allows for better questions:
What patterns are showing up in the work?
What seems strong and consistent?
Where are there gaps between intent and implementation?
What support or refinement would improve impact?
What does this suggest about the system around the educator?
That is when evidence becomes useful. It moves the conversation from technical completion to professional growth.
What this means for my leadership
This work has made me think more carefully about what it means to evaluate people fairly and clearly.
I do not want staff guessing about what counts. I do not want evidence to feel like random paperwork. And I do not want leaders relying on narrow snapshots of practice when the role itself is broader than that.
I want evaluation systems that help people understand the work, represent the work more accurately, and grow through better conversations about the work.
For specialized staff especially, that means leaders need to build clarity around evidence, artifacts, and observation in deliberate ways. When leaders do that well, evaluation becomes more accurate, more supportive, and more connected to the real responsibilities of the role.
That is better for staff, and it is better for students.

Evaluation becomes confusing and inconsistent when specialized staff are not given clear guidance about what counts as meaningful evidence, how artifacts connect to their roles, and how observation and documentation work together.
Strong leaders build clarity by defining the work of the role, identifying aligned evidence sources, using artifacts and observation together, and creating shared structures that help staff understand, organize, and reflect on meaningful evidence over time.
When evidence is clearer and better aligned, evaluation becomes more accurate, coaching becomes more useful, staff experience less confusion, and schools gain a stronger picture of practice, growth, and impact.