Fit-Gap Analysis

SME Whisperer Case Study¹

Student affairs professionals at James Madison University (JMU) are expected to demonstrate that their programs/events lead to student learning and development in important non-cognitive areas (e.g., intercultural competency). However, yearly assessment reports consistently lack evidence of student learning, often focusing instead on program attendance and student satisfaction.

The Client: Center for Assessment & Research Studies (CARS)

The Ask: CARS seeks to promote student learning assessment within the Division of Student Affairs by addressing a perceived “lack of cohesion and intentionality” with respect to JMU’s current assessment training/resource offerings.

Background Information:

  • CARS comprises a team of full-time faculty and graduate students who provide assessment support to the Division and conduct assessment-related research.
  • Past needs assessment results indicate student affairs professionals at JMU value student learning assessment but report not having sufficient time, training, or leadership support to engage in it effectively.
  • Given the diversity of functions and programming offered by student affairs offices across the Division, divisional expectations for assessment are vague and (consequently) difficult to enforce.
  • Professional organizations have outlined broad standards for assessment in student affairs, but there has been no articulation of the specific knowledge/skills required for effective assessment practice.

The Approach:

A rubric was developed to articulate the knowledge, skills, and attitudes needed to engage in quality assessment practice, as well as the learning progression in these skills from novice to advanced.

SMEs within CARS and the Division of Student Affairs engaged in a facilitated card sort activity to rank the importance of each rubric skill for effective student affairs assessment practice at JMU.

SMEs mapped existing assessment trainings and resources to rubric skills to identify critical gaps (high-importance skills with little/no training) and inefficiencies (low-importance skills with abundant training).

Step 1: Determine necessary skills
[Image: Excerpt from the Assessment Skills Framework (Skill Area 2: Create and Map Programming to Outcomes), showing novice-, intermediate-, and advanced-level descriptors for two assessment skills: "Developing theory-based programs" and "Mapping of SLOs with curriculum".]

Key Tasks/Decisions:

  • Relied heavily on professional standards, peer-reviewed research, and a review of CARS training documents to create a strong first draft (thus reducing the number of feedback iterations to protect SMEs’ time).
  • Sought feedback from CARS faculty, student affairs leadership, and select student affairs professionals across the Division.
  • Solicited high-level feedback via survey and more detailed feedback through follow-up interviews to streamline the process.
Step 2: Prioritize skills by importance
[Image: Card sort activity. Three columns of note cards spread out on a table, each printed with the name of an assessment skill from the Assessment Skills Framework.]

Key Tasks/Decisions:

  • Facilitated an in-person card sort activity with key stakeholders within CARS and across the Division of Student Affairs. Workshop participants were asked to…
    1. Review the Assessment Skills Framework (rubric).
    2. Organize traits/domains into high, moderate, and low importance categories.
    3. Rank the high-importance traits/domains from most to least important.
    4. Debrief about the card sort process and discuss/align on rankings.
  • Solicited feedback from less central stakeholders via an online survey version of the card sort activity.
  • Compiled results and shared final rankings with key stakeholders for approval (one way to compile such rankings is sketched below).
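
The case study doesn't specify how individual rankings were combined into final rankings; below is a minimal sketch, assuming a simple mean-rank aggregation across participants (the participant names, skill names, and ranks are all hypothetical):

```python
from statistics import mean

# Hypothetical card-sort results: each SME's ranking of the traits/domains
# they placed in the "high importance" pile (1 = most important).
# Skill names are illustrative, not the actual framework items.
rankings = {
    "SME A": {"Writing measurable SLOs": 1, "Mapping SLOs to curriculum": 2, "Selecting instruments": 3},
    "SME B": {"Mapping SLOs to curriculum": 1, "Writing measurable SLOs": 2, "Selecting instruments": 3},
    "SME C": {"Writing measurable SLOs": 1, "Selecting instruments": 2, "Mapping SLOs to curriculum": 3},
}

def compile_rankings(rankings):
    """Average each skill's rank across participants; lower mean = higher priority."""
    ranks_by_skill = {}
    for participant_ranks in rankings.values():
        for skill, rank in participant_ranks.items():
            ranks_by_skill.setdefault(skill, []).append(rank)
    final = [(skill, mean(ranks)) for skill, ranks in ranks_by_skill.items()]
    return sorted(final, key=lambda pair: pair[1])

for skill, avg_rank in compile_rankings(rankings):
    print(f"{avg_rank:.2f}  {skill}")
```

With complete rankings from every participant, mean rank produces the same ordering as a Borda count; any remaining disagreements or ties can then be resolved during the debrief step.
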
Step 3: Identify critical training gaps
[Image: Excerpt from the Assessment Skills Framework (Competency Level Overview).]

Key Tasks/Decisions:

  • Compiled an inventory of existing JMU trainings and resources through an extensive document/website review and follow-up SME interviews. To assist with identifying training gaps/inefficiencies, the following information was gathered for each professional development offering (these fields reappear in the sketch after this list):
    • Cost (to individual, CARS, and/or the Division)
    • Timing/frequency
    • Mode of delivery (e.g., in person, video, email)
    • Intended audience (e.g., incoming skill level, level of seniority, office/role)
    • Potential/actual reach (how many learners can/do benefit each year)
  • Coordinated with CARS leadership to budget time during the yearly team retreat to gather information about the skill coverage of existing CARS trainings and resources. CARS faculty were asked to respond to the following questions:
    • What skills/attitudes are targeted by the training/resource?
    • (Training Only) What competency level can participants be reasonably expected to achieve by the end of the training?
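
The cross-referencing at the heart of the fit-gap analysis is straightforward to make explicit in code. Below is a minimal sketch; the offerings, skills, importance labels, and coverage thresholds are all hypothetical stand-ins (in practice, SMEs judged coverage qualitatively):

```python
from dataclasses import dataclass

@dataclass
class Offering:
    """One professional development offering from the training inventory."""
    name: str
    skills: set          # rubric skills the offering targets
    cost: str            # to the individual, CARS, and/or the Division
    frequency: str       # timing/frequency
    mode: str            # e.g., in person, video, email
    audience: str        # intended audience
    yearly_reach: int    # learners who can/do benefit each year

# Importance labels come from the Step 2 card sort; data here is illustrative.
importance = {
    "Writing measurable SLOs": "high",
    "Mapping SLOs to curriculum": "high",
    "Reporting results to stakeholders": "low",
}

inventory = [
    Offering("Assessment 101 workshop", {"Writing measurable SLOs"},
             "low", "each fall", "in person", "new professionals", 40),
    Offering("Annual report template guide", {"Reporting results to stakeholders"},
             "none", "always available", "website", "all staff", 200),
    Offering("Reporting brown-bag series", {"Reporting results to stakeholders"},
             "low", "monthly", "in person", "all staff", 60),
]

# Total yearly reach per skill as a rough proxy for training coverage.
coverage = {skill: 0 for skill in importance}
for offering in inventory:
    for skill in offering.skills & coverage.keys():
        coverage[skill] += offering.yearly_reach

# Thresholds (50/100 learners per year) are assumptions for illustration.
gaps = [s for s, imp in importance.items() if imp == "high" and coverage[s] < 50]
inefficiencies = [s for s, imp in importance.items() if imp == "low" and coverage[s] >= 100]

print("Critical gaps (high importance, little/no training):", gaps)
print("Inefficiencies (low importance, abundant training):", inefficiencies)
```

In practice the cutoffs would come from SME judgment rather than fixed numbers; the value of the exercise is making the importance-by-coverage cross-tabulation explicit.
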

What we found: While there was ample assessment training aimed at getting student affairs professionals to the novice level in most skills (i.e., able to apply their assessment knowledge to simplified or hypothetical examples), there was no training to support professionals in progressing to the intermediate level (able to apply their knowledge to real-life assessment projects).

Additionally, offerings that allowed for hands-on practice were provided infrequently and at high cost (financial/time) to individuals and the Division.

Next steps: A plan was developed to institute a cohort-based training model where learners meet regularly over the course of a year to design and implement real-life assessment projects within their offices while receiving timely, personalized instruction and feedback from CARS faculty/graduate students.

What went well:

  • High SME engagement and satisfaction with the process. Clear communication (and negotiation) with SMEs in advance about their roles during each phase and the anticipated time commitment facilitated a smoother process and enabled SMEs to budget sufficient time to engage thoughtfully.
  • Strategic inclusion of diverse stakeholder voices. A diversity of stakeholders (from CARS experts and divisional leaders to on-the-ground student support staff) were strategically consulted at different points in the process based on when/where their feedback would be most valuable.
  • Identification of “sneaky” training gaps. Gathering detailed information on existing training opportunities (e.g., cost, timing, reach) made it possible to identify instances where training existed, but was inaccessible (or even unknown) to those who needed it most.
What could have been improved:

  • Greater focus on actual (vs. desired) assessment tasks. Rather than focusing exclusively on what student affairs professionals should do with respect to assessment, a deeper understanding of their actual day-to-day assessment tasks would have allowed for learning solutions that integrate more readily into existing workflows and that cover knowledge/skills more easily transferred to the workplace.
  • Reducing burden on SMEs. The cognitive/time burden on CARS faculty could have been further reduced by asking them to provide targeted feedback on only a subset of the skills in the Assessment Skills Framework (those most closely related to their areas of expertise).
  1. This case study describes work that took place in 2018–2019 and does not necessarily reflect current conditions at JMU; descriptions of events/outcomes are subjective and written from my lens as Graduate Student Lead of the CARS Professional Development Team.