Incorporating Learner Feedback into Your Design Process
Aug 25, 2021
One of the key frameworks for learning experience designers is the ADDIE model: analysis, design, development, implementation, and evaluation. The model is meant to be a cycle, with evaluation informing revisions and iterations.
As a third-party developer, Six Red Marbles tends to be most heavily involved in the second and third steps. Generally, the client provides the analysis of the need, and our LXD team focuses on design and development. Sometimes we do collaborate on the analysis. But implementation usually happens on the client’s end, which means we develop a course and then send it off into the world, with little insight into how learners actually experience and engage with the content we made.
So, when we get the opportunity to review feedback and participate in the evaluation stage of the cycle, it can be exciting—but what do we do with it? How can we go about analyzing this feedback and triaging changes to implement?
Soliciting Feedback
While we’re very accustomed to receiving and implementing feedback from client stakeholders, that process is generally driven by the client—we create the content per the client’s guidance, and then the client reacts to it. Occasionally, we might flag something we specifically want input on, but for the most part the client decides where to focus their attention.
The process for soliciting learner feedback is necessarily different because we get to drive it, focusing learners’ input on particular topics. If we’re collecting feedback from a small group of test users, we create a detailed document to capture comments on each component of our design. For live user research, we have to stay more high-level, so we usually create a survey that captures both quantitative and qualitative data on concept, usability, learner effort, and learner satisfaction. If the client prefers, we can also solicit learner feedback through interviews or “voice of the learner” sessions, which add depth and give us the chance to ask follow-up questions.
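To make the data model concrete, here’s a minimal sketch of what a single survey record might look like, pairing Likert-style ratings on those four areas with open-ended comments. The field names, scales, and sample data are purely illustrative assumptions, not our actual survey instrument:

```python
# Illustrative sketch only: field names and scales are assumptions,
# not a production survey schema.
from dataclasses import dataclass, field

@dataclass
class SurveyResponse:
    # Quantitative: 1-5 Likert ratings on the four areas named above
    concept: int
    usability: int
    learner_effort: int
    learner_satisfaction: int
    # Qualitative: open-ended comments, reviewed later for themes
    comments: list[str] = field(default_factory=list)

responses = [
    SurveyResponse(4, 3, 2, 4, ["Loved the skills practice", "Prework felt repetitive"]),
    SurveyResponse(5, 4, 3, 5, ["Clear, well-paced instructions"]),
]

# Quick quantitative read: mean rating per area
for area in ("concept", "usability", "learner_effort", "learner_satisfaction"):
    scores = [getattr(r, area) for r in responses]
    print(f"{area}: {sum(scores) / len(scores):.1f}")
```

Keeping the numeric ratings and the free-text comments in one record makes it easy to cross-reference a low score with the comment that explains it.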
Sorting Qualitative Feedback
The first thing we do is look through the feedback to identify recurring themes. The more frequently something is mentioned, the higher it goes on the list of action items. This thematic approach allows us to see the big picture and identify both common problems and particular successes. The positives are just as important as the negatives because revisions shouldn’t only be about fixing problems—enhancing a learning experience also entails building on the bits that are already working well. The positive themes can also inform the direction of revisions that address issues.
In a recent course, for example, many learners indicated that they appreciated skills-based activities but found the prework for the activities repetitive. To address this feedback, we streamlined the prep questions and revised instructions to clarify the purpose of the prework. We also reoriented some activities to be more hands-on, leveraging the positive (skills practice) to further alleviate the negative (repetitiveness).
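For the tallying step itself, here’s a minimal sketch, assuming comments have already been hand-tagged with themes (the tags and comments below are invented for illustration):

```python
# Illustrative sketch: rank hand-tagged feedback themes by frequency so the
# most-mentioned items rise to the top of the action list. Data is invented.
from collections import Counter

tagged_comments = [
    ("Loved the hands-on activities", "skills practice"),
    ("Prep questions asked the same thing twice", "repetitive prework"),
    ("More practice like this, please", "skills practice"),
    ("Why answer these before every activity?", "repetitive prework"),
    ("Great real-world scenarios", "skills practice"),
]

theme_counts = Counter(theme for _, theme in tagged_comments)

# Most common themes first: these head the list of action items
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} mention(s)")
```

The real judgment work is in the tagging, of course; the count just keeps the revision conversation anchored to what learners actually said most often.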
After the common themes, we look for any other feedback that feels essential, even if it’s not mentioned frequently. This includes:
- Accessibility: If something is inaccessible to a learner, we need to address it immediately. We take a learner-centered approach to our course design, and that doesn’t work if the learner is missing out.
- Inclusivity: Like accessibility issues, inclusivity or bias concerns may not be obvious to all learners—but they’re very important for the learners who do notice them. We strive for inclusive, welcoming designs that reflect the diversity of our learners, so this type of feedback is especially valuable. In some cases, a single comment might prompt us to consider DEI from a new angle, which then informs global revisions to the course.
- Key insights: We believe we can learn just as much from each project as our learners do, and learners’ fresh perspectives can be a great source of insight. Sometimes the most valuable comments come from only a small number of individuals, so we take each piece of feedback seriously and look for ways to grow not only a particular design but our practice as a whole.
The final step in determining priorities is to separate necessities from nice-to-haves. A candid appraisal of the available time and resources helps us decide where to draw that line. Sometimes it makes sense to apply simple, quick changes that fit within the client’s budget even if they’re not urgent. Other times the client may decide to set aside a budget for more extensive revisions, and we address as many updates as fit into the timeline.
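As a rough illustration of that triage (the items, effort estimates, and budget below are all hypothetical), necessities always make the cut, and nice-to-haves fill whatever room remains:

```python
# Hypothetical triage sketch: must-fix items are always included; nice-to-haves
# fill the remaining budget, smallest first, so quick wins aren't crowded out.
revision_items = [
    # (description, must_fix, estimated_hours)
    ("Fix inaccessible color contrast", True, 4),
    ("Streamline repetitive prework", True, 10),
    ("Add optional enrichment videos", False, 16),
    ("Polish activity illustrations", False, 6),
]

BUDGET_HOURS = 24  # invented figure for the example

plan = [item for item in revision_items if item[1]]  # necessities first
remaining = BUDGET_HOURS - sum(hours for _, _, hours in plan)

for item in sorted((i for i in revision_items if not i[1]), key=lambda i: i[2]):
    if item[2] <= remaining:  # nice-to-haves that still fit
        plan.append(item)
        remaining -= item[2]

for desc, must, hours in plan:
    print(f"{'MUST' if must else 'NICE'}  {hours:>2}h  {desc}")
```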
Implementation Questions and Challenges
Now that we have our prioritized list, we do a deeper dive into each point and identify potential changes that would meet the need. To determine which solutions to implement, we consider several factors, including the time and effort involved, any ripple effects across the rest of the learning experience, and the client’s receptiveness to change.
Often, one of our most complicated challenges is keeping the client happy while addressing learner concerns. Concrete feedback enables us to bring the learner’s voice into the conversation. In one recent example, a client believed it was helpful to provide learners with reusable student worksheets in two places: within the course learning journals and in a separate document. But when learner feedback made clear that this redundancy caused confusion, we were able to show the client why it would work better to keep the worksheets in the separate document only.
We, Too, Are Learners
Having direct feedback from learners is a welcome opportunity for us, and we relish the chance to dive into the data and come up with an action plan. Being able to revise existing courses gives us valuable insight for all the other courses we’re building. At Six Red Marbles, we are Always a Student.