What is the evidence base for the STAR (Situation, Task, Action, Result) model?

Learning and Development (L&D) draws on many models, theories and practices in the design, development and delivery of learning solutions. An important question, often overlooked in the use of these models and theories, is where the evidence base has come from: on what research were the models and theories developed? It’s one thing for a model to be designed using sound psychological principles, quite another for it to be based on a person’s or a consultant’s own experience, and something different again for an idea simply to become a model.

The benefit of research-driven models and theories is that we can more directly attribute an intended outcome to the model or theory, because its practical utility has been demonstrated. Through Challenging Frontiers, I want to highlight what good work can look like, and in particular how L&D can become more mindful of evidence-based models, theories and approaches.

The STAR (Situation, Task, Action, Result) model is often used in interviews and for giving performance feedback. First developed by Development Dimensions International (DDI), the model has become very popular and is used by practitioners across the globe. The practical appeal of the model makes a lot of sense: acronyms help people remember key things, and STAR is no different. In interview situations, it can be a highly effective way to:

  • Describe the Situation

  • Explain the Task

  • Share the Action

  • Talk about the Result

The same methodology can also be applied when giving performance feedback. Describing feedback using STAR keeps the focus on evidence and observed behaviour, and it gives the line manager a way to have a difficult conversation with clear framing and clear examples.

For me, the use of a model is strengthened when I know that the model itself was tested using some kind of research-methods approach drawn from psychology and the scientific method. As best I can determine, the STAR model is the smart thinking of consultants from DDI, and it’s clear to see why it’s so popular. Sometimes smart thinking is a great outcome.

When I talk about research methods, there are two key criteria to examine. One is the reliability of the tool: if I use the tool over and over again, will I get similar results on each occasion? The other is the validity of the tool: does it measure what it claims to measure?

From a reliability perspective, anecdotally there seems to be consistency in what the model allows for, but there is a lack of actual evidence to support this. Although we may have heard that recruiters have used the model to help job candidates in a number of sectors and across a variety of job roles, there is no research to confirm it. Similarly, although anecdotally we may have heard that it is an effective model, we have no comparison base. Alternatives like BOOST and STI have been developed, and we don’t know how STAR compares to any of them. More likely, the choice comes down to the practitioner’s discretion, which is laden with bias and prejudice for one model over another.

From a validity perspective, how do we know that STAR is the right model to use in different scenarios? There is a form of validity called ‘face validity’: if a tool looks like it does what it says, we can place a level of faith in it. Again, though, from a development perspective, we don’t have the research to tell us whether other variations of STAR, or other feedback models, could deliver similar results.

The likelihood is that a model like STAR will continue to be used, so we can at least give improved guidance on how to use it.

If it’s being used to support job candidates, ensure that recruiters and hiring managers help candidates understand how to use the STAR model, so there is consistency of approach. When we give people clarity on what to do, it reduces bias and provides clear parameters for better responses.

If it’s being used to give performance feedback, training for managers on the tool is key, particularly if you are adopting it across the organisation. Consistent use of the model means that team members will know how managers deliver performance feedback, which provides a clear base for ongoing performance-based conversations. Again, training on the tool reduces bias, reduces personal commentary, and improves the use of examples in giving clear feedback.

This post isn’t intended to discredit the STAR model; it’s to highlight the importance of research methods in L&D, using STAR as an example of a popular model with little or no evidence underpinning it. This raises important questions in a profession where we are helping people through personal and professional development. People place trust and faith in L&D to use models and theories that can help them develop, and with tools like STAR we can provide that. But if the tool is the product of someone’s smart thinking, rather than a well-developed and researched tool, that brings with it a different set of use cases.

If you want to read the full report we’ve written on this, download your free copy here.
