Introduction

Most learning and development (L&D) practitioners are concerned about how well they understand the impact of learning. Effective learning and development evaluation needs to be strongly linked with identified performance gaps. The L&D strategy will outline the organisation’s evaluation approach and describe how the impact of any interventions will be measured.

This factsheet defines evaluation in an organisational L&D context. It explores typical evaluation methods, from post-training questionnaires to development metrics and quantitative survey methods. It also reinforces why learning must be aligned with business objectives.

The CIPD is at the heart of change happening across L&D, supporting practitioners with insights and resources. Connect with us through our Leading in Learning network.

The quality and effectiveness of learning and development (L&D) activities can be assessed both formally and informally, and should demonstrate alignment with organisational performance. Our Profession Map encourages practitioners to view evaluation in terms of learner engagement, transfer and impact.

Links with learning and development strategy

A learning and development strategy driven by the organisation’s strategic goals and needs is widely recognised as important to business success. To effectively evaluate L&D, it’s first necessary to have clearly identified organisational performance targets and the learning needs that flow from them, and to agree what measures of success will look like. Evaluation covers the impact of learning provision, how that learning is transferred into the workplace, and the engagement of both the employees undertaking L&D activities and wider stakeholders in the process.

Coverage of learning and development evaluation

Our learning cultures research gives advice on evaluating the learning environment across the whole organisation, within particular teams and at an individual level.

Whilst the majority of organisations carry out some evaluation of learning activities, in 2019 our Professionalising learning and development report showed that about a quarter of respondents struggle to understand L&D’s impact. Our 2021 Learning and skills at work report showed that one in four respondents don’t systematically evaluate L&D initiatives.

Evaluation activities can include:

  • Impact – where L&D can work with the organisation to show how the learning interventions have impacted on performance – these can include links to key performance indicators (financial and operational).

  • Transfer – where L&D can work with the organisation to show how any learning undertaken on L&D events has been transferred back into the employee’s role and work area – these can include performance goals and how new skills and knowledge have been used.

  • Engagement – where L&D can demonstrate how stakeholders are engaged with learning. This can be at an organisational level where a positive learning environment is the goal, at team level, or at an individual level (the ‘happy sheet’ is an individual reaction to an individual event).

As L&D practice moves from solely offering formal training to embracing ‘social learning’, where measures such as sharing, impact and kudos are less tangible, measurement can become more challenging. See more in our factsheet on evolving L&D practice, which also covers the need to measure value, not volume, in L&D activities.

The Kirkpatrick model

The seminal model for L&D evaluation, first published in the 1950s by US academic Don Kirkpatrick, remains influential today. However, research conducted by Thalheimer indicates this model was first introduced by Raymond Katzell.

It outlines four levels for evaluating learning or training:

  • Reactions – reaction to a learning intervention that could include ‘liking or feelings for a programme’.
  • Learning – ‘principles, facts etc absorbed’.
  • Behaviour – ‘using learning gained on the job’.
  • Results – ‘increased production, reduced costs, etc’.

This was helpful guidance when launched. However, in the 1980s Alliger and Janak found that the relationships between the levels were weak because each level is not always linked positively to the next.

Various surveys from the Association for Talent Development have found that most attention is focused on evaluation of learning at the reactions level because of the difficulties and time costs of measuring the other three levels. Thalheimer suggests eight recognised levels of learning evaluation, including some listed above, but he argues that some of these are highly ineffective.

Brinkerhoff success case method

A key criticism of Kirkpatrick’s evaluation model is that changes to performance cannot solely be linked to learning. The Brinkerhoff success case method (SCM) addresses this challenge by proposing a wider focus on systems.

Firstly, an SCM evaluation involves finding likely ‘success cases’ where individuals or teams have benefited from the learning. These typically come from a survey, performance reports, organisational data or the ‘information grapevine’. Those representing potential ‘success cases’ are interviewed and ‘screened’ to find out if they genuinely represent verifiable success with corroborating evidence from other parties. Factors that contribute to success beyond the learning intervention are also explored.

Secondly, an SCM evaluation looks at ‘non-success cases’ to discover those who have found little or no value from the learning. Exploring the reasons why can be very illuminating.

The approach asks four questions:

  • How well is an organisation using learning to improve performance?
  • What organisational processes/resources are in place to support performance improvement?
  • What needs to be improved?
  • What organisational barriers stand in the way of performance improvement?

Following analysis, the success and non-success ‘stories’ are shared.

SCM should not be seen as a comprehensive evaluation method because of the nature of the sampling, but it offers a manageable, cost-effective approach to drawing out success insights and areas for improvement.

Weinbauer-Heidel’s levers of transfer effectiveness

Weinbauer-Heidel has researched learning transfer extensively and published her approach, the 12 Levers of Transfer Effectiveness. She recommends asking learners ‘What is the likelihood of you applying this learning?’ and using a ‘Net Transfer Score’ to give insight into the impact of L&D.
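
As a minimal sketch of how such a score might be calculated: the example below assumes the score mirrors the familiar Net Promoter Score arithmetic. The 0–10 scale and the 9–10 / 0–6 cut-offs are assumptions for illustration, not Weinbauer-Heidel’s published specification.

```python
# Illustrative sketch only: one plausible way a 'Net Transfer Score' could be
# computed, assuming it mirrors Net Promoter Score arithmetic. The 0-10 scale
# and the 9-10 / 0-6 cut-offs below are assumptions, not Weinbauer-Heidel's
# published specification.

def net_transfer_score(ratings):
    """ratings: answers (0-10) to 'What is the likelihood of you applying this learning?'"""
    promoters = sum(1 for r in ratings if r >= 9)   # very likely to apply the learning
    detractors = sum(1 for r in ratings if r <= 6)  # unlikely to apply the learning
    return 100 * (promoters - detractors) / len(ratings)  # ranges from -100 to +100

print(net_transfer_score([9, 8, 7, 6, 5, 3]))  # -> roughly -33.3 (one promoter, three detractors)
```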

Thalheimer’s Learning Transfer Evaluation Model

The Learning Transfer Evaluation Model is divided into eight tiers and colour-coded to work as a kind of barometer using a traffic light system: green shows which methods are most useful in validating learning results, red shows those which are inadequate in measuring learning, and yellow shows those in between.

Easterby-Smith model

In the mid-1990s, Easterby-Smith drew together four main strands of purpose for training evaluation:

  • Proving – that the training worked or had measurable impact in itself
  • Controlling – for example, the time needed for training courses, access to costly off-the-job programmes, consistency or compliance requirements
  • Improving – for example, the training, trainers, course content and arrangements etc
  • Reinforcing – using evaluation efforts as a deliberate contribution to the learning process itself.

This model focuses on single training programmes.

Phillips’ return on investment model

Phillips and Phillips built on the Kirkpatrick model by adding return on investment (ROI) as a fifth level. However, much ROI evaluation is carried out after the project and does not build from a baseline. Another problem is that the arithmetic of ROI means that when the small cost of a learning intervention is set against the gains of a much larger project, the resulting percentage can look superficially impressive.
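
To make the arithmetic concrete, here is a minimal sketch using the standard ROI formula (net benefit divided by cost, expressed as a percentage). The figures are hypothetical and chosen only to show how a low-cost intervention credited with a large benefit produces a striking headline number.

```python
# Minimal illustration of the ROI arithmetic; the figures are hypothetical.
# ROI (%) = (benefit attributed to the learning - cost of the learning) / cost * 100

def roi_percent(attributed_benefit, learning_cost):
    return (attributed_benefit - learning_cost) / learning_cost * 100

# A £5,000 workshop credited with £250,000 of a large project's gains gives a
# headline figure of 4,900% - impressive on paper, but only as sound as the
# claim that the whole benefit stems from the learning.
print(roi_percent(250_000, 5_000))  # -> 4900.0
```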

Some commentators ask whether a financial model represents the best way to assess the effectiveness of learning. Does stating an ROI of x% help an organisation address its performance gaps and allow the L&D team to communicate their impact?

The value of using models to approach evaluation

Each evaluation model demonstrates a specific approach, and some were developed to assess the value of individual training programmes rather than to take a holistic view of L&D’s impact on the organisation. Our 2021 Learning and skills at work survey shows that only one in four respondents make changes based on the evaluation feedback they receive.

Many organisations have an established approach for gathering learners’ reactions to interventions. Commonly called a ‘happy sheet’, it captures learner satisfaction with, for example, the facilitator, materials and venue. This is not the approach Katzell and Kirkpatrick advise in their first level of evaluation: they state the first level is the learner’s reaction to the learning itself.

Our Learning and skills at work surveys show that where L&D teams have a clear vision and strategy aligned to the organisation’s performance needs, evaluation tends to be more prevalent and the data is widely used within the organisation. Wider forms of evaluation beyond reaction are also common in organisations where L&D professionals have engaged with senior leaders and there’s collective agreement on the value of learning.

L&D teams can be a credible business partner to the organisation when they take time to use a range of evaluation approaches that are in line with performance data.

The ‘RAM’ approach

Drawing on our research findings, we developed an approach to learning known as RAM (Relevance, Alignment, Measurement) that still has value today. It’s based on the need for:

  • Relevance: How existing or planned learning provision will meet new opportunities and challenges for the business.

  • Alignment: If the L&D strategy takes an integrated, blended approach, it’s critical for L&D practitioners to work with stakeholders to understand their performance needs and how to achieve them. Aligning with broader organisational strategy gives focus, purpose and relevance to L&D.

  • Measurement: L&D teams effectively and consistently measure the impact, engagement and transfer of learning activities as part of the evaluation process. It may be helpful to use a mixture of evaluation methods and broader measures of expected change and improvement such as return on expectation, and to link L&D outcomes to key performance indicators.

The RAM approach focuses on the outcome, rather than the response to a learning event (the focus of the majority of ‘happy sheets’). Our costing and benchmarking L&D factsheet has further detail on measurement.

The 70:20:10 Institute ‘Performance Approach’

The 70:20:10 Institute suggests that L&D practitioners take on ‘performance roles’. It explores the roles of ‘performance detective’ and ‘performance tracker’: the detective’s remit is to find out where data exists in an organisation, while the tracker provides insights from that data so that L&D provision meets stakeholder needs.

The focus on learning outcomes

An immediately obvious implication of L&D evaluation research is the need to focus on learning outcomes - broadly defined as a permanent or long-lasting change in knowledge, skills and attitudes (an output) - rather than on the training itself (an input).

The ’talent analytics’ perspective

‘Talent analytics’ is about ‘mining’ a whole range of data streams to gain insight into how people learn and develop. Looking at the way we develop talent and provide future capability is a challenging area. Talent analytics provides opportunities for real-time evaluation close to the operational pulse of the organisation and is therefore more likely to be useful as a decision tool. Essentially, it’s an evidence-based approach to demonstrating value. Read more in our Talent analytics and big data research report.
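
As a purely illustrative sketch of what ‘mining’ these data streams can look like in practice, the example below joins learning records to a performance indicator. All names and figures are hypothetical, invented only for illustration.

```python
# Illustrative sketch only: joining learning records to a performance KPI.
# All column names and figures are hypothetical.
import pandas as pd

learning = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "completed_coaching_module": [True, True, False, False],
})
performance = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "customer_satisfaction": [4.6, 4.2, 3.9, 4.0],  # hypothetical KPI
})

merged = learning.merge(performance, on="employee_id")
# Compare the KPI for those who did and did not complete the learning.
print(merged.groupby("completed_coaching_module")["customer_satisfaction"].mean())
```

A difference in averages does not by itself prove the learning caused the change - as the Brinkerhoff critique above makes clear - but joining the two data sets makes the conversation with stakeholders evidence-based rather than anecdotal.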

Advice for L&D practitioners

Measuring the impact, transfer and engagement of L&D activities can’t be done just by an end of course questionnaire, knowledge quiz or post-training survey. L&D practitioners must work closely with stakeholders to agree success criteria for the whole L&D offering. L&D practitioners also need to work with the organisation to prioritise the available resources.

L&D practitioners need to question the value of traditional happy sheets or quizzes, along with the standard default questions they contain. Is it the learner’s responsibility to ‘rate’ the facilitator or materials? To what degree does a value on a Likert scale capture the learner’s reaction to the learning, their engagement or, arguably the most important element, the impact on their performance? Brinkerhoff uses an analogy that measuring satisfaction with a learning event is akin to predicting the satisfaction and longevity of a marriage based on the quality of the wedding reception.

Books and reports

BEEVERS, K., REA, A. and HAYDEN, D. (2019) Learning and development practice in the workplace. 4th ed. London: CIPD and Kogan Page.

LANCASTER, A. (2019) Driving performance through learning. London: Kogan Page.

MATTHEWS, P. (2018) Learning transfer at work: how to ensure training performance. Milton Keynes: Three Faces Publishing.

PARRY-SLATER, M. (2020) The learning and development handbook. London: Kogan Page.

PHILLIPS, J.J. and PHILLIPS, P. (2016) Handbook of training evaluation and measurement methods. 4th ed. New York: Routledge.

Visit the CIPD and Kogan Page Bookshop to see all our priced publications currently in print.

Journal articles

BASKA, M. (2019) Majority of L&D professionals feel ‘growing pressure’ to measure impact. People Management (online). 14 February.

DERVEN, M. (2012) Building a strategic approach to learning evaluation. T+D. Vol 66, No 11, November. pp54-57.

DIAMANTIDIS, A.D. and CHATZOGLOU, P.D. (2014) Employee post-training behaviour and performance: evaluating the results of the training process. International Journal of Training and Development. Vol 18, No 3, September. pp149-170.

FARAGHER, J. (2020) Why is calculating the ROI of L&D like finding a needle in a haystack? People Management (online). 22 October.

MATTOX, J.R. (2012) Measuring the effectiveness of informal learning methodologies. T+D. Vol 66, No 2, February. pp48-53.

CIPD members can use our online journals to find articles from over 300 journal titles relevant to HR.

Members and People Management subscribers can see articles on the People Management website.

This factsheet was last updated by David Hayden.

David Hayden: Digital Learning Portfolio Manager, L&D

David is part of the CIPD’s Learning Development team responsible for the digital learning portfolio - he leads the design and delivery of a number of L&D-focused products and keeps his practice up to date by facilitating online events for a range of clients. David began his L&D career after taking responsibility for three Youth Trainees in 1988 as an Operations Manager, and has since gone on to work in, and head up, a number of corporate L&D teams and HR functions in distribution, retail, financial and public sector organisations. He completed his first master’s degree specialising in CPD and has just completed his second in Online and Distance Education. David also has a background in ‘lean’ and has worked as a Lean Engineer in a number of manufacturing and food organisations. Passionate about learning and exploiting all aspects of CPD, David’s style is participative and inclusive. As well as authoring the CIPD L&D factsheet series, he co-authored the 4th edition of 'Learning and Development Practice in the Workplace' with Kathy Beevers and Andrew Rea.
