Editorial Type: research-article | Online Publication Date: 30 Nov 2023

USING MULTIPLE METHODOLOGIES TO EVALUATE GERIATRIC WORKFORCE DEVELOPMENT PROGRAMS: A FALL PREVENTION EXAMPLE

SARAH A. MARRS, PhD,
CHRISTINE J. JENSEN, PhD, and
CONSTANCE L. COOGLE, PhD
Article Category: Research Article
Page Range: 81 – 93
DOI: 10.56811/PFI-23-0007

This paper highlights an effective model for evaluating geriatric workforce enhancement professional development programs, using multiple methodologies (i.e., quantitative and qualitative). An adaptation of Kirkpatrick's (1994; 2010) model of evaluation was implemented to assess the evidence-based fall prevention program delivered at a Program of All-Inclusive Care for the Elderly (PACE) site. The purpose of this paper is to describe the program's design, data collection activities and instruments, and findings, and to suggest strategies for successfully replicating the approach.

INTRODUCTION

Professional development programs are defined as those that offer additional training and skill-building to a targeted group of persons, employees, or staff. Often, professional development has been associated with educators. Here, the concepts of professional development were applied to those who serve older adults in the long-term care setting. Staff in these settings do not routinely arrive ready-made or specifically trained to support older adults. Due to the growth in the older adult population, there is greater demand both for long-term care facilities and for staff who can work effectively in these settings. Older adults in these settings may be at a higher risk of illness or falls and may be experiencing complications from multiple medicines and comorbid conditions. With increased age comes an increased risk of developing Alzheimer's disease or another type of dementia. However, with increased age also often comes the ability to impart wisdom and experiences, as well as the opportunity to demonstrate resilience.

There are numerous ways to enhance the quality of life of older adults, one critical way being to provide professional development to those who care for them. Staff who receive ongoing professional development report they are more confident in their ability to carry out their work, experience greater job satisfaction, share their skills with other staff, and are less likely to transition out of their work setting (Krueger et al., 2007; Gilster et al., 2018). Unfortunately, limitations such as time and resources prevent facilities from being able to provide long-term care staff with professional development opportunities aimed at improving the care of older adults. As the population of older adults continues to grow, it is necessary to invest in efforts to not only develop and deliver these types of programs, but also to thoughtfully evaluate them in order to fully appreciate their impact on the staff and residents in long-term care and to inform program modifications to better meet the diverse needs of staff.

PURPOSE OF EVALUATION

The purpose of evaluation is to make a judgment about the value, merit, or worth of a program, curriculum, or policy (Fitzpatrick et al., 2011; Scriven, 1967; Scriven, 1996); it can help to determine whether a program is worth being made available to staff and identify the benefits of offering the program. Evaluation is conducted using systematic inquiry to arrive at a determination of worth by means of formative or summative evaluation activities or a combination of both. Summative evaluation activities take place after a program has concluded and are used to determine whether a program achieved its stated goals and objectives (Scriven, 1967; Scriven, 1996). For example, at the conclusion of a program, participants may be asked to complete a survey where they provide their overall perceptions of the program, reflect on whether they learned something valuable, and comment on the strengths and weaknesses of the program. This information can then be used to make improvements to the program for future iterations or to decide whether the program should continue or be terminated.

Formative evaluation activities are conducted while a program is ongoing and inform immediate program improvement (Scriven, 1967; Scriven, 1996). This can be particularly useful for programs that take place over longer periods of time. For instance, participants may be asked to complete an evaluation after each session of a multi-week program, the results of which can then be used to inform subsequent sessions and make improvements before the program comes to an end.

EVALUATING PROFESSIONAL DEVELOPMENT PROGRAMS

According to Kirkpatrick's (1994; 2010) model, evaluations of training programs should assess impact at four levels: reactions and satisfaction, learning, behavior, and organizational results. At level one, the aim is to capture how program participants reacted to and how satisfied they were with the program; this level is fairly straightforward to evaluate, and the information can be gathered directly from the people completing a program. The next level, level two, is learning. Here, evaluators should measure the degree to which participants gained new knowledge from the training program. Evaluating learning is slightly more difficult than evaluating reactions to and satisfaction with a program. First, reaching valid conclusions about changes in learning requires evaluators to have some skills in measurement and test construction to ensure the tools used to measure knowledge gains are appropriate. Second, it is difficult to capture how sustainable new knowledge is, and even more difficult to capture how new knowledge is being applied in occupational practice, which is the focus of measurement at level three: behavior. When evaluating behavioral outcomes, the goal is to uncover whether people have made changes to their daily practice as a result of the training and, if so, whether the change results in improvements in performing a task. Unlike both learning and reactions/satisfaction, evaluating behavior requires gathering follow-up data from participants after a program has ended, a notoriously difficult undertaking. Additionally, and related to the sustainability of new knowledge, behaviors may change immediately following a training program but taper off as time passes. Lastly, at level four, the goal is to evaluate whether a training program yields outcomes at the organizational level; this can be the most challenging level at which to conduct evaluation. Not only is it challenging to link one specific training program to an organization's overall function, but obtaining the data for doing so is often a major hurdle to clear. For example, many access and privacy issues arise when evaluators attempt to obtain data from organizations, particularly in health care settings and when dealing with patient data.

Given the difficulties inherent in evaluating each of Kirkpatrick's suggested levels, it is not surprising that evaluation of training programs typically does not occur at all four levels (Saks & Burke, 2012). Even in instances where evaluation does take place at all four levels, often the full story of training programs is not known; while the data collected to inform outcomes at each of the four levels tell what happened, they do not necessarily tell why it happened. For example, evaluation results might show, at level one (reactions and satisfaction), that program participants did not find a training program to be enjoyable or worthwhile and, at level two, that participants did not learn anything. Not only is this disappointing news for a program developer, but it gives no guidance on how to improve the program.

In order to fully understand a program, it is recommended to evaluate all aspects of it through exploration of both the “whats” and the “whys” of program outcomes. Additionally, and as previously mentioned, it is important to acknowledge the challenges to conducting evaluation at the patient outcome level (level four). By focusing on conducting thoughtful evaluation at levels one through three, one can gain a better understanding of why they might (or might not) expect an impact at level four. When the focus is on chasing patient outcomes data, the opportunity to capture all other “whats” that occurred as a result of a training program and, more importantly, why they occurred, is missed. What follows is an example of an evaluation of an evidence-based falls prevention program. Here, the various components of the overall evaluation and the evaluation results are outlined. Based on this example, an adapted version of Kirkpatrick's model specific for evaluating geriatric workforce development training programs is proposed.

A MULTIPLE METHODS EVALUATION OF A FALL PREVENTION AND MANAGEMENT PROGRAM

The authors highlight the evaluation of an evidence-based program for the prevention and management of falls among older adult populations. Perhaps the best description of an evidence-based practice (EBP) emphasizes the use of clinical reasoning and the integration of information from four sources: research evidence, clinical expertise, the client's values and circumstances, and the practice context (Hoffman et al., 2017; Sackett, 2005). The curriculum developed for this training was guided by the American Geriatrics Society and British Geriatrics Society clinical practice guideline (Panel on Prevention of Falls in Older Persons, American Geriatrics Society and British Geriatrics Society, 2011) and the U.S. Preventive Services Task Force recommendations for falls prevention and management (Michael et al., 2010; U.S. Preventive Services Task Force, 2018). The target audience is interprofessional teams of clinicians who work with an older adult population, such as professional health care providers in acute care hospitals, long-term care facilities, or Programs of All-Inclusive Care for the Elderly (PACE; National PACE Association, 2021). Because older adults often have complex health care needs, interprofessional collaborative care has been recognized as especially important for this patient population (Goldberg et al., 2012). Accordingly, the curriculum was based on the seminal research findings of Tinetti (1986) and Tinetti and colleagues (1994), and the interprofessional team training was designed to align new clinical practices (e.g., use of multifactorial risk assessments) with the falls prevention evidence base to improve provider practices and, potentially, patient outcomes. As such, the goal of this EBP falls program was to provide teams of clinicians with increased knowledge about the causes of falls and how to prevent them, and a greater understanding of the role that other team members (disciplines) have in preventing and managing falls. Time is also devoted to exploring how the interprofessional team can work effectively when caring for an older adult.

The EBP falls content is delivered by an interprofessional group of clinical and academic faculty who have expertise in geriatric care and geriatric workforce development. The curriculum was refined through multiple iterations of implementation, applying the Institute for Healthcare Improvement's Plan, Do, Study, Act (PDSA) model of continuous improvement (Institute for Healthcare Improvement, 2003; Langley et al., 2009; Guerrero et al., 2018). The training consists of two-hour, face-to-face sessions delivered over six weeks in the participating clinical care teams' facility. Case studies and clinical skills discussions augment didactic content, and up to twelve hours of continuing medical education credits are offered for the completion of reading assignments. A full list of topics and learning objectives for each session can be found in Table 1.

TABLE 1 Overview of the Falls Prevention and Management Program

The challenge was to focus on the most salient knowledge relevant to each profession that should be commonly known across the professions. Essentially, the faculty needed to always consider, “What kind of shared knowledge should every member of the team have?” The goal was to make sure that anyone from any profession would be exposed to the information most relevant to the other professions. To help tell the story of the EBP falls prevention program, a multifaceted evaluation, employing both quantitative and qualitative forms of data collection through a variety of formative and summative evaluation activities, was designed and implemented. Following Kirkpatrick's guidelines, the evaluation design included gathering outcomes data at each of the four levels: reaction/satisfaction, learning, behavior, and organizational outcomes. Kirkpatrick's (1994) model for evaluation was expanded to include collection of qualitative data to triangulate with data collected at each of the four levels in order to more comprehensively evaluate the effectiveness of the EBP falls program. This model of evaluation (see Figure 1) enabled evaluators to capture both the “whats” and the “whys” that occurred as a result of training programs.

FIGURE 1 Multiple Methods Evaluation Model for Professional Development Training Programs

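To make the adapted model concrete during evaluation planning, it can help to record which quantitative and qualitative sources feed each Kirkpatrick level. The sketch below is a minimal, hypothetical rendering in Python; the instrument names follow this evaluation's design, but the data structure itself is illustrative only, not software used in the study.

```python
# Illustrative sketch: the adapted Kirkpatrick model as a simple data structure
# pairing each level with quantitative and qualitative data sources.
# The class and field names are hypothetical; instrument names follow this paper.
from dataclasses import dataclass, field

@dataclass
class EvaluationLevel:
    level: int
    outcome: str
    quantitative_sources: list = field(default_factory=list)
    qualitative_sources: list = field(default_factory=list)

ADAPTED_KIRKPATRICK = [
    EvaluationLevel(1, "Reactions and satisfaction",
                    ["Weekly session evaluations", "Postprogram survey"],
                    ["Postprogram focus group"]),
    EvaluationLevel(2, "Learning",
                    ["Pre/post knowledge inventory"],
                    ["Focus group reflections on learning"]),
    EvaluationLevel(3, "Behavior (practice change)",
                    ["Retrospective medical record review"],
                    ["Focus group accounts of new protocols"]),
    EvaluationLevel(4, "Organizational results",
                    ["Facility falls rate data (when obtainable)"],
                    ["Focus group reports of system-level changes"]),
]

for lvl in ADAPTED_KIRKPATRICK:
    print(f"Level {lvl.level} ({lvl.outcome}): "
          f"{len(lvl.quantitative_sources)} quantitative, "
          f"{len(lvl.qualitative_sources)} qualitative source(s)")
```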

FORMATIVE EVALUATION ACTIVITIES

Formative evaluation activities were incorporated into the evaluation design for two reasons. First, this allowed the gathering of session-specific feedback that could inform changes for future program iterations. Second, if there were issues to be addressed while the program was ongoing, there was an opportunity to correct them and make improvements for subsequent sessions. To gather this information, participants were asked to complete a session evaluation each week of the program. On these session evaluations, participants provided feedback about their reaction to and satisfaction with the session overall (e.g., effectiveness of speakers, value of material covered) and rated their perceived self-efficacy (Ajzen, 2002; Bandura, 2006) for meeting the learning objectives after the training and, retrospectively, before the training. Using a retrospective pretest design allowed participants to reflect on each topic covered, thinking about how much they knew about the topic after the training and directly comparing it to what they knew before the training. A sample session evaluation is provided in Table 2.

TABLE 2 Sample Session Evaluation, a Formative Evaluation Activity
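As a rough illustration of how the retrospective pretest ratings from such a session evaluation might be summarized, the sketch below computes mean "before" and "after" self-efficacy ratings and their change per learning objective. The column names, the 1-to-5 scale, and the sample values are assumptions for illustration, not the study's actual instrument coding or data.

```python
# Minimal sketch: summarizing retrospective pretest self-efficacy ratings.
# Column names, the 1-5 scale, and the sample values are illustrative assumptions.
import pandas as pd

ratings = pd.DataFrame({
    "objective": ["Identify fall risk factors", "Identify fall risk factors",
                  "Conduct multifactorial assessment", "Conduct multifactorial assessment"],
    "before": [2, 3, 1, 2],   # retrospective "before the training" rating (1-5)
    "after":  [4, 4, 4, 5],   # "after the training" rating (1-5)
})

summary = (ratings
           .groupby("objective")[["before", "after"]]
           .mean()
           .assign(change=lambda d: d["after"] - d["before"]))
print(summary)
```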

SUMMATIVE EVALUATION ACTIVITIES

While the formative evaluation activities helped to ensure that the learning objectives for each session were met and that participants were satisfied with the program sessions, the summative evaluation activities were developed to evaluate the degree to which the EBP falls program achieved its overall goals. Summative evaluation activities employed in this example included a preprogram and postprogram knowledge inventory (learning) consisting of multiple-choice and true-false items, a postprogram survey (see Table 3), a semi-structured focus group protocol (see Appendix), and a medical record review to document practice change among the health care providers trained (behavior).

TABLE 3 Postprogram Survey for the Falls Program, a Summative Evaluation Activity

The falls knowledge questions were drawn from the didactic content and were refined over successive iterations through psychometric item analyses. Items that were judged too easy (i.e., most participants answered them correctly) and those judged too difficult (i.e., most participants answered them incorrectly) were systematically deleted over the course of eight program years. Ultimately, the knowledge questions included a total of 48 items and constituted a streamlined version that could be administered at one time as the preprogram knowledge inventory without being overly burdensome. For ease of administration, the postprogram falls knowledge inventory is completed in stages, with postinventory questions being answered after the session in which they are covered.
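A classical item-difficulty screen of the kind described above might look like the following sketch, which flags items that nearly everyone answers correctly or incorrectly. The response matrix and the cut points are hypothetical; the paper does not report the exact thresholds used.

```python
# Sketch of a classical item-difficulty screen: flag items that nearly everyone
# answers correctly or incorrectly. Data and thresholds are hypothetical.
import numpy as np

# responses[i, j] = 1 if participant i answered item j correctly, else 0
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 1],
])

difficulty = responses.mean(axis=0)  # proportion correct per item

TOO_EASY, TOO_HARD = 0.95, 0.20      # assumed cut points for illustration
for j, p in enumerate(difficulty):
    if p >= TOO_EASY:
        print(f"Item {j}: p = {p:.2f} -> candidate for deletion (too easy)")
    elif p <= TOO_HARD:
        print(f"Item {j}: p = {p:.2f} -> candidate for deletion (too difficult)")
    else:
        print(f"Item {j}: p = {p:.2f} -> retain")
```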

After the sixth session, which is the last session when new content is introduced, participants completed the postprogram survey. The postprogram survey was designed to gauge participants' overall comfort with the material covered in the program and the degree to which their individual daily practice was affected.

The seventh session of the EBP falls program was devoted to a semi-structured focus group with participants. During the focus group, participants reflected on their overall perceptions of the program and what they had learned and provided comments about the program's strengths and suggestions for improvements. The focus group also served as a time for participants to brainstorm ways they could incorporate what they learned into their regular clinical practice and share their new knowledge with other staff who were not able to attend the training. During the focus group, program faculty took detailed notes; the session was also voice recorded and transcribed verbatim. Together, faculty notes and the focus group transcription were analyzed using open coding to uncover the predominant themes.

A retrospective review of medical records was conducted for patients who had experienced one or more falls during the three months before the EBP falls program and then tracked for falls three and six months after the program. Following steps consistent with those proposed by Hill and colleagues (1997), consensual qualitative research (CQR) was employed for abstracting information contained in medical case notes. Each case note was simultaneously reviewed by at least two data abstractors and an arbitrator to reach consensus on what should be recorded. The assumption is that multiple perspectives are more likely to be free from researcher bias (Marshall & Rossman, 1989).

Using this approach, reliability is measured by comparison with the arbitrator to ensure accuracy in all aspects of data collection. If an abstractor falls below 90% agreement with the arbitrator, the abstractor is given additional training until 90% agreement is achieved. Indicators of practice change may derive from any of the following: 1) increases in the number of disciplines involved in falls management; 2) modification/enhancement of care plans in light of the evidence base; and 3) improved documentation of falls reporting, screening, assessment, or intervention.
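The 90% criterion described above amounts to a simple percent-agreement check between each abstractor and the arbitrator. The sketch below shows one way such a check could be computed; the field values are hypothetical examples, not actual chart-review codes.

```python
# Sketch: percent agreement between a data abstractor and the arbitrator on a set
# of chart-review fields; 90% is the retraining criterion described in the text.
def percent_agreement(abstractor: list, arbitrator: list) -> float:
    matches = sum(a == b for a, b in zip(abstractor, arbitrator))
    return matches / len(arbitrator)

# Hypothetical codes for four abstracted fields from one case note
abstractor_codes = ["fall_reported", "risk_assessed", "no_intervention", "care_plan_updated"]
arbitrator_codes = ["fall_reported", "risk_assessed", "intervention",    "care_plan_updated"]

agreement = percent_agreement(abstractor_codes, arbitrator_codes)
if agreement < 0.90:
    print(f"Agreement {agreement:.0%} is below the criterion: additional training required")
else:
    print(f"Agreement {agreement:.0%} meets the 90% criterion")
```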

When determining the precise variables that would be collected as part of the chart reviews, considerations were grounded in the principles of real-world evaluation (Bamberger et al., 2012) and utilization-focused evaluation (Patton, 2008). The task was to decide which variables were essential and which ones could be eliminated without reducing the utility of the data collected. The chart reviewers sought to balance the breadth and depth of the data collected while also making judicious use of limited resources (Patton, 2008). Beyond the three general domains of practice change (assessment, intervention, and interprofessional teamwork), the database was compiled with concrete, measurable elements. Table 4 shows the major variables that were documented and the process for balancing breadth and depth. In addition to the breadth provided by the variables selected, breadth was built into the protocol by collecting these variables across all disciplines trained. By looking at all disciplines, the reviewers could ascertain which ones exhibited the largest and smallest practice changes as a consequence of training and investigate systemic/institutional factors or limitations contributing to these results. For example, one can consider whether a particular discipline had the latitude to implement the EBP practice changes advocated in the training. If a discipline had the latitude but did not demonstrate practice change, one can then consider whether any underlying barriers were operating.

TABLE 4 Medical Record Review Evidence Categories

Checklist items for the variable indicators were derived from the training curriculum and signify where the most depth was built in. Ultimately, indicators were collapsed for practice change reporting purposes. Although this amounts to surface analyses, one can still look more in depth and examine the extent to which multifactorial risk assessments were conducted. This also retains the ability to develop higher order variables by gauging level of compliance with the evidence base as a saturation measure of EBP behavioral translation.

SUMMATIVE EVALUATION RESULTS: A CASE EXAMPLE

The Virginia Commonwealth University Institutional Review Board approved the training and evaluation protocol by expedited review (IRB no. HM14409). For the review of medical records, a Health Insurance Portability and Accountability Act (HIPAA) waiver of patient authorization was requested and approved.

The evaluation described here is of an iteration of the EBP falls program offered for two cohorts of clinical staff at two different PACE sites within a single health system. Training was delivered in face-to-face sessions at one practice site and streamed via live videoconferencing to the other site. A total of 26 participants completed the program. Results showed statistically significant gains in knowledge from pretraining to posttraining for all sessions. A closer examination of individual items revealed that certain content areas, specifically interprofessional teamwork and care plan development, showed improvement that did not reach statistical significance. Since these are particular strengths of PACE teams in general, a ceiling effect precluded robust gains.
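Pre/post knowledge gains of this kind are commonly tested with a paired comparison of each participant's pretraining and posttraining scores. The sketch below illustrates one such analysis using a paired t-test on made-up scores; these are not the study's data, and the authors' exact statistical procedure may have differed.

```python
# Sketch: paired comparison of pretraining vs. posttraining knowledge scores.
# Scores are invented for illustration (out of a 48-item inventory); this is not
# the study's data and may not match its exact statistical procedure.
from scipy import stats

pre_scores  = [28, 31, 25, 30, 27, 33, 29, 26]
post_scores = [35, 36, 30, 38, 33, 40, 34, 31]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```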

Focus group findings revealed unintended system changes as a consequence of the training. A site-specific falls prevention program was developed; this program requires physicians to refer all falls, with or without injury, that were not a result of equipment to the clinic for falls screening. There were also improvements made to the falls reporting procedure. The PACE site modified its mandated falls reporting process to provide earlier (within 24 hours of a fall incident) notification to rehabilitation specialists, who then initiate the falls assessment protocol. Self-efficacy improvements were documented related to the interdisciplinary approach to risk assessment, alterations in the decision-making process related to care plan development, and modifications to the retrospective analysis of events when PACE participants fall (e.g., a more structured postfall huddle). In general, the focus group results typically point to the implementation of new approaches, tools, and measures (to the extent that institutional protocols are flexible) and increased satisfaction with improved interventions.

Analysis of the chart audit data allowed for close examination of practice changes that occurred after the EBP falls training. Practice change was calculated using the following formula:
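(The expression below is a hedged sketch of a standard percentage-change calculation consistent with the results reported next; the authors' exact formula may differ. Here \(N_{\text{pre}}\) and \(N_{\text{post}}\) denote the counts of documented practice-change indicators, such as postfall risk assessments, in the baseline and posttraining chart-review periods.)

\[
\text{Practice change (\%)} = \frac{N_{\text{post}} - N_{\text{pre}}}{N_{\text{pre}}} \times 100
\]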

Statistically significant practice changes showed that physical therapists increased documentation of postfall risk assessments by a third from baseline to the three-month posttraining period, and this improvement was maintained through the six-month posttraining period. Documentation of the causes and circumstances of falls had doubled three months after the training, and the increase from baseline to the six-month posttraining period was maintained, although the gain had dropped to only a third greater than baseline (for more details on conducting and analyzing the chart audit reviews, see Owens et al., 2018 and Wheeler et al., 2018).

DISCUSSION

In order to comprehensively evaluate the EBP falls professional development training program, the adapted Kirkpatrick model of training evaluation was implemented (see Figure 1). Using this model allows triangulation of quantitative findings at each of Kirkpatrick's four levels with more in-depth qualitative findings to capture a complete picture of program implementation. In this EBP falls prevention example, outcomes were evaluated at Level 1, reactions and satisfaction, through both weekly session evaluations and the postprogram focus group. Participants reported being very satisfied with the program and enjoyed the opportunity to learn from program faculty and to learn interactively with each other. Level 2, or learning outcomes, was evaluated through the preknowledge and postknowledge inventory; results show substantial knowledge gains as a result of the program. Through the chart audit reviews, outcomes related to behavior, Level 3, were examined, which in this case was practice change. Significant changes in practice occurred only for physical therapists at this site. However, given what was learned in the focus group about newly implemented and improved protocols around falls, in which rehabilitation team members are notified more quickly and then begin a deeper assessment, this finding makes sense. This finding also highlights the importance of collecting why something happened and not solely relying on what happened; had a focus group not been conducted, there would be no explanation for why only physical therapists' practice changes were showing up in the chart audits. Finally, at Level 4, the evaluation plan consistently included a tool for assessing organizational change in falls rates. While this has typically been successful, the team has faced considerable challenges in obtaining any postprogram falls rate data. As previously mentioned, this is one of the most difficult levels at which to conduct evaluation, particularly in health care settings. Nevertheless, focus group data provided some insight into the potential for organizational change to occur.

The data gathered at levels one through four tend to be quantitative. In many instances, qualitative data can be more illustrative of the efficacy of a program or curriculum than quantitative data, with the strongest evidence coming from triangulating the two types of data. This is especially true for evaluating professional development programs, where sample sizes may be smaller and retention is challenging. Smaller sample sizes make it difficult to detect statistically significant findings even when a real difference does exist, but qualitative data can reinforce trends in the quantitative data and highlight potential reasons for those trends. Qualitative data are also very effective for uncovering specific content areas where a program is particularly effective, as well as areas where improvements could be made.

Since clinician charting may not always fully reflect the practices that were actually implemented, triangulation of multiple types of data becomes especially important for trying to understand phenomena such as change in practice. For instance, the chart review finding that documentation of fall risk assessments increased was supported by the focus group indication that self-efficacy levels regarding the interdisciplinary approach to risk assessment improved. Presumably, the improvement in risk assessments resulted from implementation of the new approaches, tools, and measures learned about during the focus group. The practice change data showing an increase in documentation of the causes and circumstances of falls were supported when the qualitative data revealed that modifications had been made to the retrospective analysis of events when PACE participants fall (e.g., a more structured postfall huddle). The focus group data also aided in understanding findings related to knowledge gains. For example, although no knowledge gains were detected related to care planning, the qualitative data did show that alterations had been made in the decision-making process related to care plan development. Similarly, while the Team Training module was not as valuable for this group of learners as it would be at a clinical site that is less advanced with respect to these skills, the emphasis on multifactorial interventions (and the attendant implications for interprofessional care) certainly influenced the resulting team approach to fall prevention planning. This was evident in remarks about how the identification of fall risk prior to the training was not coordinated as a team function, but rather was completely individualized, informal, and lacked uniformity.

CHALLENGES AND STRATEGIES FOR FOSTERING SUCCESS

Conducting an evaluation of a professional development program using multiple methodologies presents both challenges and opportunities. The biggest challenge is the amount of time that must be devoted to these types of evaluations. Using multiple methods to collect data for a program evaluation requires the same level of thoughtfulness and rigor in planning as using a single methodology (Greene et al., 2001). First, the evaluation activities need to be thoughtfully designed and in alignment with each of Kirkpatrick's levels as well as with the program curriculum. Second, implementing the evaluation activities as the program unfolds also takes time and careful management of the various data sources being tracked to various objectives. Finally, analyzing the multiple sources of collected data can be very time consuming; this is particularly true of qualitative data.

From the participant perspective, being able to participate fully in these types of training programs requires time away from work; it can be difficult to get all team members present at each of the training sessions. Evaluations do not take place in a lab, and evaluators often lack control over the environment and the participants in it. If staff are presented with an emergency in their facility, introducing the next topic or focus group may have to wait for a time when the staff are able to pay attention to the request. Moreover, it can be challenging to achieve a high response rate on each of the different types of surveys and questionnaires used to collect evaluation data. Collecting data at several time points can be burdensome for participants and requires them to take even more time away from their jobs. This is why it is critical that evaluators take the time to carefully and thoughtfully plan data collection activities and why it is important to collect only data that will be used. Further, program coordinators can help convey to participants the importance of evaluation and of their input, even when providing that input means completing several tools or contributing to focus group discussions that take time.

For many of the programs that evaluators work with, particularly professional development programs, buy-in of stakeholders (e.g., long-term care administrators, staff, residents, family of residents, and funding agencies) is a critical component to ensure not only a successful evaluation effort but programmatic success, as well. This is why involving stakeholders in designing aspects of program curricula and evaluation methodologies can be helpful and fostering these relationships should begin early on, ideally when an evaluator first becomes involved with a project. This also provides the evaluator with time to become familiar with the content of a program since evaluators are often not content experts in the fields where they conduct evaluation.

In evaluation, the stakeholders may have far more content expertise and knowledge of program and participant nuances than an evaluator, especially an external one, may possess. In this professional development program, stakeholders were engaged early in program planning to ensure that training goals aligned with the quality improvement aspirations of the health practice itself. It was discovered that the best way to balance breadth and depth is to remain ever mindful of the true purpose of the evaluation and to satisfy the information needs of the stakeholders (i.e., the training site administrators and quality assurance managers). This is where the perspectives of program participants and informed program staff converge to inform immediate program refinement throughout implementation through use of the PDSA approach (Institute for Healthcare Improvement, 2003; Langley et al., 2009). Recall that participants are asked to provide feedback following each program session. The evaluators review the participant feedback with the program staff each week and use the information to guide program improvement. This is the scientific method applied to action-oriented learning.

The PDSA process evaluation activities used by this team were grounded in a utilization-focused evaluation (UFE) approach (Patton, 2008) that facilitated the engagement of stakeholders with the intent to provide results that would be of most value to them. In this way, the evaluation was done for and with specific intended primary users, in this case the stakeholders. As a consequence, the evaluation team established a working relationship with project partners, whose input informed evaluation choices and who were critical to the facilitation of data collection. Because of this reliance on partnering organizations to facilitate data collection, the approach also involved elements of collaborative evaluation (O'Sullivan, 2012).

CONCLUSION

The intent to provide interprofessional training on fall assessment from the perspective of various disciplines produced mixed results. The training described here is comparable in several respects to a mixed methods study in Canada that tested whether an EBP education program on fall prevention for health professionals was acceptable to the target audience and whether it had an impact on learning and on practice (Scott et al., 2011). The present training differed in that the Canadian study employed a two-day workshop, while this EBP intervention was much more intensive. Also, unlike the Canadian study, the current study included the collection of baseline data, better linking practice results to the educational intervention. As in the Canadian study, despite many successes there were opportunities for improvement. For example, emphasis on the approaches, tools, and measures that have a demonstrated evidence base was resisted to some extent. Practicalities related to reimbursement parameters pose obstacles to the incorporation of additional assessment measures. However, the results suggest there are aspects of the training that can be sensibly introduced to enhance the assessment of fall risk and the implementation of interventions to prevent falls. In addition, feedback is typically received from health care providers about the overriding importance of relying on one's clinical judgment when it contradicts formal assessments. This has implications for a developing literature base that cautions against inflexible adherence to clinical practice guidelines in geriatric care (Diachun et al., 2012; Yoshikawa, 2012), and the authors remain mindful of that.

Overall, the findings revealed that participants in the EBP falls prevention program were very satisfied with the professional development they received. Participants also gained some new knowledge to apply in their practice and did display some behavioral changes in their daily practice as a result. The qualitative data provided a better understanding of participants' perceptions of the strengths and weaknesses of the EBP falls program as well as specific challenges that get in the way of implementing best practices to prevent falls while on the job. Not only is this informative from a programmatic efficacy standpoint, but it is informative as the field constantly strives to create ways to improve patient care and the patient experience in settings that are becoming increasingly more patient-centered.

The authors thank members of the VGEC Plenary group who substantially contributed to the design, development, and delivery of the professional development program. The authors especially thank Myra G. Owens, PhD, for her persistent leadership on the development and implementation of the electronic medical record review.

References

  • Ajzen, I. (2002). Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior. Journal of Applied Social Psychology, 32(4), 665–683.
  • Bamberger, M., Rugh, J., & Mabry, L. (2012). Real world evaluation: A condensed summary overview (2nd ed.). Sage Publishing.
  • Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (1st ed., pp. 307–337). Information Age Publishing.
  • Diachun, L. L., Charise, A., & Lingard, L. (2012). Old news: Why the 90-year crisis in medical elder care? Journal of the American Geriatrics Society, 60, 1357–1360.
  • Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Pearson.
  • Gilster, S. D., Boltz, M., & Dalessandro, J. L. (2018). Long-term care workforce issues: Practice principles for quality dementia care. The Gerontologist, 58, S103–S113. https://doi.org/10.1093/geront/gnx174
  • Goldberg, L. R., Koontz, J. S., Rogers, N., & Brickell, J. (2012). Considering accreditation in gerontology: The importance of interprofessional collaborative competencies to ensure quality health care for older adults. Gerontology & Geriatrics Education, 33(1), 95–110.
  • Greene, J. C., Benjamin, L., & Goodyear, L. (2001). The merits of mixing methods in evaluation. Evaluation, 7(1), 25–44.
  • Guerrero, L. R., Lagha, R. R., Shim, A., Gans, D., Schickedanz, H., Shiner, L., & Tan, Z. (2018). Geriatric workforce development for the underserved: Using RCQI methodology to evaluate the training of IHSS caregivers. Journal of Applied Gerontology, 00(0), 1–17.
  • Hill, C. E., Thompson, B. J., & Williams, E. N. (1997). A guide to conducting consensual qualitative research. The Counseling Psychologist, 25(4), 517–572.
  • Hoffman, T., Bennett, S., & Del Mar, C. (Eds.). (2017). Evidence-based practice across the health professions (3rd ed.). Elsevier.
  • Institute for Healthcare Improvement. (2003). The Breakthrough Series: IHI's collaborative model for achieving breakthrough improvement. IHI Innovation Series white paper.
  • Kirkpatrick, D. L. (1994). Evaluating training and development: The four levels. Berrett-Koehler.
  • Kirkpatrick, D. L. (2010). 50 years of evaluation. TD: Talent and Development, 64(1), 14.
  • Krueger, P., Brazil, K., Guthrie, D., & Sebaldt, R. J. (2007). Predictors of job satisfaction in long-term facilities. Health Sciences Faculty Publications, 11.
  • Langley, G. L., Moen, R., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: A practical approach to enhancing organizational performance (2nd ed.). Jossey-Bass Publishers.
  • Marshall, C., & Rossman, G. B. (1989). Designing qualitative research. Sage Publishing.
  • Michael, Y. L., Whitlock, E. P., Lin, J. S., Fu, R., O'Connor, E. A., & Gold, R. (2010). Primary care–relevant interventions to prevent falling in older adults: A systematic evidence review for the U.S. Preventive Services Task Force. Annals of Internal Medicine, 153, 815–825.
  • Mosocco, D. (2009). Introducing the PACE concept: Managed care for the frail older adult. Home Healthcare Nurse, 27(7), 423–428.
  • National PACE Association. (2021, November). About NPA. Retrieved November 22, 2021, from http://www.npaonline.org/website/article.asp?id=5&title=About_NPA
  • Panel on Prevention of Falls in Older Persons, American Geriatrics Society and British Geriatrics Society. (2011). Summary of the updated American Geriatrics Society/British Geriatrics Society clinical practice guideline for prevention of falls in older persons. Journal of the American Geriatrics Society, 59, 148–157.
  • Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage Publishing.
  • Sackett, D. (Ed.). (2005). Evidence-based medicine: How to practice and teach EBM (3rd ed.). Elsevier.
  • Saks, A. M., & Burke, L. A. (2012). An investigation into the relationship between training evaluation and transfer of training. International Journal of Training and Development, 16(2), 118–127.
  • Scott, V., Gallagher, E., Higginson, A., Metcalfe, S., & Rajabali, F. (2011). Evaluation of an evidence-based education program for health professionals: The Canadian Falls Prevention Curriculum (CFPC). Journal of Safety Research, 42, 501–507.
  • Scriven, M. (1967). The methodology of evaluation. In R. E. Stake (Ed.), Curriculum evaluation (American Educational Research Association Monograph Series on Evaluation, No. 1, pp. 39–83). Rand McNally.
  • Scriven, M. (1996). Types of evaluation and types of evaluator. Evaluation Practice, 17, 151–162.
  • Tinetti, M. E. (1986). Performance-oriented assessment of mobility problems in elderly patients. Journal of the American Geriatrics Society, 34(2), 119–126.
  • Tinetti, M. E., Baker, D. I., McAvay, G., Claus, E. B., Garrett, P., Gottschalk, M., Koch, M. L., Trainor, K., & Horwitz, R. I. (1994). A multifactorial intervention to reduce the risk of falling among elderly people living in the community. New England Journal of Medicine, 331(13), 821–827.
  • U.S. Preventive Services Task Force. (2018). Interventions to prevent falls in community-dwelling older adults: US Preventive Services Task Force recommendation statement. Journal of the American Medical Association, 319(16), 1696–1704. doi:10.1001/jama.2018.3097
  • Yoshikawa, T. (2012). Future direction of geriatrics: "Gerogeriatrics." Journal of the American Geriatrics Society, 60, 632–634.

APPENDIX. EBP FALLS PREVENTION AND MANAGEMENT SEMI-STRUCTURED FOCUS GROUP QUESTIONS

As a consequence of the VGEC 24-Hour Evidence Based Practices for Falls Prevention and Management program:

  1. Have you made any changes to the way you address falls management or prevention?

    Examples/possible probes: Interventions to improve ambulation or incontinence, e.g., environmental scans for hazards, monitoring for orthostatic hypotension, using a fear of falling assessment tool

  2. Are there any plans to use new approaches in the future?

    Examples/probes: new instruments or assessments?

  3. Has the way your team functions related to falls changed?

    Examples/probes: Have more disciplines been incorporated into falls assessment? Are more disciplines consulted or involved in problem-solving around falls issues?

  4. Has the team process for identifying falls risk changed?

    Examples/probes: Has your retrospective analysis of events when someone falls changed? Would you say your approach to analyzing falls is now more in-depth? How so?

  5. Have you been able to incorporate your learning from the training into interprofessional team processes?

    Examples/probes: Is there greater involvement in following through with problem-solving solutions? You've just learned your falls rate has increased by 25%. Drawing from lessons learned in training, walk us through what your team would do after learning this?

  6. What team-related changes could you make to improve the effectiveness of your falls prevention and management efforts?

    Examples/probes: Is there anything that could be done more thoroughly or more formally? What are the potential barriers to making any of these changes? How might you be able to overcome those barriers?

  7. Are there plans to incorporate training on falls prevention and management for new hires?

    Examples/probes: Is there any specific content from the training you would like to add to new hire training if you could?

  8. Please comment on the overall approach to the training sessions. How can we improve it? What worked well?

This work was supported by the Bureau of Health Professions (BHPr), Health Resources and Services Administration (HRSA), Department of Health and Human Services (DHHS) [grant number UB4HP19210], and the Geriatrics Workforce Enhancement Program. The content and conclusions are those of the authors and should not be construed as the official position or policy of, nor should endorsement be inferred by, HRSA, DHHS, or the U.S. Government.
Copyright: © 2023 International Society for Performance Improvement


Contributor Notes

SARAH A. MARRS, PHD is an Assistant Professor in the Department of Gerontology and Director of Research for the Virginia Center on Aging at Virginia Commonwealth University (VCU). She teaches in VCU's doctoral program in Health-Related Sciences and bachelor's program in Health Services, and directs a faculty and clinician professional development program focused on interprofessional geriatrics. Her current work focuses on the impact of ageism on healthcare, professionals' recognition of and response to abuse in later life, the interplay of ageism and abuse, and geriatrics workforce enhancement through interprofessional training. Email: marrssa@vcu.edu

CHRISTINE J. JENSEN, PHD is the Director of Health Services Research with the Martha W. Goodson Center and a faculty affiliate in the Department of Gerontology at VCU. She is active with the Gerontological Society of America, the Southern Gerontological Society (past President), and serves on the Advisory Council of the Virginia Center on Aging and the Lindsay Institute for Innovations in Caregiving. Jensen is a Master Trainer with the Rosalynn Carter Institute for Caregivers and was named the 2015 Applied Gerontologist of the Year by the Southern Gerontological Society. Her current work focuses on programming and person-centered training to support family and professional caregivers. Email: christine.jensen@rivhs.com

CONSTANCE L. COOGLE, PHD is recently retired, and served as the Director of Research for the Virginia Center on Aging and the Director of Evaluation for the Virginia Geriatric Education Center, as well as an Associate Professor in the Department of Gerontology at VCU. She taught the research methods course in VCU's Bachelor of Science in Health Services program. She administered the Alzheimer's and Related Diseases Research Award Fund for the Commonwealth of Virginia for well over 30 years. She is a former Chair of the Virginia Governor's Alzheimer's Disease and Related Disorders Commission, and past President of the Alzheimer's Association, Greater Richmond. An accomplished experimental psychologist, Dr. Coogle is a fellow in the Gerontological Society of America and a past President of the Southern Gerontological Society. Her research interests include geriatric education, substance abuse and misuse in older adults, assisted living violations, and the direct care workforce crisis. Email: ccoogle@vcu.edu
