Online Publication Date: 18 Jul 2024

APPLYING GIBBONS' ARCHITECTURAL INSTRUCTIONAL DESIGN APPROACH TO EVALUATE NON-PROFIT TRAINING

Article Category: Research Article
Page Range: 205–212
DOI: 10.56811/PFI-23-0003

This article describes how a research team at Boise State University adapted the Gibbons (2013) instructional design layers model to evaluate the design of a nonprofit organization's training curriculum. In essence, the evaluation team relied on the layers model to deconstruct the curriculum and analyze each individual layer. This article summarizes the team's approach and the rationale behind the analysis of each layer. The team's key learnings are also set forth.

EVALUATING NONPROFIT TRAINING: APPLICATION OF GIBBONS’ LAYERED APPROACH TO INSTRUCTIONAL DESIGN

The Learning Strategy Lab within the Organizational Performance and Workplace Learning (OPWL) program at Boise State University (BSU) recently partnered with Humentum, a global humanitarian organization that provides training and consultation services to other nonprofit organizations. The Lab team conducted an independent review of Humentum's training catalog using a representative sample of course material. This article describes the lab's evaluative process to highlight an alternative method for applied evaluation. We employed Gibbons' (2013) architectural layers model of instructional design as the organizing framework for our evaluation. Below, we characterize the distinct layers that comprised the evaluation, how the research team applied adult learning principles to these layers, the final recommendations provided to Humentum, and, lastly, the lessons learned as a result.

Principles of Learning

Early in the evaluation of Humentum’s training curriculum, the research team observed that several principles of cognition and information processing were highly relevant to our analysis and useful for any recommendations offered. Therefore, to provide context for the sections that follow, we first summarize five salient learning principles.

Principle 1: Information Processing of Multimedia

Information processing is the mechanism by which sensory input is received and processed by the brain. In an online environment like that of Humentum's learning management system (LMS), a learner initially processes information via two primary channels: auditory and visual (Mayer & Moreno, 1998, 2003). The auditory channel processes sounds, speech, and verbal cues (Mayer, 2009). The visual channel processes colors, shapes, images, graphics, characters on a page or screen (i.e., text), and movement (Mayer, 2009). Sensory input first enters the sensory memory channels through one's eyes and ears. From there, the input moves to working memory, where information is organized. However, there is a finite limit to the amount of information each channel can process at any given time; overloading either channel exceeds the cognitive capacity of the learner, and the information is not fully processed by the brain (Mayer, 2021).

Principle 2: Cognitive Load

Closely related to Principle 1, cognitive load refers to the weight or stress placed on the processing capacity of the learner. When information is overly complex or too much is presented at once, it becomes difficult to comprehend fully, and the resulting strain can lead to cognitive overload (Kalyuga, 2007). In other words, cognitive overload occurs when the processing demands evoked by the learning task exceed the processing capacity of the cognitive system. Cognitive overload prevents new information from integrating with prior knowledge stored in long-term memory, so the new information cannot be used to construct new knowledge. Because it directly impedes learning, cognitive overload is a decidedly suboptimal condition (Sweller, 1994).

Principle 3: Attention and Engagement

High levels of attention, engagement, and focus are needed to support the brain’s processing of information and, ultimately, for learning to occur (Huberman, 2022). In e-Learning environments especially, where learners may encounter a variety of guidance types, it is important to ensure that attention, engagement, and focus are accounted for in the design of the material. The ARCS (attention, relevance, confidence, and satisfaction) model is one approach to meeting that instructional goal (Keller, 1987, 2016). Tactically, a professional can apply ARCS through the use of humor, storytelling, illustrative examples, conflict, variety, and intermediate rest (DiCarlo et al., 2017; Huberman, 2022; Keller, 2010; Snow & Lazauskas, 2018).

Principle 4: CRAP Model of Design

The CRAP acronym refers to a model of visual design principles that help designers create engaging and accessible instruction (Da Silva, 2022). The "C" of the acronym represents Contrast, which refers to the harmonious use of color, tone, size, shape, and direction of elements on a page in a course. Next, Repetition relates to the consistent use of elements from slide to slide or page to page. Third, Alignment means that all elements on a slide or page should be relevant, physically aligned, and easy to read. Lastly, Proximity refers to locating elements on a page or slide with or near other similar elements.

Principle 5: Knowles’ Principles of Andragogy

Knowles’ six principles of andragogy outline the factors that should be considered when designing instruction for adult learners (Knowles, 1978). The following list represents Knowles’ six principles:

  1. The need to know: Adult learners prefer to know why they are learning something and the benefits of what they are learning in order to invest their time and effort and to sustain motivation.

  2. Self-concept: Adult learners tend to engage in self-directed learning activities, which means they are already skilled to some extent at self-management (compared with children or adolescents).

  3. Experience: The prior experience of adult learners should be considered and incorporated into the instruction in order to manage the positive and negative impacts those experiences may have on their learning.

  4. Readiness to learn: As we noted above with respect to engagement and attention, adult learners vary widely in their preferred styles of guidance. Therefore, instruction for adult learners should include a variety of learning events, both structured and independent.

  5. Orientation to learn: Adult learners are motivated by the benefits of their learning, such as real-life application or arriving at a solution to a problem. Therefore, instruction for adults should be focused on tasks and problems.

  6. Motivation to learn: Building on assumption #5, adults are often intrinsically motivated, meaning their motivation comes from an internal drive to learn.

Evaluation: Application of Gibbons’ Framework

To systematically evaluate the curriculum, we borrowed from Gibbons' (2013) An Architectural Approach to Instructional Design (Table 1). Specifically, we referred to Gibbons' instructional design "layers" as a starting point, which we then modified to fit the purposes of our task.

TABLE 1 Instructional Design Layers Used to Evaluate

Façade Layer

Akin to a home's "curb appeal" when shopping for a new house, we used the term façade to represent our initial, superficial observations of the LMS' aesthetic. For this layer we applied an eye test to quickly analyze the home page, the site's navigation, loading speed, and the initial visual aesthetic of each course. This layer was relatively brief compared to the time dedicated to the subsequent layers. The questions we considered for each subtopic of this layer are reflected in Table 2 below.

TABLE 2 Considerations While Evaluating the Façade Layer

Structural Spine Layer

Perhaps the most critical layer of evaluation consisted of the structural spine of the course. This included the learning objectives, high-level course structure and design, learning activities, and assessments. The key for this layer was to determine if the spine of each course was aligned. “We don’t want scoliosis of the spine” is a phrase often echoed while evaluating this layer. That is, we approached the evaluation with the view that strong instruction possesses strong alignment between the elements of the spine (i.e., learning objectives, course structure, learning activities, assessments). Table 3 below depicts the questions we considered while evaluating the structural spine layer.

TABLE 3 Considerations While Evaluating the Structural Spine Layer
Learning Objectives

When evaluating the learning objectives, we wanted to ensure that they were placed consistently throughout the course. Consistent placement of objects on a screen reduces learners' cognitive load, allowing them to spend their processing capacity on the course content rather than on deciphering navigation.

We used the Tyler Rationale to analyze whether the objectives were effective. According to Tyler (1969), learning objectives guide the development of desired performance. These objectives should then be followed by activities explicitly selected to advance learners toward them, and by assessments to measure whether the objectives have been met. We leveraged Mager's (1997) principles of learning objectives to describe what measurable performance looks like and recommended that ideal objectives include three key parts:

  1. Performance, or what the learner is expected to be able to do or produce.

  2. Conditions under which the performance is expected to occur.

  3. Criteria, or the level of competence that should be met or exceeded.

To illustrate, a hypothetical objective containing all three parts might read: "Given a completed project budget template (condition), the learner will identify the line items that require donor approval (performance) with 100% accuracy (criterion)." In addition, the hierarchical classifications from Bloom's Taxonomy assisted in the selection of verbs to create objectives appropriate to the order of thinking that the course is intended to achieve.

Course Structure & Design

The course structure consists of any patterns in the sequence and order in which the course is presented to the learner. Based on research on the information processing of multimedia, learning is optimized when audio/verbal information is paired with visual/pictorial information (Mayer & Moreno, 2003). When only one mode is used continuously without a break, only one form of information processing is engaged, which can cause cognitive overload on that particular channel.

When looking at how the course is designed and structured, we kept the CRAP model of design in mind. Specifically, we focused on contrast (i.e., the text color, background colors, and other visuals), repetition (ensuring that elements in each section or page are consistent), alignment (all elements on a page should be physically aligned and easy to read), and proximity (related elements on a page should be near each other, and unrelated elements should be separated).

Activities

The principles we applied to the evaluation of activities were as follows: (1) information processing of multimedia and (2) the CRAP framework of design. Additionally, we evaluated the activities by relying on the ARCS model of motivation to analyze whether the activities successfully captured the learners' attention. We found the activities to be engaging and conducive to meaningful learning. We also considered whether learners were more likely to participate in the activities due to intrinsic or extrinsic motivation.

Assessments

Strong assessments provide learners with feedback regarding the degree to which they have mastered the training material (Wiliam, 2011). In this manner, assessments serve as a codified measure of student learning. This can be achieved using tests, quizzes, assignments, demonstrations of a desired behavior, etc. (Taras, 2002). In any strong course design, therefore, the learning objectives and assessments should be aligned so that learners know what to expect and do not have to process any "surprises" resulting from an unstructured, misaligned course design (Kalyuga, 2007). When evaluating the assessments, we also considered their frequency and how they measured learning. Where a clearly better approach existed, we considered the alignment of the curriculum's spine to be "off," or misaligned.

Content & Message Layer

For this layer, we analyzed both the quantity and quality of the information presented, as well as the feedback provided to the learner (see Table 4 below). For the quantity of information, we considered the amount of information presented in each format. In online environments, many paragraphs of uninterrupted text are seldom appropriate; conversely, appropriate video length varies by subject matter (Lagerstrom et al., 2015). In general, philosophical topics can effectively leverage longer videos, whereas physical science, math, statistics, or engineering topics are better suited to shorter videos (Guo et al., 2014; Thornton et al., 2017).

TABLE 4 Considerations While Evaluating the Content & Message Layer

We then assessed the quality of information presented to the learner by asking, "Is the information presented in a manner that invites engagement and consumption?" and "How is the audio/video quality of the multimedia pieces or presentations?" (Mayer & Moreno, 2003). Again, the key principle is that a consistently text-heavy presentation of information is not ideal for online consumption: it imposes a heavier cognitive load than audio/visual forms of presenting the same information (Mayer, 2021).

Feedback

Feedback can arrive in different forms: discussion questions, knowledge checks/quizzes, group sessions, or direct one-to-one messages. The research suggests that strong feedback consists of an immediate and clear articulation of (1) the current state, (2) the desired state, and (3) the path from current to desired (i.e., how to reduce the performance gap) (Martinez, 2023). For example, feedback on a hypothetical budgeting exercise might note that the draft omits indirect costs (current state), that a complete budget accounts for every cost category (desired state), and that the learner should revisit the module's cost checklist before resubmitting (the path between the two). Research has shown that becoming aware of the difference between these two states (i.e., desired and current) engages the receiver's brain systems that drive attention, effort, and persistence in pursuit of a goal.

To ensure that learners received effective feedback from the course, we used the ARCS theory of motivation to guide our evaluation process (Keller, 2010, 2016). Specifically, we focused on the relevance of the feedback delivered by considering how tightly it connected with the task. Lastly, Knowles' (1978) six principles of andragogy teach that adults learn best when instruction is built on their prior experience; therefore, we also analyzed whether the feedback took this into account.

We further asked ourselves, "Is this clear?" Quality feedback should make complete sense, with no confusion; learners cannot learn from mistakes if they do not understand the feedback. Next, we considered, "Is it timely?" Ideally, feedback should come relatively soon after the learner has submitted a response or taken some initial action (Boud & Molloy, 2013). To determine whether feedback was effective, we asked, "Is it actionable?" That is, can the learner do something with the feedback they receive? Does it help them act? Simply stating "Your work is not good" and nothing more is of no help to the learner. The purpose of feedback is to help the learner see the exact gap between their current work and mastery level: what do they need to do more of, less of, or differently to improve? Similarly, saying "Great job, well done" without explaining why is equally ineffective. Quality feedback tells the learner what was done well and why (Keller, 2010).

Multimedia Layer

For this layer, we evaluated the interaction between the various audio-visual elements (see Table 5 below). We specifically considered the interactivity level, production quality, and technical performance of the multimedia elements. We asked ourselves the following questions: "How do the audio-visual (AV) elements of the learning experience play together?" "Do the visual components complement and supplement the audio, or do they work against it?"

TABLE 5 Considerations While Evaluating the Multimedia Layer

Control Layer

For the control layer of the Humentum online training courses, we examined how a learner might manipulate the computer screen and, more broadly, how the learner interacts with, or "controls," the learning environment within that screen. For in-person experiences, this means examining how the learner interacts with the physical space (e.g., the classroom) and the learning experience; for online environments, it includes the manner in which the learner is able to navigate and manipulate the screen. In our evaluation, we asked ourselves, "Is the navigation in the platform intuitive? Is it clear where, when, and how a learner should go next?"

Accessibility

To evaluate the accessibility of the course, we utilized the POUR principles as a guide (Business Disability Forum, 2020). We wanted to ensure that the course material was perceivable (i.e., everyone is able to perceive the content, even if they access information in a nontypical way), operable (i.e., users can operate the application using a variety of methods and forms of technology), understandable (i.e., content is clear and concise), and robust (i.e., what is developed can be used by reasonably outdated, current, and anticipated technology standards and assistive technologies).

Results and Recommendations

After examining each layer using relevant learning principles, we made several recommendations meant to bring Humentum’s catalog more in line with design practices for optimal learning. We separated our recommendations into suggested changes to their learning objectives, course structure and design, course activities, assessments, feedback, and accessibility.

Course Objectives

The majority of the learning objectives developed by Humentum were measurable and successfully aligned with the content of their courses. For those that were unclear or not measurable, we recommended altering the language so that the objectives focused on what learners should be able to accomplish by the end of the course. This was meant to better inform learners of the purpose of each assignment, how it related to the big picture, and how Humentum expected to measure whether learners had met each objective.

Not all learning objectives were located in the same place across Humentum's courses, so we also recommended referencing these objectives more prominently throughout each course and displaying them in a consistent location. Doing so would reinforce which objective learners are currently working toward when completing a specific activity. It would also reduce the cognitive load they might experience when having to search for the objectives in different locations or being confused about how a certain task relates to their current objectives.

Course Structure

Humentum had a strong foundation for their course structure that was pleasant to view and navigate, but the courses tended to be text-heavy and made sparse use of video, audio, or images. Recognizing that this may have been a conscious decision due to Humentum's global audience, who may not have access to high-speed internet, we recommended balancing the use of text, graphics, and multimedia where possible and appropriate. Additionally, when considering which visuals to include and where to include them, we noted that Humentum strongly leveraged contrast, but we suggested being mindful of other visual elements such as repetition, alignment, and proximity. When text was a necessity, we recommended making greater use of techniques like storytelling and chunking to increase both attention and retention.

Built into the platform Humentum used for their courses, Articulate Rise, are “Experience Points” that learners earn by completing tasks such as self-check quizzes, accessing necessary external content, and watching videos to completion. These points accumulate and place learners on a “leaderboard,” in competition with other learners in the course. We recommended clearly explaining their purpose to learners who may not be familiar with similar video game terminology. Framing the experience points and leaderboard as another form of self-assessment, rather than competition, may provide more encouragement to learners who enjoy that form of gamification and lessen the potential discouragement for those who prefer not to be placed in competition with their peers in what they thought would be a collaborative environment.

Learning Activities

In reviewing the learning activities of Humentum's courses, we found significant use of worksheets, learning journals, and other external resources (outside of the learning site) throughout the courses. Before beginning our review, Humentum informed us that many of their learners used mobile devices like cell phones or tablets to complete their courses. Navigating between tabs, pages, and external resources, especially while attending a webinar-based course, proved difficult on mobile devices. With this in mind, we recommended providing more places to access these extra materials outside of a single module and considering alternative ways of prompting learners to self-reflect on the material. We also advised varying the prompts in the learning journals between modules to prevent burnout from repeated questions.

In the webinar-based courses, we observed that facilitators did not always debrief the larger group after smaller breakout sessions. This made it difficult for learners who might view the content later to get the full benefit of the small group discussions, since those were not captured in the webinar recordings.

Assessments

The assessments used by Humentum were mostly ungraded knowledge checks scattered throughout each course. Some modules had a more formal summative assessment that could be repeated indefinitely until passed and reused the same questions each time, allowing learners to force-complete the assessment.

We recommended adding prechecks throughout the modules to assess the knowledge learners have before completing certain sections. This would allow Humentum to better quantify how much the course content has improved a learner’s understanding of the subject. We also suggested using word banks and randomized questions in their formal assessments to encourage learners to study for a higher score, rather than repeating assessments to skip ahead to later content.
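As a simple illustration of this recommendation, the sketch below draws a fresh question set from a larger bank on each attempt. It is a minimal, hypothetical Python example: the question bank and parameters are our own placeholders, and any real implementation would live inside the assessment platform rather than in standalone code.

    # Hypothetical sketch: each attempt draws a different subset of
    # questions, so repeating the assessment rewards actual study rather
    # than memorizing one fixed answer key.
    import random

    QUESTION_BANK = [  # placeholder items standing in for a real bank
        "Define cognitive load.",
        "Name the two channels in Mayer's model of multimedia learning.",
        "List the three parts of a Mager-style learning objective.",
        "State one of the four POUR accessibility principles.",
        "Give an example of intrinsic motivation.",
        "What does the 'R' in the CRAP design model stand for?",
    ]

    def build_attempt(bank, n_questions=3):
        # random.sample selects without replacement and returns the
        # chosen questions in a randomized order.
        return random.sample(bank, n_questions)

    print(build_attempt(QUESTION_BANK))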

Feedback

Looking at the type and sophistication of feedback provided both to learners by Humentum and vice versa, we found it was sometimes unclear how relevant certain content was to the course objectives. The primary example of this was the group discussions present at the beginning of each module. Since we were participating in a private instance of the course, we were not able to fully determine how these discussions benefitted students and what their incentives were to complete them. After further discussion with Humentum, we discovered these were largely used to develop social connections between learners, since with a large number of participants, any content-focused question would quickly draw repetitive responses. We recommended clarifying the purpose of these discussions for learners and incentivizing participation by awarding experience points for posting in the discussions.

Accessibility

Lastly, we reviewed the courses for accessibility. Our recommendations in this area were minor, as Humentum had already put significant consideration into their audience and how people were accessing their courses. As mentioned above, there were some technological glitches when navigating certain course elements on mobile devices. Although most images were appropriately tagged for screen readers, some had alt text consisting of random characters or were missing alt tags altogether. We therefore recommended correcting these images, or marking them as decorative, so they would be read appropriately by the software used by learners with visual impairments.
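To illustrate the kind of check involved, below is a minimal sketch of an alt-text audit. It assumes course pages can be exported as HTML and uses the beautifulsoup4 library; the file name is a hypothetical placeholder, and this is not the tooling Humentum or the evaluation team actually used.

    # Hypothetical illustration: flag <img> elements whose alt text is
    # missing or appears to be random characters. An empty alt="" is
    # intentionally allowed, as it marks an image as decorative so
    # screen readers will skip it.
    from bs4 import BeautifulSoup

    def audit_alt_text(html):
        issues = []
        soup = BeautifulSoup(html, "html.parser")
        for img in soup.find_all("img"):
            alt = img.get("alt")
            src = img.get("src", "<no src>")
            if alt is None:
                issues.append(f"{src}: missing alt attribute")
            elif alt and not any(ch.isalpha() for ch in alt):
                issues.append(f"{src}: alt text may be garbled: {alt!r}")
        return issues

    with open("course_page.html", encoding="utf-8") as f:  # placeholder file
        for issue in audit_alt_text(f.read()):
            print(issue)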

The video content that Humentum used had closed captions, so we recommended regularly reviewing the most common languages of their users to ensure captions are available to as many learners in their global audience as possible. Lastly, some content was significantly slower to load on certain browsers. Although this may have been an issue caused by Articulate Rise, we suggested seeking solutions to make the experience smoother for learners using browsers other than Chrome.

Conclusion

Lessons Learned

The evaluation team discovered the challenge of adopting recommendations given the constraints faced by a nonprofit organization. Software and technological constraints, funding, and a combination of environmental and social factors made some recommendations difficult to implement. The team learned from Humentum that some of their users had access only to a computer likely running an outdated version of Internet Explorer or, more challenging still, used only a cell phone or tablet to access the courses. This insight provided a boundary condition to consider when making recommendations related to course materials, workbooks, and the multimedia aspects of the courses.

The team learned that several initial recommendations, while rooted in learning principles, did not apply well to the specific context of Humentum. For example, the team suggested that webinars and breakout rooms be recorded to help asynchronous learners feel like active participants in the learning process and to help boost engagement. But we discovered that the application of this tactic had been attempted in the past with no success. Humentum found that when participants know they are being recorded, they tend to speak up less, and interactions between learners become rigid and calculated.

Further, the student evaluation team discovered the tension between theory and application when Humentum shared that quizzes and tests were viewed as barriers by learners. Humentum reported that, in the past, participation had dropped when learners were required to pass a quiz to reach the next lesson. Recognizing the pattern of higher completion rates when quizzes and tests were removed was a key insight for the student team; it allowed the team to modify its initial recommendation and suggest pre- and postknowledge checks instead. Lastly, the team learned that funding, staffing, and other operational needs can make specific recommendations challenging to implement, especially for a nonprofit organization.

Implications

Through the evaluation of Humentum’s training courses, it is our belief that Gibbons’ (2013) architectural approach to instructional design can be successfully applied as a framework for evaluation. In this manner, Gibbons’ work extends to both sides of instructional design: the development and the evaluation. Through review of this case, our hope is that professionals will gain insight into an additional lens through which to view instructional design and its evaluation.

Copyright: © 2023 International Society for Performance Improvement

Contributor Notes

SETH-AARON MARTINEZ (PhD) is a consultant and award-winning author whose scholarship centers on the cognitive, behavioral, and socio-emotional dimensions of expertise development. Dr Martinez is an assistant professor of Organizational Performance and Workplace Learning at Boise State University, where he directs the Learning Strategy Lab. He also serves as the Associate Editor for the Journal of Non-Profit Innovation (JONI), and is a brain-based executive coach, certified through the NeuroLeadership Institute. Prior to academia, at both Meta and Adobe he built global training programs and consulted with executives on the strategy behind their training at scale. At Stanford University, he consulted with faculty to design MBA and Executive Ed coursework in the Graduate School of Business.

SCOTT HARRINGTON is a higher education professional with many years of experience helping students achieve their educational goals. As a nontraditional student, he is especially interested in increasing access to education for underserved populations. He currently works as an academic advisor for electrical engineering and computer science students at Oregon State University (OSU). His educational background is in communication, as well as Buddhist philosophy and ethics. He holds a bachelor's degree in Religious Studies from OSU and is pursuing further education in Mandarin Chinese, Chinese history and politics, and revolutionary Buddhist movements in preparation for graduate programs in East Asian Studies. In his personal life, he enjoys tinkering with computers, playing PC games, international travel, and spending time with his spouse and their dog.

CLAUDIA ACHILLES is an instructional designer at the Idaho National Laboratory. She is also currently an MS student in Organizational Performance and Workplace Learning at Boise State University. She aspires to help lead change in an organization to improve company culture. Claudia received a Bachelor of Science in Interdisciplinary Studies with a concentration in sociology and psychology in 2019 from Eastern Oregon University.

HALAH MOHAMMED is an advisor, instructional designer, and writer. Halah earned their BA in English Literature at Carleton College and their MS in Organizational Performance and Workplace Learning at Boise State University. Halah founded Halah’s Instructional Design Lab (HIDL), a consultancy for organizations and people seeking workplace solutions and workplace opportunities. A former educator and academic advisor for K-12 and higher education, they advise adult creatives and careerists in need of clarity on ideas and goals through their professional advising service Go Ask Halah. Halah’s creative writing has been published in Denver Reflections, Leon Art Gallery, and the Intersections Zine. When they are not working, you can find them writing, knitting, traveling, and playing DnD.

PATRICIA TJAN is a 2024 graduate of Boise State University's Organizational Performance & Workplace Learning master's program. She currently works as a leader on the learning and development team of a healthcare organization. Her work involves overseeing her team as it develops and delivers learning curricula for more than 200 employees in the revenue operations department through strategic collaboration and engaging learning. Patricia finds fulfillment in supporting others' career growth through impactful learning programs and creative solutions.
