Assessment - the heart of the student experience

When I first attended my doctoral orientation, I shared my keen interest in student assessment, and one of the faculty in the program asked a key question: assessment of what? That remains the fundamental question, as the focus in higher education has landed squarely on assessment. And the intense scrutiny is coming from different places, some enduring, some new.

The importance of assessment has long been recognized. Assessment has been described as “the heart of the student experience” and is “probably the single biggest influence on how students approach their learning” (Brown & Knight, 1994, cited in Rust, O’Donovan & Price, 2005).  Assessment is also highly emotional; students describe it as a process that invokes fear, anxiety, stress, and judgment (Vaughn, Cleveland-Innes & Garrison, 2013, p. 81).  It is fair to say, “nowhere are the stakes and student interest more focused than on assessment” (Campbell & Schwier, 2014, p. 360).

Other key trends in higher education have heightened the focus on student assessment. Notable trends include accountability and the “increasing level of scrutiny applied to their [colleges and universities] ability to capture and report performance outcomes” (Newman, 2015, p. 48). The need for robust quality assurance processes responds both to the still-lingering perception that online learning is ineffective and to the rapid growth of online learning, which is becoming recognized as a crucial 21st-century skill, not just a mode of delivery. The growing demographic of adult learners who seek to gain competencies valued by employers has also heightened awareness of the challenges and opportunities in assessment. A 2015 study from Colleges Ontario shows that 44 percent of current Canadian college students already possess post-secondary experience and return to college to find “that extra piece that makes them employable” or to “upgrade skills in a particular area” (Ginsberg, 2015).

As such, any discussion of assessment has to confront one of the great current debates in higher education. Wall, Hursh, and Rodgers (2014) define assessment as “a set of activities that seeks to gather systematic evidence to determine the worth and value of things in higher education,” including the examination of student learning. They assert that assessment “serves an emerging market-focused university” which has displaced the goals of providing a liberal education, developing intrinsically valuable knowledge, and serving society; the purpose of educational attainment has narrowed to serving society through economic development. This narrow focus has led some to suggest that students “come into play only as potential bearers of skills producing economic value rather than as human beings in their own right” (Barnett & Coate, 2005). This author does not (at this time) take a stance on whether this development is good or bad, but recognizes that learning and assessment are inextricably linked, and that both increasingly focus on skills development. This focus leads directly to assessment because “one of the most telling indicators of the quality of educational outcomes is the work students submit for assessment” (Gibbs, 2010, p. 7).

Assessment, then, provides evidence of the “outcome” in any “outcomes-based” approach to education. In Ontario, for example, “postsecondary learning outcomes are rapidly replacing credit hours as the preferred unit of measurement for learning,” but “the expanded presence of learning outcomes at the postsecondary level has outstripped our abilities to validate those outcomes through assessment” (Deller, Brumwell & MacFarlane, 2015). Assessment “remains the keystone of the learning outcomes approach,” and assessment practices are increasingly focused on demonstrating acquisition of learning outcomes for the “purposes of accountability and quality measurement,” which are increasingly judged by their alignment with market-oriented aims and with closing the Canadian “skills gap,” to which Canada loses as much as $24.3 billion in economic activity (Bountrogianni, 2015). The perspective of students as potential bearers of skills to support economic development drives the move towards authentic assessment, where students provide “direct evidence of meaningful application of learning” (Angelo, 1999; Maki, 2010, as cited in Goff et al., 2015) by using skills, knowledge, values, and attitudes they have learned in “the performance context of the intended discipline.”

And yet a book on online assessment theory and practice has never been more needed. In the Sloan Online Survey (Allen & Seaman, 2015), the proportion of academic leaders who report that online education is a critical component of their long-term strategy grew to 70.8% in 2015 (p. 4), an all-time high. The growth rate of distance enrollments has slowed in recent years but continues to outpace the growth rate of the higher education student body in the United States. While faculty perceptions of online learning lag behind those of administrators, Contact North’s online learning wish list for 2016 includes a wish that “we stop debating about whether or not online learning is as effective as classroom teaching.” There is a clear evidence base that, at worst, online learning makes no significant difference, and at best, the affordances of online technology provide enhancement and opportunities to transform the learning experience and demonstrate learning outcomes. Learners “expect authentic, engaged learning” that involves “a range of different learning activities appropriate to their own learning needs.” According to this report,

the focus [for online learning] has been on courses, programs and learning processes. It has not been on assessment. But that is changing with the development of new methods of assessment involving simulations and games: adaptive learning engines which enable assessment as learning is taking place; new methods for developing assessment tools using machine intelligence; and new developments in ensuring the security and integrity of online assessments.

Contact North claims “we are approaching an era in which new thinking about how we assess knowledge, competencies and skills start[s] to bear fruit.” This new era includes badges, verified learning certificates, and micro-credentials, as well as Prior Learning Assessment and Recognition (PLAR) to facilitate student mobility. In this new era, assessment will also become a central component of any definition of quality. Within the Ontario Quality Assurance Framework, for example, “each academic unit is asked: What do you expect your students to be able to do, and to know, when they graduate with a specific degree? How are you assessing students to make sure that these educational goals have been achieved?” (p. 12). Assessment flows directly from learning outcomes, and its importance in the educational transaction has grown. The development of new programs in Ontario requires identification of learning outcomes and the “methods to be used to assess student achievement of those outcomes.” The strengthening focus on quality, and the new opportunities afforded by technology, certainly demand a fresh look at assessment.

This book works to move towards this new era and away from the current era, where the “field of educational assessment is currently divided and in disarray” (Hill & Barber, 2014). This is not an entirely new claim. Two decades ago, Barr and Tagg (1995) declared that a shift had occurred in higher education from an instruction paradigm to a learning paradigm and that learner-centered assessment was a central element in this new paradigm (Webber, 2011).

This epochal shift in assessment has moved like a glacier, slowly and yet with dramatic effect. The “traditional view of assessment defines its primary role as evaluating a student’s comprehension of factual knowledge,” whereas a more contemporary definition “sees assessment as activities designed primarily to foster student learning” (Webber, 2011). Examples of learner-centered assessment activities include “multiple drafts of written work in which faculty provide constructive and progressive feedback, oral presentations by students, student evaluations of others’ work, group and team projects that produce a joint product related to specified learning outcomes, and service learning assignments that require interactions with individuals [in] the community or business/industry” (Webber, 2011). As Webber points out, there is a growing body of evidence from multiple disciplines (Dexter, 2007; Candela et al., 2006; Gerdy, 2002) illustrating the benefits of learner-centered assessment, but these examples “do not provide convincing evidence that reform has actually occurred.”

Perhaps one of the greatest transformations has been the development of the Community of Inquiry framework. While a thorough and comprehensive treatment of the Community of Inquiry (CoI) model is beyond the scope of this book, the CoI’s approach to assessment falls very much in line with the spirit of this new era of assessment. Within the Community of Inquiry framework, assessment is part of “Teaching Presence,” “the unifying force” which “brings together the social and cognitive processes directed to personally meaningful and educationally worthwhile outcomes” (Vaughn et al., 2013, p. 12). Teaching presence consists of the design, facilitation, and direction of a community of inquiry, and design includes assessment, along with organization and delivery. “Assessment very much shapes the quality of learning and the quality of teaching. In short, students do what is rewarded. For this reason one must be sure to reward activities that encourage deep and meaningful approaches to learning” (Vaughn et al., 2013, p. 42).

In designing assessment through the Community of Inquiry lens, it is essential to plan and design for the maximum amount of student feedback. “The research literature is clear that feedback is arguably the most important part in its potential to affect future learning and student achievement” (Hattie, 1987; Black & Wiliam, 1998; Gibbs & Simpson, 2002 as cited in Rust, et al., 2005).  Good feedback helps clarify what good performance is, facilitates self-assessment and reflection, encourages teacher and peer dialogue around learning, encourages positive motivational beliefs and self-esteem, provides opportunities to close the gap between current and desired performance and can be used by instructors to help shape teaching (Nicol & Macfarlane-Dick, 2006 as cited in Vaughn, et al., 2013, p. 82).

Planning for positive feedback can help students “succeed early and often.”  Positive feedback environments can help students “get on winning streaks and keep them there” so that this emotional dynamic feeds on itself and students get into “learning trajectories where optimism overpowers pessimism, effort replaces fatigue and success leaves failure in its wake” (Stiggins, 2008, p. 37). 

Direct instruction, part of teaching presence, is most effective when the feedback is encouraging, timely, and specific. This further confirms that, irrespective of teaching environment, “instructors who take the time to acknowledge the contributions of students through words of encouragement, affirmation or validation can achieve high levels of teaching presence” (Wisneski, Ozogul, & Bichelmeyer, 2015). In addition to providing feedback, a social constructivist approach requires that students actively engage with the feedback. “Sadler (1989) identified three conditions for effective feedback. These are (1) a knowledge of the standards; (2) having to compare those standards to one’s own work; and (3) taking action to close the gap between the two” (Rust et al., 2005). To promote student engagement with feedback, “instructors in a blended community of inquiry are also encouraged to take a portfolio approach to assessment. This involves students receiving a second chance or opportunity for summative assessment on their course assignments” (Vaughn et al., 2013, p. 93). Providing multiple opportunities to submit work is highly authentic to real-world work contexts; it encourages students to work to close the gap between current and desired performance, and it exemplifies the spirit of learner-centered assessment.

Peer assessment is another component of learner-centered assessment (see Students teaching students: Or the blind leading the blind?) and can be a particularly useful approach: “one of the strategies that can improve the quality of education, particularly in web-based classes, is electronic peer review. When students assess their co-students’ work, the process becomes reflexive: they learn by teaching and by assessing” (Nagel & Kotze, 2010). A useful model to account for self-reflection, peer assessment, and instructor-led assessment is the Community of Inquiry’s Triad Model (Vaughn et al., 2013, p. 95). The triad model also helps identify the most beneficial technologies and interactive platforms to be used. “Technology-enabled collaborative learning environments can be a rich context for assessing higher order skills, as long as the purpose and design of the assessment is clear about its targets and the assessment tasks are constructed to include technology as part of the collaborative problem solving task and the assessment provides timely useful feedback to teachers and students” (Webb & Gibson, 2015). To create synergy among the learner, the task, and the technology, the use of technology must directly support the learning outcomes and the real-life nature of the activities (Herrington, Oliver & Reeves, 2006). In other words, the technology needs to be used in an authentic fashion.

In addition to assessment's enduring importance to the student experience and its growing importance to learning outcomes and quality assurance, the Community of Inquiry has also focused on assessment as a research agenda for proof of cognitive presence, the most elusive of all the presences. “Complementing other studies, they (Shea et al., 2011, p. 109) found little evidence of student engagement at the higher levels of cognitive presence, irrespective of their grades. They propose various explanations for this, including a failure to develop measures of assessment of learning that are meaningful to both students and instructors, and recommend more research exploring correlations between cognitive presence and instructor assessment” (Evans & Haughey, 2014).

Students teaching students: Or the blind leading the blind?

This began as a paper for EDDE 802 - Research Methods in Education and then became a chapter on peer assessment in online learning contexts.

The venerable reputation of peer assessment. The ability of peer-to-peer interaction to enhance learning is well documented. Topping (2009) references George Jardine, professor at the University of Glasgow from 1774 to 1826, who described the methods and advantages of peer assessment of writing. The 1842 aphorism “To teach is to learn twice” is attributed to the French essayist Joubert (as cited in Vaughn, Garrison & Cleveland-Innes, 2014, p. 87). Over one hundred years later, McKeachie (1987) wrote, “The best answer to the question, ‘What is the most effective method of teaching?’ is that it depends on the goal, the student, the content and the teacher. But the next best answer is, ‘Students teaching other students.’” McKeachie includes several studies regarding the effectiveness of peer instruction, such as Gruber and Weitman (1962) and Webb and Grib (1967), which show that peer discussions without an instructor produced significant improvements in achievement tests and in student interest in their discipline.

The Webb and Grib study involved 1,400 students in 42 courses, and the favorable results included gains in student responsibility and a shift from memorization to comprehension and understanding. More importantly, the authors note that at the completion of their investigation of student-led discussions, “a significant proportion of teachers and students had changed the conception of their roles and their ideas on how students learn,” but “since the objectives of the project did not focus on these changes, no provision was made for their measurement, if, indeed, measurement of them is possible [emphasis added]” (1967, p. 71). Still, instructors came away with a profound impression that students were more capable on their own: students who had rarely spoken out in class not only participated but expressed ideas of “good quality” (1967, p. 75). Instructors also discovered that their role as teachers had shifted. After listening to the discussions, instructors became aware of students’ varied interpretations of lecture material, grew concerned about the effectiveness of their own communication, and ultimately redefined their role. The new role meant “becoming more attuned to student needs, being more aware of what their students were thinking, and recognizing the difficulties students were having with material” (1967, p. 75). This speaks to the primary benefit of peer assessment: it makes the learning process visible in ways that traditional assessment strategies often cannot. But it also speaks to one of the pitfalls of peer assessment – assessor competence.

Students’ perception of their role also changed from a passive, receptive attitude to a more active, responsible one. As one of the students quoted in their study relates, “It finally puts the responsibility of educating one’s self on the student’s shoulders, where it should be, for it is the student who will learn and understand more when it is he who discovers for himself the truth or falsity of the course’s content” (p. 76). And, much like the instructors, students also became cognizant of the abilities of their fellow students. They gained greater respect for the ability and the divergent views of other students, and the more reticent students seemed to “blossom” in the more relaxed atmosphere of the peer group. The peer-led discussions strengthened social presence (e.g., “They help to develop a habit of listening as well as speaking. They help to teach respect for another person’s point of view.”) and cognitive presence: “students often realized while attempting to explain the material to others that they did not really comprehend it themselves” (p. 77). For some, this realization meant changing their study habits from memorizing to “seeking a more thorough understanding of the material” and applying that understanding to personal existence. In the end, Webb and Grib identified seven principal advantages of student-led discussions:

1. Discussion places more emphasis on comprehension and understanding and less on memorization,

2. In the interaction, students came to see several other points of view,

3. The students’ own ideas were clarified in the process of discussing with others,

4. The discussions forced students to think and organize their ideas,

5. Students were more actively involved in their own learning,

6. The discussions forced more thorough preparation than regular class meetings, and

7. The discussions led to a greater interest in the subject matter.

Crowdsourcing feedback in online learning contexts. Beyond student-led discussions, more recent studies corroborate that peer marking results in statistically significant improvement in students’ subsequent work (Forbes & Spence, 1991; Hughes, 1995; Cohen et al., 2001; Rust, 2002 as cited in Rust, O’Donovan & Price, 2005). Peer assessment has also made the digital shift and shows promise in online learning contexts: “one of the strategies that can improve the quality of education, particularly in web-based classes, is electronic peer review. When students assess their co-students’ work, the process becomes reflexive: they learn by teaching and by assessing” (Nagel & Kotze, 2010). This is the spirit of the Community of Inquiry’s conception of Teaching Presence, which “implies that everyone in the community is responsible for providing input on the design, facilitation, and direction of the teaching process” (Vaughn et al., 2013, p. 87). Technological advances enable learning to transcend time and place, and they support peer assessment by eliminating challenges students face in managing their day-to-day lives. “In a blended community of inquiry, one of the biggest challenges of peer assessment activities can be finding a convenient place and time for all students to meet outside the classroom,” a problem solved by leveraging collaborative tools providing 24-7 access in hyper-connected environments.

Feedback, conceptualized as information provided by an agent about aspects of one’s academic performance, is one of the most important factors influencing learning (Hattie & Timperley, 2007). And yet, “it is often not possible to provide feedback that is both detailed and prompt, thus limiting its effectiveness in practice” (Sun, Harris, Walther & Baiocchi, 2015). Peer assessment has been suggested as a method to crowdsource feedback in large online learning contexts because crowdsourcing has “been applied to otherwise intractable problems with surprising success” and would provide “as many graders as students, enabling more timely and thorough feedback” in a number of settings, such as MOOCs, where it would otherwise be impossible (Sun et al., 2015). As Sun et al. (2015) plainly state:

peer assessment is a workable solution to the problem of feedback; it reduces the burden to the instructors with minimal sacrifice to quality. On top of this, it has been conjectured that students also learn in the process of providing feedback [emphasis added]. If true, then peer assessment may be more than just a useful tool to manage large classes; it can be a pedagogical tool that is both effective and inexpensive (p. 2).
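
To make the mechanics of crowdsourced grading concrete, here is a minimal sketch in Python of how several peer grades for one submission might be combined. This is not drawn from Sun et al.; the data and function names are illustrative assumptions, and the median is just one simple, outlier-resistant choice of aggregate.

```python
from statistics import median

# Illustrative data only: each submission has been scored by several peers.
peer_scores = {
    "submission_01": [72, 75, 40, 78],  # includes one unusually harsh grader
    "submission_02": [88, 85, 90],
}

def aggregate(scores):
    """Combine peer grades with the median, which damps outlier graders."""
    return median(scores)

for submission, scores in peer_scores.items():
    print(submission, aggregate(scores))
```

More elaborate schemes weight graders by their agreement with instructor-scored reference answers, but even a simple robust aggregate like this suggests why “as many graders as students” can yield usable marks.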

MOOCs present a problem of scale, and peer assessment could unify Peters’ vision of large-scale education with new interactive technologies, which work best in smaller, more intimate environments. Guri-Rosenblit (2014) writes that the industrial mode of operation has not yet proven compatible with the new digital technologies.

Efficient online communication is, by its very nature, labour intensive.  The industrial model is based on the notion of a small number of academics who are responsible for developing high-quality materials for large numbers of students.  Obviously, small numbers of academic faculty are unable to interact with thousands or even with hundreds of students (p. 115). 

The effective utilization of peer assessment may allow small numbers of faculty to design interactive learning environments with hundreds or thousands of students using learners themselves as the prime mode of interaction.  The advantages of peer assessment include providing larger quantities of feedback than the instructor could provide, in a timely and authentic fashion that resembles professional practice where providing and receiving feedback from work colleagues is a common activity (van der Pol, van den Berg, Admiraal & Simons, 2008).

Not everyone believes this. Peer assessment has its detractors and its challenges. Downes (2013) describes some of the ugliest manifestations of peer assessment as “the blind leading the blind,” where students reinforce each other’s misconceptions, or “the charlatan,” where students who are not subject matter experts convince other learners of their expertise. Peer assessor competence, “the blind leading the blind,” presents two challenges. First, students may not possess the subject matter expertise to fairly assess their peers. Second, learners may possess a skill deficiency, because students “typically have no experience” in peer assessment, “which breeds inconsistent subjective evaluation” (Luaces, Alonso-Betanzos, Troncoso & Bahamonde, 2015). Instructing students in how to give and receive feedback is an important part of the teaching and learning process (Barber, King, & Buchanan, 2015), and the role of positive, affective feedback in peer assessment is inconclusive. Stiggins (1999) suggests that planning for positive feedback can help students “succeed early and often”; yet other research shows students ignore positive feedback.

Designing peer assessment interactions for maximum impact.

Peer assessment is far from a novel concept, but to this point it has only been conjectured that it is a reflexive process. Peer assessment “is an arrangement for learners to consider and specify the level, value, and quality of a product or performance of other equal-status learners” (Topping, 2009). In addition to assessing peers’ work, revising one’s own work after engaging with peer feedback is regarded as the other important activity for learning reflection (Smith, Cooper, & Lancaster, 2002), particularly in online environments where “a growing number of educators have tried to utilize Internet-based systems to facilitate the process of peer assessment” (Chen & Tsai, 2009).

Falchikov and Goldfinch (2000, p. 315 as cited in Falchikov, 2004) suggest peer feedback judgments are most effective when they are based on well-understood criteria of academic products. The potential benefits of peer assessment are also maximized when there is a deliberate attempt to build assessor proficiency through instructional interventions, such as providing specific assessment criteria and examples of how to compose valuable peer feedback messages (Gielen & de Wever, 2015). Peer assessment is most effective when learners are prepared for assessment through the use of marking exercises (Rust, 2002). At a bare minimum, learners should be involved in a short intervention where they are exposed to the assessment criteria, model answers, and examples of meaningful feedback messages (Gibbs, 1992, p. 17 as cited in Rust, 2002).

As Falchikov (2004) has enumerated, there are several key variables known to affect the outcomes of peer assessment, including design, population characteristics, what is being assessed, the level of the course, how the assessment is carried out, and the nature of the criteria used in the assessment process. All of these instructional variables suggest that learning is fundamentally situated, and that peer assessment, as a pedagogical approach, is fundamentally constructivist in nature, where “meaning is understood to be the result of humans setting up relationships, reflecting on their actions, and modeling and constructing explanations” (Fosnot, 2005, p. 280). This is important to keep in mind because not only will the variables enumerated above affect the outcomes of peer assessment, but so will the structure and type of feedback provided to the learner.

Cheng, Liang and Tsai (2015) divide peer feedback messages into three types: affective (comments providing support, praise, or criticism), cognitive (comments focusing on the correctness of the work or giving guidance for improvement), and metacognitive (comments about verification of knowledge, skills or strategies). In their investigation of writing performance in an undergraduate context, cognitive feedback messages were more helpful than affective or metacognitive feedback; cognitive feedback provides explanation or elaboration of the problems identified or the suggestions offered. The quality of a feedback message can also be judged by its content. The content of an effective feedback message should provide both verification and elaboration. Verification is described as “a dichotomous judgement to indicate that a response is right or wrong,” and elaboration is the “component of the feedback message which contains relevant information to help the learner in error correction” (Hattie & Gan, 2011, p. 253 as cited in Gielen & de Wever, 2015).
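
For readers who want to analyze feedback at scale, here is one minimal way this coding scheme might be operationalized in Python. The field names and the example message are my own illustrative assumptions, not instruments from Cheng, Liang and Tsai (2015) or Gielen and de Wever (2015).

```python
from dataclasses import dataclass

@dataclass
class FeedbackMessage:
    text: str
    ftype: str        # "affective", "cognitive", or "metacognitive"
    verifies: bool    # judges the response right or wrong (verification)
    elaborates: bool  # explains the problem or suggests a fix (elaboration)

    def is_effective(self) -> bool:
        # Per the criteria above, an effective message does both.
        return self.verifies and self.elaborates

msg = FeedbackMessage(
    text="Your claim in paragraph two is unsupported; cite the week 3 readings.",
    ftype="cognitive",
    verifies=True,
    elaborates=True,
)
print(msg.is_effective())  # True
```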

Assessees, the other half of the peer assessment transaction, need to be able to question the assessor’s peer feedback and have the opportunity to make changes accordingly, choosing whether or not to follow the assessor’s advice in order to improve the quality of their academic performance (Hovardas et al., 2014). Strangely enough, previous research suggests both positive and negative feedback can stimulate negative outcomes: positive feedback may cause learners to “rest on their laurels,” and negative feedback may cause learners to give up rather than double their efforts (Sun et al., 2015). This discussion highlights that, for peer assessment to achieve its impact, attention needs to be paid to the structure and support needed for an assessor to generate high-quality peer feedback (Hovardas, Tsivitanidou & Zacharia, 2014), and for assessees to be able to engage, reflect, and revise their work. Without accounting for the structural components of the peer assessment process, peer assessment is less likely to produce a learning environment where students are teaching students and more likely to create the conditions where the blind are leading the blind.

The gray areas of peer assessment. Despite claims that peer feedback can be an effective and inexpensive pedagogical approach, there remains significant mystery about how, why, and when peer assessment works. Peer assessment is a complex phenomenon, and a literature review of peer assessment in online learning contexts highlights three intertwined research gaps. In peer assessment studies, there exist

1. an unclear understanding of what takes place during the peer assessment process that contributes to learning,

2. a general lack of quantitative studies analyzing the impact of feedback on academic performance, and

3. very little research as to how positive/negative feedback impacts learners during the learning transaction.

Successful peer feedback “is dependent on interrelated factors including feedback quality, competence of assessors, perceptions of the usefulness and importance of feedback” (Demiraslan Cevik, Haslaman & Celik, 2015). A mini research agenda for peer assessment in communities of inquiry and MOOCs includes examining how the types of feedback (affective or cognitive) affect students’ performance (Nelson & Schunn, 2007) and motivation, and why some students cannot perceive the benefits of peer assessment (Cheng et al., 2015). Another research direction is how to build assessor competence in participatory learning environments. Much is unknown about how feedback affects motivational variables, which “deserve further exploration” (Van Zundert, Sluijsmans, Konings & van Merrienboer, 2012), and about why students decide to use or ignore specific feedback comments. Demiraslan Cevik et al. (2015) suggest that “the relationships between the nature of group dynamics and the acceptance and use of feedback merit further exploration” in participatory learning environments where peer assessment is given from one learning group to another rather than on an individual basis, exploring how group learning dynamics affect the acceptance or rejection of peer assessment.

The importance of understanding peer assessment in terms of learning analytics.

Gasevic, Rogers, Dawson and Gasevic (2016) “posit that learning analytics must account for (instructional) conditions in order to make any meaningful interpretation of success prediction” [emphasis in the original]. In their 2016 study, feedback was a type of trace data collected in only one of the nine courses under investigation, and in that case the feedback tool was used to ask students about their study habits online and about the value of quizzes. It was used primarily for question and answer, not peer feedback. This highlights a lack of research on peer feedback as an instructional condition from a learning analytics perspective, and it confirms their observation that there is a “need to consider instructional conditions in order to increase the validity of learning analytics findings.”

These potentially important differences in peer assessment design have not been fully explored from a learning analytics perspective. Gasevic et al. (2016) also suggest that “learning analytics has only recently begun to draw on learning theory and there remains a significant absence of theory in the research literature that focuses on LMS variables.” They suggest there are distinctive elements in courses, such as peer feedback, that determine learning management system (LMS) use, and they ground learning analytics approaches in Winne and Hadwin’s constructivist learning theory, where learners construct knowledge using tools (cognitive, physical, and digital) to operate on raw information (e.g., readings given by the course instructor or peer assessment artifacts) to construct products of their own learning (Winne, 1996; Winne, 2011; Winne & Hadwin, 1998 as cited in Gasevic et al., 2016). These products can be evaluated with respect to internal standards, such as time, or external standards (e.g., rubrics used for grading and/or structured peer feedback scripts). Gasevic et al. (2016) point out that learners are active agents in their own learning:

As agents, learners make decisions about their learning in terms of choices of study tactics they will apply to evaluate their learning products against.  Decisions made about learning are influenced by conditions, which can be internal (motivation) and external (learning task grading policy).

It will be essential to understand the instructional conditions in order to make any sense of the patterns that exist within social learning analytics.

Social learning analytics

make use of data generated by learners’ online activity in order to identify behaviours and patterns within the learning environment to signify effective process. The intention is to make these visible to learners, to learning groups and to teachers, together with recommendations that spark and support learning. In order to do this, these analytics make use of data generated when learners are socially engaged. This engagement includes both direct interaction – particularly dialogue – and indirect interaction, when learners leave behind ratings, recommendations, or other activity traces that can influence the actions of others (Shum & Ferguson, 2012, p. 10).  

 

Most modern learning management systems (LMS) come with a built-in peer assessment tool that automatically distributes anonymous student responses to peer graders, enabling easy crowdsourcing of an effective pedagogical approach. The ability to fully make sense of the resulting data, however, will require the structured instructional conditions outlined above in order to best understand learners’ online activity and learning behaviours.
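
As an illustration of what such a distribution step involves, the following Python sketch shows one plausible assignment strategy. It is not the algorithm of any particular LMS; the function name, parameters, and the ring-based design are assumptions made for the example.

```python
import random

def assign_peer_graders(student_ids, k=3):
    """Shuffle students into a ring; each grades the next k submissions.

    The ring structure guarantees nobody receives their own work and that
    every submission gets exactly k graders (requires k < len(student_ids)).
    """
    ids = list(student_ids)
    random.shuffle(ids)
    n = len(ids)
    return {
        ids[i]: [ids[(i + offset) % n] for offset in range(1, k + 1)]
        for i in range(n)
    }

assignments = assign_peer_graders(["s1", "s2", "s3", "s4", "s5"], k=2)
for grader, assignees in assignments.items():
    print(grader, "grades submissions by:", assignees)
```

Anonymity then reduces to showing graders only a submission identifier rather than the author's name; the shuffled ring keeps the pairings unpredictable from one assignment to the next.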

Learning analytics has been defined as “measuring, collecting, analysing and communicating data about learners and their contexts with the purposes of understanding and optimizing learning in the context in which it takes place.”  Learning analytics “should support students’ agency and development of positive identities rather than predict and determine them,” with the goal of providing a basis for effective decision making regarding pedagogical design (University of Bristol, 2013).  The potential of learning analytics resides in its ability to “combine information from multiple and disparate sources, to foster more-effective learning conditions in real-time” (Booth, 2012).  Learning analytics approaches typically rely on data emanating from a user's interactions with information and communication technologies (ICTs), such as LMS, student information systems and/or social media.  For example, the trace data (also known as log data) recorded by the learning management system, such as Moodle or Blackboard, contains time-stamped events about use of specific resources, attempts, time spent in the production or interaction with peer assessment feedback, the number of discussion messages read and volume of online discussions posted.  Data mining techniques, employing “large amounts of data to support the discovery of novel and potentially useful information” (Piatetsky-Shapiro, 1995 as cited in Shum & Ferguson, 2012), are commonly applied to identify patterns in these trace data (Baker & Yacef, 2009, as cited in Gasevic et al., 2016).
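
As a concrete example of the kind of pattern-finding described above, the following Python sketch tallies per-student event counts from a few made-up trace-data rows. Real Moodle or Blackboard log schemas differ, so the field names here are assumptions for illustration only.

```python
from collections import defaultdict

# Illustrative rows in the spirit of an LMS event log: (timestamp, student,
# event type). Actual exports vary by platform and configuration.
events = [
    ("2016-03-01T09:15:00", "kim", "discussion_post"),
    ("2016-03-01T09:40:00", "kim", "peer_feedback_submitted"),
    ("2016-03-02T20:05:00", "lee", "resource_view"),
    ("2016-03-03T11:30:00", "lee", "peer_feedback_submitted"),
]

# Count each student's events by type -- the simplest kind of pattern one
# might extract from trace data before attempting anything like success
# prediction, and a reminder that the counts only mean something when the
# instructional conditions behind them are known.
counts = defaultdict(lambda: defaultdict(int))
for timestamp, student, event_type in events:
    counts[student][event_type] += 1

for student, per_type in counts.items():
    print(student, dict(per_type))
```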

Identifying and understanding whatever patterns exist in peer assessment or peer feedback trace data will only be enhanced by well-designed and well-structured peer assessment activities that account for the instructional conditions. It has been suggested that “machine ethics, including learning analytics, stand on the cusp of moral nihilism” (Willis, 2014) because the conduct of learning analytics is viewed legalistically rather than through the question, “What does this mean for humanity?” As Willis (2014) suggests, “now is the time to act within frameworks of human autonomy and agency” to help redefine what is learned from past academic failures, to “responsibly innovate knowing that competing values often pervade technological innovations,” and to push for learning analytics interventions that are in the best interests of learners. As the forces of massification move forward, the promise of peer assessment will only be realized if it is also firmly based in effective pedagogical practices, some of which are largely understood, while others are still unknown territory.

Detailed APA citations available upon request.

Assessments in Blended Learning - a short elaboration on the CoI's Triad approach

Recently, a colleague asked me to help them write a chapter on blended learning. Here are my contributions to that chapter, which serve as a quick and dirty guide to understanding what blended learning is, some thoughts about the Community of Inquiry's Triad model of assessment, and some thoughts about the future (and end) of blended learning as a phrase.

It may be that the limited exposure to the power of online learning through blended learning is helping to fuel online learning’s rise in popularity and legitimacy. As Rees (2016) recently opined,

"Old style faculty will become dinosaurs whether they deserve to be or not. That’s why I recently made a commitment to start teaching online, beginning in the fall of 2016. My plan is to create a rigorous and engaging online U.S. history survey course while I am still in a position to dictate terms. After all, if I create a respectable, popular class that takes advantage of the Internet to do things that can’t be done in person [emphasis ours], then it will be harder for future online courses at my university (or elsewhere for that matter) to fail to live up to that example.

At its most basic level, blended learning seeks to take advantage of the Internet to do things that cannot be done in person. Google Docs, Wordpress blogs, and wikis (inside or outside the LMS) make learning more collaborative and the process of learning more visible to the instructor than ever before. Hill (2016) suggests that online learning has firmly entered the mainstream – despite lingering criticisms from those who weigh in on the practice of online learning without any experience of it, especially of online learning contexts that have been well planned and designed with the rigor, engagement, and affordances mentioned by Rees above. If it is true that online learning has firmly moved into the mainstream, it is all the more true for blended learning.

We would argue that blended learning fueled (and continues to fuel) a “pedagogical renaissance” (we refrain from using the term “revolution”). The heated debates that sought to prove (or disprove) the effectiveness of blended learning versus face-to-face learning have caused a deeper exploration of teaching practice, as well as a more complex understanding of how students learn. It has been argued by Jensen, Kummer and Godoy (2015), for example, that the improvements of a “flipped classroom” may “simply” be the fruits of active learning. The flipped classroom is the most basic blend, where recorded lectures, instructional videos and/or animations, or other remotely accessed learning objects or resources are accessed “outside” of class, and when students are “inside the classroom,” their online learning experiences are complemented by active pedagogical approaches, such as problem-based learning, case studies, and peer interaction. Developing meaningful active learning activities is far from simple, but blended learning’s success has certainly caused a fundamental rethinking of course design and of what teaching and learning look like. Further discussion of flipped learning occurs later in this chapter.

In their quasi-experimental study, Jensen, Kummer and Godoy (2015) looked at unit exams, homework assignments, final exams, and student attitudes to compare non-flipped and flipped sections of a course, and they determined that “the flipped classroom does not result in higher learning gains or better attitudes over the non-flipped classroom when both utilize an active-learning, constructivist approach [emphasis ours].” The effectiveness of active learning has been established in studies like this and in four major meta-analyses (Freeman, Eddy, McDonough, Smith, Okoroafor, Jordt and Wenderoth, 2014; Michael, 2006; Prince, 2004; Hake, 1998), which indicate that the key advantage of blended learning is that it enables students to be active and engaged in various ways, depending on the context and design of the course.

The simplest way to understand constructivist teaching is to separate content delivery (transmission) from concept application. What blended learning has done, most of all, is cause a fundamental rethinking of learning-as-delivery, of content as an item of mechanistic transfer, and to refocus discussion on how to design learning environments so that students are offered the best opportunities for engaging at a deep level with the content and with those in the learning environment.  As Feldstein and Hill (2016, p. 26) observe, when “content broadcast” (content attainment) is moved out of the classroom, it provides more space to “allow the teacher to observe the students’ work in digital products, so there is more room to coach students” in the concept application phase of learning. 

In blended learning, technology becomes "an enabler for increasing meaningful personal contact" (Feldstein & Hill, 2016).

The Community of Inquiry's Triad approach to assessment highlights how digital technologies support various forms of assessment. Below are some examples for how these tools have been used to create dynamic and robust assessment approaches.

Clickers. Student response systems (SRS, aka clickers) offer a powerful and flexible tool for teaching and learning. They can be used peripherally or they can take a central role during class, but even with minimal use, significant differences have been found in final grades between sections of the same course (Caldwell, 2007; Lantz, 2010). In our experience with student response systems, clickers can be used to increase student participation, integrate with other commercially provided learning resources to provide useful feedback to instructors on student learning, and increase opportunities for fun through formative assessment practices. In a small community college, instructors used clickers in various ways, including as a comprehensive review at the end of a module, or to have students “get their head in the game” and activate prior learning at the beginning of class with five to ten short questions. Other instructors used them with icebreaker activities to solicit student opinions on controversial topics to which they might be reluctant to admit without the veil of anonymity, as a way to launch discussion. Others used them for team-based games where students competed for top points. Students enjoyed the increased interactivity, and faculty felt more able to assess the learning of the entire class rather than random, individual students. Based on student content attainment, faculty developed remedial lectures on specific elements and were able to reflect on whether or not their instructional approaches were successful. Through think-pair-share (or test – teach – retest), the opportunities for peer instruction are endless, as students teach each other concepts through discussion.

Wikis. Wikis can be used to assist in group assessment.  Group assessment, as Caple and Bogle (2013) point out, is “a fraught yet increasingly popular, indeed necessary, method of undergraduate assessment.”  Group assessment is often necessary because of the massification and “scalability” of higher education, where some undergraduate courses have upwards of 1,000 students, and it has become popular because the collaborative nature of the assessment task provides the opportunity for students to develop interpersonal skills such as leadership and communication.

Blogs. The term “blog” was coined in the late 1990s, making blogs one of the older forms of user-generated content. Blogs are so “old school” that they have given way to other social media platforms, such as Twitter (micro-blogging), and the popular blogger Seth Godin (sethgodin.typepad.com, June 1, 2016) has suggested that Google and Facebook no longer want people to read blogs because they are free, uncensored, and exist outside their walled gardens. Still, blogs remain an effective strategy for a form of student engagement that fosters collective and reflective learning (Mansouri & Piki, 2016). While students primarily use blogs for entertainment and personal fulfillment, it has been suggested that “we would be more effective teachers if we helped students solve their real-world personal, professional, and academic writing problems by building on existing practices, including the flexible use of the composing technologies that permeate their everyday lives” (Moore et al., 2016). Blogging remains a powerful option for formative assessment, whether it takes place within the closed environment of the LMS or out on the open web, as a way to facilitate collaborative learning, reflection, and social support. As Garrison and Akyol (2009) suggest, this venerable Web 2.0 tool goes beyond simple interaction, giving learners the opportunity to engage in purposeful discourse to construct meaning, share meaning, and consolidate understanding at both personal and conceptual levels. Blogs may produce the greatest benefits for students who are shy, introverted, or naturally reflective (Ciampa & Gallagher, 2015).

This brief look at clickers, wikis, and blogs highlights the dynamic and flexible digital tools that can be used to create sophisticated blended learning environments that enable faculty and learners to engage with and critically monitor and assess the quality of learning taking place in any form of educational provision. These standard tools are now being complemented by other social networks, such as Twitter, to expand assessment approaches. Twitter has been used to enhance social presence in large-enrollment online courses (Rohr & Costello, 2015) and to increase concept retention, course enjoyment, and student achievement by creating avenues for student engagement that “transcended traditional classroom activities” (Junco, Heiberger & Loken, 2011). Others (Barnard, 2016) have taken advantage of Twitter’s strict character limit and the imposed brevity to teach creative writing and storytelling skills in new digital environments. As mentioned earlier, the affordances of these tools provide limitless opportunities to invent more creative assessment approaches.

Flexible learning.

Perhaps the most promising aspect of flexible learning and assessment is contained within the concept of differentiated assessment, which is “an educational structure that seeks to address differences among students by providing flexibility in the levels of knowledge acquisition, skills development, and types of assessment undertaken by students” (Varsavsky & Rayner, 2013), rather than a “middle of the cohort” teaching approach. There are significant challenges and opportunities in giving students choice over how they will provide evidence of learning. Again, massification, scalability, reduced funding, and developing rubrics that can be fairly and meaningfully applied in a high-choice, high-variability environment are all significant challenges. However, differentiated approaches to assessment in higher education provide “perhaps the most genuine framework for student learning” (Varsavsky & Rayner, 2013) because they recognize that learning is, by its very nature, an individual experience. Differentiated assessment also applies sound adult learning principles, such as giving students, particularly adult students, control over how they will be assessed. By participating in the creation of their assessment, learners become co-creative participants in their experience, affording them the opportunity to generate ideas about the most meaningful, valuable, and hands-on way to demonstrate learning.

The flipped classroom.

In addition to creating more active learning opportunities for concept application, the flipped classroom confers on students a level of temporal freedom (Anderson, 2003). In an ongoing research project at a small community college, when students were interviewed about the benefits of a flipped classroom environment, they responded positively to the sense of control they had over the instruction process. They studied in the kitchen, in cafes, or on their beds. They could rewind (or fast-forward) parts of the lecture, re-watch sections they found unclear, and they had time to process, reflect, and develop more meaningful questions. They also had control over their energy. As one student put it, “In lecture, by the end of class, I didn’t want to ask questions because I just want to get out of there, and I know I didn’t absorb half of it because I’m tired.” With the flipped classroom, “I can do it when I know I am going to be able to focus.” This is a perfect expression of what Anderson called temporal freedom, and if students can interact with lectures multiple times, at times when they feel ready for learning, this will probably be evidenced in their assessments.

The new normal and the end of blended learning.

Guri-Rosenblit (2014) points out that “One of the main conclusions of the OECD (Organization for Economic Cooperation and Development, 2005) study was that most higher education institutions use online teaching to enhance classroom encounters rather than to adopt a distance teaching pedagogy” (p. 109). She pronounced a meld of systems: “The clear and distinct function of distance education providers for over 150 years is not clear and distinct anymore” (p. 109), because any bricks-and-mortar institution can extend itself to students outside its on-site campus and offer online courses in some format to learners regardless of whether they study on-campus or off-campus. Has this become the “new normal” in the decade since the OECD study? While more and more “blended learning” research continues to appear, its implementation seems to be comfortably embedded in global teaching practice.

As far back as 2006, Educause's Center for Applied Research (Albrecht, 2006) wrote that

"the battles over the efficacy of residential learning versus online learning have disappeared with the quiet adoption of blended learning. While an occasional attack surfaces, the attraction of mixed delivery mechanisms has led to implementation, often without transcripting and virtually without announcement (p. 2). Looking to the future, we echo Ololube (p.52), who in his chapter “A vision for the future of blended learning," in Advancing technology and educational development through blended learning in emerging economies (2013), concluded that:

"Blended learning combines mobile learning and (flipped) classroom sessions. The terms m-learning, e-learning, and blended learning have disappeared. People are learning with whatever device is available and the learning systems are flexible enough to allow everybody to start at the appropriate level. (p. 52)"

Similarly, Ontario’s distance consortium, Contact North (2012), when looking at the long-term strategic perspectives among Ontario college and university presidents, suggests that blended learning works because it is evolving naturally, because students like and demand it, and because faculty members find that it enhances, rather than replaces, their traditional teaching methods. In fact, Contact North suggests, “it is highly likely that such terms as 'online,' 'hybrid,' or 'blended' learning will disappear in the near future as the technology becomes so integrated into teaching and learning that it is taken for granted" (2012, p. 10).

Conclusion.

Instructors and instructional designers need to be clear about the assessment choices they make: do they align with the learning outcomes and with one’s teaching philosophy? Is the choice of a clicker, a wiki, or a blog the most appropriate assessment method? Do these affordances enable increasingly meaningful personal and interpersonal contact, or greater learner choice and control? Are they selected to reduce grading loads, which many deem a perfectly reasonable factor upon which to base an assessment decision? While there is no recipe for the perfect blend, these are the thoughtful considerations necessary as one rethinks assessment strategies in a blended learning environment.

Detailed APA citations available upon request.

 

A Different Approach to Strategic Planning Using Appreciative Inquiry

The interview describes the integration of Appreciative Inquiry (AI) into the strategic planning cycle at Medicine Hat College. Appreciative Inquiry can play a powerful role in initiating and managing change through the process of asking generative questions. AI increases the possibility of introducing successful and transformative change at all levels within an organization. The interview was conducted in December 2015 by Innovations in Practice Editor Jennifer Easter.

DOI: http://dx.doi.org/10.21083/partnership.v11i1.3391

Appreciative Inquiry as leadership and driving organizational change


Change processes will be driven and managed by leaders who are convinced there are better approaches, who are willing to learn, and who truly believe in the power of the positive. Appreciative Leadership, which grows out of the appreciative tradition, is “unique among leadership theories both past and present” through its focus on “strengths-based practice” and the “search for the best in people and organizations” as a way to create “organizational innovation and transformation” (Orr & Cleveland-Innes, 2015). This paper discusses how Appreciative Inquiry and Appreciative Leadership can be used to surface organizational hopes and dreams, create community, and build the future world we want to live in, where libraries are widely understood as essential services creating strong and resilient learning communities.

From the American, P3: The Independent Voter in the US Presidential Race

Once a month, or every couple of weeks, I compile my reading and thinking about the US Presidential race. Here is the third installment of From the American. The next couple are going to be on whether or not voters are fools, how Trump seems to be immune to political gaffes, and a personal reflection on Hillary.

High impact educational practices: A closer look

On April 15, 2016, I was part of the organizing committee and a presenter at Medicine Hat College's Liberal Education Symposium: Tensions and Possibilities in Liberal Education. Here are the slides with lecture notes from this 20-minute talk. The highlight for me was that much of my talk reinforced and jumped off from Dr. Jim Zimmer's keynote. Another highlight was Dr. Karim Dharamsi's question about whether or not high impact practices represent a better approach to doing a bad thing. I've thought a lot about his question since then, and I am reminded of Nietzsche's observation that humanity has no goal. What kind of citizen are we hoping leaves our doors? The citizen who donates to the food bank? The citizen who understands the complexities of food security and works within the existing power structures for a more equitable state? Or the revolutionary who suggests that poverty and hunger are issues of justice and morality, that this unjust state of affairs should no longer exist, and that what is needed is structural transformation?

I'm not sure, but one thing does seem certain: traditional democratic values seem to be getting lost, and along with them, the idea that education's ultimate objective is to support democracy through the cultivation of citizenship.

From the American, Part 1: James Madison's Worst Nightmare

Here is the first political column I wrote for the Medicine Hat News.  My "vision" for the series is some "retro-punditry," back to a time when people actually listened to one another and engaged in civil dialogue and lived by the motto, "I disagree with what you have to say, but I'll defend to the death your right to say it." I know there was no Golden Age, but there was a time when it wasn't quite like this.