Assessment Strategies for Online Learning: Engagement and Authenticity

When I was in college, I dreamed of writing a book. It was going to be this epic, mystical-realist bildungsroman about the time I was arrested for a crime I didn't commit. It was going to be hallucinogenic and psychedelically beautiful, in the vein of Huxley's Island, Hesse's Demian, and Thompson's Fear and Loathing. I never finished it, and probably shouldn't have.

But this one did get finished!

[Book cover image: Conrad and Openo, Assessment Strategies in Online Learning Contexts]

It "hits the shelves" in Spring 2018. One of the reasons it is so exciting to publish with Athabasca University Press is that they believe in open access, so it will be freely available on the web, as well. It was such an honour and privilege to work with Dr. Conrad on this book. It wasn't always easy, but I am thrilled with the final product. 

Bridging the divide: Leveraging SoTL for quality enhancement

I am very excited to share this piece of research. It was put together by a great group of folks who worked together in the Society for Teaching and Learning in Higher Education's (STLHE) collaborative writing groups. A special issue of The Canadian Journal for the Scholarship of Teaching and Learning will come out soon containing all of the collaborative writing group articles. I am very thankful for the opportunity to participate in this group, and I hope that some of the recommendations to recognize the legitimacy of SoTL within Canadian provincial quality assurance frameworks will come to pass as quality assurance in higher education continues to evolve.

This paper argues a divide exists between quality assurance (QA) processes and quality enhancement, and that the Scholarship of Teaching and Learning (SoTL) can bridge this divide through an evidence-based approach to improving teaching practice. QA processes can trigger the examination of teaching and learning issues, providing faculty with an opportunity to systematically study their impact on student learning. This form of scholarship positions them to take a critical and empowered role in the continuous improvement of student learning experiences and to become full participants in the goals of QA structures. A document analysis of current provincial QA policies in Canada reveals a gap between how teaching and learning challenges are identified and how those challenges are studied and acted upon. A QA report is not the end result of an assurance process; it is the beginning of a change process that is intended to lead to improvements in the student learning experience. The authors consider how SoTL provides a research-minded approach to initiating continuous improvement within a QA framework and offer considerations for how it might be integrated into evolving provincial frameworks.

Openo, J., Laverty, C., Kolomitro, K., Borin, P., Goff, L., Stranach, M., & Gomaa, N. (in press). Bridging the divide: Leveraging the scholarship of teaching and learning for quality enhancement. The Canadian Journal for the Scholarship of Teaching and Learning.

Appreciative Leadership: A Cure for Today's Leadership Crisis

This paper was originally written for EDDE 804, Leadership and Project Management in Distance Education.  The assignment called for students to present and review a leadership theory.  I chose Appreciative Leadership because of my powerful experiences with Appreciative Inquiry and Appreciative Coaching, and because I think Appreciative Leadership may be a cure for today's leadership crisis. 

There is a leadership crisis. Kellerman (2012) suggests “leadership is in danger of becoming obsolete” (p. 200) because of dominant cultural constructions of leadership.  These constructs, promoted by the leadership industry, hold that the wider world matters only insofar as it pertains to the narrow world; this insular leadership focuses solely on financial performance, disregarding any external damage caused.  According to Kellerman, leadership education programs assume leadership can be taught quickly and easily, and that leadership can be taught in silos with a curriculum that concentrates only on what is applicable.  Followership is unimportant, bad leadership is unimportant, and not enough attention is paid to slowly changing patterns of dominance (pp. 191-195).

Gronn (2003) also suggests conventional constructs of leadership “are in trouble” (p. 23) due to the oversimplified leader-follower binary.  Avolio, Walumbwa and Weber (2009) add that there is a growing sense that historical models of leadership are not relevant to today’s digital/knowledge economy.  The greatest indication of the leadership crisis, however, is that leadership theories and leadership development programs have not enabled leaders to do what leaders need to do.  If the essence of leadership is influencing change (Uhl-Bien, 2003), and “80 percent of organizational change initiatives fail to meet their objectives” (Black, 2014, p. 3), conventional constructs of leadership are ineffective.

Kellerman (2012) suggests a perfect world would contain an overarching leadership theory with application to leadership practice (p. 195).  Appreciative Leadership may provide that. Whitney, Trosten-Bloom and Rader (2010) define Appreciative Leadership as

a way of being and a set of strategies that give rise to practices applicable across industries, sectors, and arenas of collaborative action. . . Appreciative Leadership is the relational capacity to mobilize creative potential and turn it into positive power – to set in motion positive ripples of confidence, energy, enthusiasm, and performance – to make a positive difference in the world (p. 3).

Gronn (2003) suggests that to study leadership, one should investigate the outcomes of workplace practices and then work backwards.  This can be accomplished by examining examples where appreciative practices have been employed.

Building organizational resilience using Appreciative Inquiry

Attached below are the slides from my presentation at the Family and Child Support Service Agencies of Alberta's Power of Prevention conference on November 24, 2016.

Session description: Best estimates suggest 60-80% of strategic change initiatives fail. Leaders can increase their odds using Appreciative Inquiry (AI). Appreciative Inquiry is unapologetic in its focus on the positive, believing communities can be strengthened through collaborative inquiry as a method to turn problems into transformation. Emerging from positive and sports psychology, Appreciative Inquiry seeks out what is working well within organizations in order to create greater success. AI is a high-engagement process where the members of an organization co-create their preferred future together through appreciative interviews, re-framing, and the development of possibility statements. This highly interactive workshop introduces a new method of strategic planning that is perfectly suited for a time of rapid change and change fatigue. 

From the American, Part 8: The American Crisis Revisited

I’m eating crow and a slice of humble pie with some old drinking buddies — anger, disbelief and fear — feeling like I did after the Supreme Court cancelled the Florida recount, meaning Gore “lost.” A numb hopelessness won’t let go. But it’s only Day 1. Lincoln is whispering in my ear, “we must not be enemies,” and passion must not “break our bonds of affection.” He’s right, and my better angels will reappear.

So I do what I did in 2000. I read Thomas Paine’s The American Crisis, words that gave the colonists and the Continental Army hope when read to them before the Battle of Trenton on Dec. 23, 1776.

“These are the times that try men’s souls. The summer soldier and the sunshine patriot will, in this crisis, shrink from the service of their country; but he that stands it now, deserves the love and thanks of man and woman. Tyranny, like hell, is not easily conquered; yet we have this consolation with us, that the harder the conflict, the more glorious the triumph … My secret opinion has always been that God Almighty will not give up a people … or leave them to perish … Neither do I suppose that He has given us up to the care of devils … Let them call me rebel, but I should suffer the misery of devils were I to make a whore of my soul by swearing allegiance to one whose character is that of a sottish, stupid, stubborn, worthless, brutish man.”

I refused to be a sunshine patriot in 2000, and I won’t be one now. Donald Trump is my president, but I will not make a whore of my soul and be happy about it. Now is the time for faith that, despite evidence to the contrary, God has not given us up to the care of devils, and the course is to recommit to working for human decency by recognizing that a majority of Trump’s supporters are not members of the Ku Klux Klan. I’ve eaten with Trump voters at barbecues, some are members of my family, and each one I know is a hard-working American disappointed by a system that has dismissed and demeaned them.

I don’t know what to do right now, other than resist demonizing my fellow citizens. As I reflect, I figured Hillary Clinton would win (not because I wanted her to — like so many others, I fell in love with Bernie) because the Republican Party was imploding. Feuds between Ryan and Trump, McCain and Trump, and Pence and Trump, all indicated a party in disarray, which is a party that typically loses. Obama’s approval ratings were strong, which is a good sign for the party possessing the presidency, and Michelle delivered the best speech of the campaign. Trump didn’t represent classic conservative views of small government, held a confusing stance on abortion, and ran a weird campaign. And I wasn’t alone in thinking it was impossible that a 3 a.m. tweeting, Putin-admiring, tax-dodging, pathologically lying racist woman-hater would win.

It is now obvious that it’s not the Republican brand that’s in trouble; it’s the Democratic Party that’s in shambles, and it can’t blame this on Trump or the FBI. The Democratic National Committee actively worked against Sanders and chose a candidate with a history of scandal, whose foundation may have accepted donations from terrorist-sponsoring countries. Republicans now control two-thirds of state houses, a majority of governorships, and hold a historic margin in the House of Representatives. This should sit heavily on Democratic leaders, and hopefully, this will be the last we see of the Clintons, who have repeatedly failed the American people and destroyed faith in the Presidency. Just as in 2000, when Gore’s loss had more to do with a Clinton impeachment than with the hanging chads in Florida, Democrats have no one to blame but themselves, and only time will tell whether they realize it.

From the American, Part 6: The Myth of America

I submitted this to the Medicine Hat News and realized, only after publication, that the final part didn’t appear. The ending should read: Clinton better represents how Americans truly see themselves. Most Americans are not ready to see America as a third-world country, nor are they willing to give up on their hard-fought path of progress. America is far from perfect, and Clinton is far from a perfect candidate, but she better represents America's enduring hopes for equality and democracy, certainly more than Trump.

This piece was submitted before the Access Hollywood revelation, which appears to sound the death knell for a candidacy that should have been dead a long time ago.

From the American, Part 5: The race nobody wins

Here is the latest installment of my observations on the American presidential race. August and September were ugly, and as one of the commentators at FiveThirtyEight put it, you never can underestimate the media. As media outlets focus on the "dead heat" in the polls and the birther controversy (which really is NOT a controversy), serious policy questions about economic inequality, foreign policy, gun control, gender, Black Lives Matter, and racism in America go unexamined.

Assessment - the heart of the student experience

When I first attended my doctoral orientation, I shared my keen interest in student assessment, and one of the faculty in the program asked a key question: assessment of what?  That's really the fundamental question, as the focus in higher education seems to have landed squarely on assessment. And the intense scrutiny is coming from different places, some enduring, some new.

The importance of assessment has long been recognized. Assessment has been described as “the heart of the student experience” and is “probably the single biggest influence on how students approach their learning” (Brown & Knight, 1994, cited in Rust, O’Donovan & Price, 2005).  Assessment is also highly emotional; students describe it as a process that invokes fear, anxiety, stress, and judgment (Vaughn, Cleveland-Innes & Garrison, 2013, p. 81).  It is fair to say, “nowhere are the stakes and student interest more focused than on assessment” (Campbell & Schwier, 2014, p. 360).

Other key trends in higher education have heightened the focus on student assessment. Notable trends include accountability and the “increasing level of scrutiny applied to their [colleges and universities] ability to capture and report performance outcomes” (Newman, 2015, p. 48).  The need for robust quality assurance processes responds both to the still-lingering perception that online learning is ineffective and to the precipitous increase in online learning, which is becoming recognized as a crucial 21st century skill, not just a mode of delivery.  The growing demographic of adult learners who want to gain the competencies desired by employers has also led to a heightened awareness of the challenges and opportunities in assessment.  A 2015 study from Colleges Ontario shows that 44 percent of current Canadian college students already possess post-secondary experience and return to college for the purposes of finding “that extra piece that makes them employable” or to “upgrade skills in a particular area” (Ginsberg, 2015).

As such, any discussion of assessment has to confront one of the great current debates in higher education. Wall, Hursh, and Rodgers (2014) define assessment as “a set of activities that seeks to gather systematic evidence to determine the worth and value of things in higher education,” including the examination of student learning. They assert that assessment “serves an emerging market-focused university” which has replaced the goals of providing a liberal education, developing intrinsically valuable knowledge, and serving society. The purpose of educational attainment has narrowed to serving society through economic development. This narrow focus has led some to suggest that students “come into play only as potential bearers of skills producing economic value rather than as human beings in their own right” (Barnett & Coate, 2005). This author does not (at this time) take a stance on whether this development is good or bad, but recognizes that learning and assessment are inextricably linked, and that both increasingly focus on skills development. This focus leads directly to assessment because “one of the most telling indicators of the quality of educational outcomes is the work students submit for assessment” (Gibbs, 2010, p. 7).

Assessment, then, provides evidence of the “outcome” in any “outcomes-based” approach to education. In Ontario, for example, “postsecondary learning outcomes are rapidly replacing credit hours as the preferred unit of measurement for learning,” but “the expanded presence of learning outcomes at the postsecondary level has outstripped our abilities to validate those outcomes through assessment” (Deller, Brumwell & MacFarlane, 2015).  Assessment “remains the keystone of the learning outcomes approach,” and assessment practices are increasingly focused on demonstrating acquisition of learning outcomes for the “purposes of accountability and quality measurement,” which is increasingly measured by alignment with market-oriented aims and closing the Canadian “skills gap,” to which Canada loses as much as $24.3 billion in economic activity (Bountrogianni, 2015).  The perspective of students as potential bearers of skills to support economic development drives the move towards authentic assessment, where students provide “direct evidence of meaningful application of learning” (Angelo, 1999; Maki, 2010, as cited in Goff et al., 2015) by using the skills, knowledge, values and attitudes they have learned in “the performance context of the intended discipline.”

And yet, a book on online assessment theory and practice has never been more needed. In the Sloan Online Survey (Allen & Seaman, 2015), the proportion of academic leaders who report that online education is a critical component of their long-term strategy grew to 70.8% in 2015 (p. 4), an all-time high. The growth rate of distance enrollments has slowed in recent years but continues to outpace the growth rate of the higher education student body in the United States. While faculty perceptions of online learning lag behind those of administrators, Contact North’s online learning wish list for 2016 includes a wish that “we stop debating about whether or not online learning is as effective as classroom teaching.” There is a clear evidence base that, at worst, it makes no significant difference, and at best, the affordances of online technology provide some enhancement and opportunities to transform the learning experience and demonstrate learning outcomes. Learners “expect authentic, engaged learning” that involves “a range of different learning activities appropriate to their own learning needs.” According to this report,

the focus [for online learning] has been on courses, programs and learning processes. It has not been on assessment. But that is changing with the development of new methods of assessment involving simulations and games: adaptive learning engines which enable assessment as learning is taking place; new methods for developing assessment tools using machine intelligence; and new developments in ensuring the security and integrity of online assessments.

Contact North claims “we are approaching an era in which new thinking about how we assess knowledge, competencies and skills start to bear fruit.” This new era includes badges, verified learning certificates, and micro-credentials, as well as Prior Learning Assessment and Recognition (PLAR) to facilitate student mobility.  In this new era, assessment will also become a central component of any definition of quality. Within the Ontario Quality Assurance Framework, for example, “each academic unit is asked: What do you expect your students to be able to do, and to know, when they graduate with a specific degree? How are you assessing students to make sure that these educational goals have been achieved?” (p. 12).  Assessment flows directly from learning outcomes and its importance in the educational transaction has grown. The development of new programs in Ontario requires identification of learning outcomes and the “methods to be used to assess student achievement of those outcomes.” The strengthening focus on quality, and new opportunities afforded by technology, certainly demand a fresh look at assessment.

This book works to move towards this new era, and away from the current era in which the “field of educational assessment is currently divided and in disarray” (Hill & Barber, 2014). This is not an entirely new claim. More than two decades ago, Barr and Tagg (1995) declared that a shift had occurred in higher education from an instruction paradigm to a learning paradigm and that learner-centered assessment was a central element in this new paradigm (Webber, 2011).

This epochal shift in assessment has moved like a glacier, slowly and yet with dramatic effect. The “traditional view of assessment defines its primary role as evaluating a student’s comprehension of factual knowledge,” whereas a more contemporary definition “sees assessment as activities designed primarily to foster student learning” (Webber, 2011). Examples of learner-centered assessment activities include “multiple drafts of written work in which faculty provide constructive and progressive feedback, oral presentations by students, student evaluations of other’s work, group and team projects that produce a joint product related to specified learning outcomes, and service learning assignments that require interactions with individuals in the community or business/industry” (Webber, 2011). As Webber points out, there is a growing body of evidence from multiple disciplines (Dexter, 2007; Candela et al., 2006; Gerdy, 2002) illustrating the benefits of learner-centered assessment, but these examples “do not provide convincing evidence that reform has actually occurred.”

Perhaps one of the greatest transformations has been the development of the Community of Inquiry framework. While it is not within the scope of this book to give the Community of Inquiry (CoI) model thorough and comprehensive treatment, the CoI’s approach to assessment falls very much in line with the spirit of this new era of assessment.  Within the Community of Inquiry framework, assessment is part of “Teaching Presence,” “the unifying force” which “brings together the social and cognitive processes directed to personally meaningful and educationally worthwhile outcomes” (Vaughn, et al., 2013, p. 12). Teaching presence consists of the design, facilitation, and direction of a community of inquiry, and design includes assessment, along with organization and delivery.  “Assessment very much shapes the quality of learning and the quality of teaching. In short, students do what is rewarded. For this reason one must be sure to reward activities that encourage deep and meaningful approaches to learning” (Vaughn, et al., 2013, p. 42).

In designing assessment through the Community of Inquiry lens, it is essential to plan and design for the maximum amount of student feedback. “The research literature is clear that feedback is arguably the most important part in its potential to affect future learning and student achievement” (Hattie, 1987; Black & Wiliam, 1998; Gibbs & Simpson, 2002 as cited in Rust, et al., 2005).  Good feedback helps clarify what good performance is, facilitates self-assessment and reflection, encourages teacher and peer dialogue around learning, encourages positive motivational beliefs and self-esteem, provides opportunities to close the gap between current and desired performance and can be used by instructors to help shape teaching (Nicol & Macfarlane-Dick, 2006 as cited in Vaughn, et al., 2013, p. 82).

Planning for positive feedback can help students “succeed early and often.”  Positive feedback environments can help students “get on winning streaks and keep them there” so that this emotional dynamic feeds on itself and students get into “learning trajectories where optimism overpowers pessimism, effort replaces fatigue and success leaves failure in its wake” (Stiggins, 2008, p. 37). 

Direct instruction, part of teaching presence, is most effective when the feedback is encouraging, timely, and specific. This further confirms that irrespective of teaching environment, “instructors who take the time to acknowledge the contributions of students through words of encouragement, affirmation or validation can achieve high levels of teaching presence” (Wisneski, Ozogul, & Bichelmeyer, 2015).  In addition to providing feedback, a social constructivist approach requires that students actively engage with the feedback.  “Sadler (1989) identified three conditions for effective feedback. These are (1) a knowledge of the standards; (2) having to compare those standards to one’s own work; and (3) taking action to close the gap between the two” (Rust, et al., 2005).  In order to promote student engagement with feedback, “instructors in a blended community of inquiry are also encouraged to take a portfolio approach to assessment. This involves students receiving a second chance or opportunity for summative assessment on their course assignments” (Vaughn, et al., 2013, p. 93).  Providing multiple opportunities to submit work is highly authentic to real-world work contexts, it encourages students to work to close the gap between current and desired performance, and it exemplifies the spirit of learner-centered assessments. 

Peer assessment is another component of learner-centered assessments (see Students teaching students: Or the blind leading the blind?) and can be a particularly useful approach; “one of the strategies that can improve the quality of education, particularly in web-based classes, is electronic peer review.  When students assess their co-students’ work, the process becomes reflexive: they learn by teaching and by assessing” (Nagel & Kotze, 2010).  A useful model to account for self-reflection, peer assessment, and instructor-led assessment is the Community of Inquiry’s Triad Model (Vaughn, et al., 2013, p. 95).  The triad model also helps identify the most beneficial technologies and interactive platforms to be used.  “Technology-enabled collaborative learning environments can be a rich context for assessing higher order skills, as long as the purpose and design of the assessment is clear about its targets and the assessment tasks are constructed to include technology as part of the collaborative problem solving task and the assessment provides timely useful feedback to teachers and students” (Webb & Gibson, 2015).  To create synergy among the learner, the task, and the technology, the use of technology must directly support the learning outcomes and the real-life nature of the activities (Herrington, Oliver & Reeves, 2006). In other words, the technology needs to be used in an authentic fashion.

In addition to assessment's enduring importance to the student experience and its growing importance to learning outcomes and quality assurance, the Community of Inquiry has also focused on assessment as a research agenda for proof of cognitive presence, the most elusive of all the presences. “Complementing other studies, they (Shea, et al., 2011, p. 109) found little evidence of student engagement at the higher levels of cognitive presence, irrespective of their grades. They propose various explanations for this, including a failure to develop measures of assessment of learning that are meaningful to both students and instructors, and recommend more research exploring correlations between cognitive presence and instructor assessment” (Evans & Haughey, 2014).

Students teaching students: Or the blind leading the blind

This began as a paper for EDDE 802 - Research Methods in Education, and then became a chapter on peer assessment in online learning contexts.

The venerable reputation of peer assessment. The ability of peer-to-peer interaction to enhance learning is well documented. Topping (2009) references George Jardine, professor at the University of Glasgow from 1774 to 1826, who described the methods and advantages of peer assessment of writing.  The French essayist Joubert is credited with the 1842 quote, “To teach is to learn twice” (as cited in Vaughn, Garrison & Cleveland-Innes, 2014, p. 87).  Over one hundred years later, McKeachie (1987) wrote, “The best answer to the question, ‘What is the most effective method of teaching?,’ is that it depends on the goal, the student, the content and the teacher.  But the next best answer is, ‘Students teaching other students.’”  McKeachie cites several studies on the effectiveness of peer instruction, such as Gruber and Weitman (1962) and Webb and Grib (1967), which show that peer discussions without an instructor produced significant improvements in achievement tests and in student interest in the discipline.

The Webb and Grib study involved 1,400 students in 42 courses, and the favorable results included gains in student responsibility and a shift from memorization to comprehension and understanding.  More importantly, the authors note that at the completion of their investigation of student-led discussions, “a significant proportion of teachers and students had changed the conception of their roles and their ideas on how students learn,” but “since the objectives of the project did not focus on these changes, no provision was made for their measurement, if, indeed, measurement of them is possible” [emphasis added] (1967, p. 71).  Still, what instructors discovered was a profound impression that students were more capable on their own: students who had rarely spoken out in class not only participated but expressed ideas of “good quality” (1967, p. 75).  Instructors also discovered that their role as teachers had shifted.  After listening to the discussions, instructors became aware of students’ varied interpretations of lecture material, which raised concern over the effectiveness of their communication and led to a redefinition of role.  The new role meant “becoming more attuned to student needs, being more aware of what their students were thinking, and recognizing the difficulties students were having with material” (1967, p. 75).  This speaks to the primary benefit of peer assessment: it makes the learning process visible in ways that traditional assessment strategies often cannot. But it also speaks to one of the pitfalls of peer assessment – assessor competence.

Students’ perception of their role also changed from a passive, receptive attitude to a more active, responsible one.  As one of the students quoted in the study relates, “It finally puts the responsibility of educating one’s self on the student’s shoulders, where it should be, for it is the student who will learn and understand more when it is he who discovers for himself the truth or falsity of the course’s content” (p. 76).  And, much like the instructors, students also became cognizant of the abilities of their fellow students.  They gained greater respect for the ability and the divergent views of other students, and the more reticent students seemed to “blossom” in the more relaxed atmosphere of the peer group.  The peer-led discussions strengthened social presence (e.g., “They help to develop a habit of listening as well as speaking. They help to teach respect for another person’s point of view.”) and cognitive presence: “students often realized while attempting to explain the material to others that they did not really comprehend it themselves” (p. 77).  For some, this realization meant changing their study habits from memorizing to “seeking a more thorough understanding of the material” and applying that understanding to personal existence.  In the end, Webb and Grib identified seven principal advantages of student-led discussions:

1. Discussion places more emphasis on comprehension and understanding and less on memorization,

2. In the interaction, students came to see several other points of view,

3. The students' own ideas were clarified in the process of discussing with others,

4. The discussions forced students to think and organize their ideas,

5. Students were more actively involved in their own learning,

6. The discussions forced more thorough preparation than regular class meetings, and

7. The discussions led to a greater interest in the subject matter.

Crowdsourcing feedback in online learning contexts.  Beyond student-led discussions, more recent studies corroborate that peer marking results in statistically significant improvement in students’ subsequent work (Forbes & Spence, 1991; Hughes, 1995; Cohen et al., 2001; Rust, 2002, as cited in Rust, O’Donovan & Price, 2005).  Peer assessment has also handled the digital shift and shows promise in online learning contexts; “one of the strategies that can improve the quality of education, particularly in web-based classes, is electronic peer review.  When students assess their co-students’ work, the process becomes reflexive: they learn by teaching and by assessing” (Nagel & Kotze, 2010).  This is the spirit of the Community of Inquiry’s conception of Teaching Presence, which “implies that everyone in the community is responsible for providing input on the design, facilitation, and direction of the teaching process” (Vaughn, et al., p. 87).  Technological advances enable learning to transcend time and place and support peer assessment because they eliminate challenges students face in managing their day-to-day lives.  “In a blended community of inquiry, one of the biggest challenges of peer assessment activities can be finding a convenient place and time for all students to meet outside the classroom,” a problem solved by leveraging collaborative tools providing 24-7 access in hyper-connected environments.

Feedback, conceptualized as information provided by an agent about aspects of one’s academic performance, is one of the single most important factors influencing learning (Hattie & Timperley, 2007).  And yet, “it is often not possible to provide feedback that is both detailed and prompt, thus limiting its effectiveness in practice” (Sun, Harris, Walther & Baiocchi, 2015).  Peer assessment has been suggested as a method to crowdsource feedback in large online learning contexts because crowdsourcing has “been applied to otherwise intractable problems with surprising success” and would provide “as many graders as students, enabling more timely and thorough feedback” in a number of settings, such as MOOCs, where it would otherwise be impossible (Sun, et al., 2015).  As Sun et al. (2015) plainly state:

peer assessment is a workable solution to the problem of feedback; it reduces the burden to the instructors with minimal sacrifice to quality. On top of this, it has been conjectured that students also learn in the process of providing feedback [emphasis added]. If true, then peer assessment may be more than just a useful tool to manage large classes; it can be a pedagogical tool that is both effective and inexpensive (p. 2).

MOOCs present a problem of scale, and peer assessment could unify Peters’ vision of large-scale education with new interactive technologies, which work best in smaller, more intimate environments.  Guri-Rosenblit (2014) writes that the industrial mode of operation has not yet proven compatible with the new digital technologies.

Efficient online communication is, by its very nature, labour intensive.  The industrial model is based on the notion of a small number of academics who are responsible for developing high-quality materials for large numbers of students.  Obviously, small numbers of academic faculty are unable to interact with thousands or even with hundreds of students (p. 115). 

The effective utilization of peer assessment may allow small numbers of faculty to design interactive learning environments with hundreds or thousands of students using learners themselves as the prime mode of interaction.  The advantages of peer assessment include providing larger quantities of feedback than the instructor could provide, in a timely and authentic fashion that resembles professional practice where providing and receiving feedback from work colleagues is a common activity (van der Pol, van den Berg, Admiraal & Simons, 2008).
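
To make the scaling mechanics concrete, here is a minimal sketch in Python. It is purely illustrative and does not reflect any particular platform's allocation algorithm: shuffling the roster and then rotating it guarantees that every student grades k peers, receives k reviews, and never marks their own submission (as long as k is smaller than the class size).

```python
import random

def assign_peer_graders(students, k=3, seed=7):
    """Illustrative only: assign each student k peers to review.

    Shuffle the roster, then pair each grader with the students 1..k
    positions ahead of them. Offset 0 would be self-review, so using
    offsets 1..k guarantees nobody grades their own work when k < n.
    """
    roster = list(students)
    random.Random(seed).shuffle(roster)
    n = len(roster)
    assignments = {student: [] for student in roster}
    for offset in range(1, k + 1):
        for i, grader in enumerate(roster):
            assignments[grader].append(roster[(i + offset) % n])
    return assignments

# Five students: each gives three reviews and receives three reviews,
# so feedback volume scales with enrollment at no extra cost to the instructor.
for grader, authors in assign_peer_graders(["ana", "ben", "chen", "dee", "eli"]).items():
    print(grader, "reviews", authors)
```

Because the roster is shuffled before the rotation, the pairings are effectively anonymous from a student's point of view, which matters for honest assessment.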

Not everyone believes this. Peer assessment has its detractors and its challenges.  Downes (2013) describes some of the ugliest manifestations of peer assessment as “the blind leading the blind,” where students reinforce each other’s misconceptions, or “the charlatan,” where students who are not subject matter experts convince other learners of their expertise.  Peer assessor competence, “the blind leading the blind,” presents two challenges.  First, students may not possess the subject matter expertise to fairly assess their peers.  Second, learners may also possess a skill deficiency, because students “typically have no experience” in peer assessment, “which breeds inconsistent subjective evaluation” (Luaces, Alonso-Betanzos, Troncoso & Bahamonde, 2015).  Teaching students how to give and receive feedback is an important part of the teaching and learning process (Barber, King, & Buchanan, 2015), and the role of positive, affective feedback in peer assessment is inconclusive.  Stiggins (1999) suggests that planning for positive feedback can help students “succeed early and often,” and yet other research shows students ignore positive feedback.

Designing peer assessment interactions for maximum impact.

Peer assessment is far from a novel concept, but to this point it has only been conjectured that peer assessment is a reflexive process.   Peer assessment “is an arrangement for learners to consider and specify the level, value, and quality of a product or performance of other equal-status learners” (Topping, 2009).   In addition to assessing peers' work, revising one's own work after engaging with peer feedback is regarded as the other important activity for learning reflection (Smith, Cooper, & Lancaster, 2002), particularly in online environments where “a growing number of educators have tried to utilize Internet-based systems to facilitate the process of peer assessment” (Chen & Tsai, 2009).

Falchikov and Goldfinch (2000, p. 315, as cited in Falchikov, 2004) suggest peer feedback judgments are most effective when they are based on well-understood criteria for academic products.  The potential benefits of peer assessment are also maximized when there is a deliberate attempt to build assessor proficiency through instructional interventions, such as providing specific assessment criteria and examples of how to compose valuable peer feedback messages (Gielen & de Wever, 2015).  Peer assessment is most effective when learners are prepared for assessment through the use of marking exercises (Rust, 2002).  At a bare minimum, learners should be involved in a short intervention where they are exposed to the assessment criteria, model answers, and examples of meaningful feedback messages (Gibbs, 1992, p. 17, as cited in Rust, 2002).

As Falchikov (2004) has enumerated, there are several key variables known to affect the outcomes of peer assessment, including design, population characteristics, what is being assessed, the level of the course, how the assessment is carried out, and the nature of the criteria used in the assessment process.  All of these instructional variables suggest that learning is fundamentally situated, and peer assessment, as a pedagogical approach, is fundamentally constructivist in nature, where “meaning is understood to be the result of humans setting up relationships, reflecting on their actions, and modeling and constructing explanations” (Fosnot, 2005, p. 280).  This is important to keep in mind because not only will the variables enumerated above affect the outcomes of peer assessment, but so will the structure and type of feedback provided to the learner.

Cheng, Liang and Tsai (2015) divide peer feedback messages into three types: affective (comments providing support, praise, or criticism), cognitive (comments focusing on the correctness of the work or giving guidance for improvement), and metacognitive (comments about verification of knowledge, skills or strategies).  In their investigation of writing performance in an undergraduate context, cognitive feedback messages were more helpful than affective or metacognitive feedback; cognitive feedback provides explanation or elaboration of the problems identified or the suggestions provided.  The quality of a feedback message can also be judged by its content: an effective feedback message should provide both verification and elaboration.  Verification is described as “a dichotomous judgement to indicate that a response is right or wrong,” and elaboration is the “component of the feedback message which contains relevant information to help the learner in error correction” (Hattie & Gan, 2011, p. 253, as cited in Gielen & de Wever, 2015).
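
These distinctions lend themselves to a structured peer feedback script. The short Python sketch below is hypothetical (the class and field names are mine, not drawn from the studies cited); it simply shows how a script might require an assessor to supply a feedback type, a verification judgement, and an elaboration, rather than free-form comments alone.

```python
from dataclasses import dataclass
from enum import Enum

class FeedbackType(Enum):
    AFFECTIVE = "affective"            # support, praise, or criticism
    COGNITIVE = "cognitive"            # correctness and guidance for improvement
    METACOGNITIVE = "metacognitive"    # verification of knowledge, skills, strategies

@dataclass
class PeerFeedbackMessage:
    """Hypothetical structured feedback record."""
    feedback_type: FeedbackType
    verification: bool  # dichotomous judgement: is the response right or wrong?
    elaboration: str    # information that helps the learner correct the error

# A cognitive message that verifies (not yet right) and elaborates (how to fix it).
msg = PeerFeedbackMessage(
    feedback_type=FeedbackType.COGNITIVE,
    verification=False,
    elaboration="The conclusion restates the introduction; connect it to the evidence in section 2.",
)
print(msg.feedback_type.value, msg.verification, msg.elaboration)
```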

Assessees, the other half of the peer assessment transaction, need to be able to question the assessor’s peer feedback and have the opportunity to make changes accordingly, choosing whether or not to follow the assessor’s advice in order to improve the quality of their academic performance (Hovardas, Tsivitanidou & Zacharia, 2014).  Strangely enough, previous research suggests both positive and negative feedback can stimulate negative outcomes: positive feedback may cause learners to “rest on their laurels,” and negative feedback may cause learners to give up rather than double their efforts (Sun, et al., 2015).  This discussion highlights that, for peer assessment to achieve its impact, attention needs to be paid to the structure and support an assessor needs to generate high-quality peer feedback (Hovardas, et al., 2014), and to assessees' ability to engage, reflect, and revise their work.  Without accounting for the structural components of the peer assessment process, peer assessment is less likely to produce a learning environment where students are teaching students and more likely to create the conditions where the blind are leading the blind.

The gray areas of peer assessment. Despite claims that peer feedback can be an effective and inexpensive pedagogical approach, there remains significant mystery about how, why, and when peer assessment works.  Peer assessment is a complex phenomenon, and a literature review of peer assessment in online learning contexts highlights three intertwined research gaps.  In peer assessment studies, there exist:

1. an unclear understanding of what takes place during the peer assessment process that contributes to learning,

2. a general lack of quantitative studies analyzing the impact of feedback on academic performance, and

3. very little research as to how positive/negative feedback impacts learners during the learning transaction.

Successful peer feedback “is dependent on interrelated factors including feedback quality, competence of assessors, perceptions of the usefulness and importance of feedback” (Demiraslan Cevik, Haslaman & Celik, 2015).  A mini research agenda for peer assessment in communities of inquiry and MOOCs includes examining how the type of feedback (affective or cognitive) affects students’ performance (Nelson & Schunn, 2007) and motivation, and why some students cannot perceive the benefits of peer assessment (Cheng, et al., 2015).  Another research direction is how to build assessor competence in participatory learning environments.  Much is unknown about how feedback affects motivational variables, which “deserve further exploration” (Van Zundert, Sluijsmans, Konings & van Merrienboer, 2012), and why students decide to use or ignore specific feedback comments.  Demiraslan Cevik et al. (2015) suggest that “the relationships between the nature of group dynamics and the acceptance and use of feedback merit further exploration” in participatory learning environments where peer assessment is given from one learning group to another rather than on an individual basis.

The importance of understanding peer assessment in terms of learning analytics.

Gasevic, Rogers, Dawson and Gasevic (2016) “posit that learning analytics must account for (instructional) conditions in order to make any meaningful interpretation of success prediction” [emphasis in the original].  In their 2016 study, feedback was a type of trace data collected in only one of the nine courses under investigation, and in that case the feedback tool was used to ask students about their study habits online and the value of quizzes.  It was used primarily for question and answer, not peer feedback, highlighting a lack of research on peer feedback as an instructional condition from a learning analytics research perspective and confirming their observation that there is a “need to consider instructional conditions in order to increase the validity of learning analytics findings.”

These potentially important differences in peer assessment design have not been fully explored from a learning analytics perspective.  Gasevic et al. (2016) also suggest that “learning analytics has only recently begun to draw on learning theory and there remains a significant absence of theory in the research literature that focuses on LMS variables.”  They suggest there are distinctive elements in courses, such as peer feedback, that determine learning management system (LMS) use, and they ground learning analytics approaches in Winne and Hadwin’s constructivist learning theory, in which learners construct knowledge using tools (e.g., cognitive, physical and digital) to operate on raw information (e.g., readings given by the course instructor or peer assessment artifacts) to construct products of their own learning (Winne, 1996; Winne, 2011; Winne & Hadwin, 1998, as cited in Gasevic et al., 2016).  These products can be evaluated with respect to internal standards, such as time, or external standards (e.g., rubrics used for grading and/or structured peer feedback scripts).  Gasevic et al. (2016) point out that learners are active agents in their own learning:

As agents, learners make decisions about their learning in terms of choices of study tactics they will apply to evaluate their learning products against.  Decisions made about learning are influenced by conditions, which can be internal (motivation) and external (learning task grading policy).

It will be essential to understand the instructional conditions in order to make sense of the patterns that exist within social learning analytics.

Social learning analytics

make use of data generated by learners’ online activity in order to identify behaviours and patterns within the learning environment to signify effective process. The intention is to make these visible to learners, to learning groups and to teachers, together with recommendations that spark and support learning. In order to do this, these analytics make use of data generated when learners are socially engaged. This engagement includes both direct interaction – particularly dialogue – and indirect interaction, when learners leave behind ratings, recommendations, or other activity traces that can influence the actions of others (Shum & Ferguson, 2012, p. 10).

Most modern learning management systems (LMS) come with a built-in peer assessment tool that automatically distributes anonymous student responses to peer graders, enabling easy crowdsourcing of an effective pedagogical approach. The ability to fully make sense of the resulting data, however, will require the structured instructional conditions outlined above in order to best understand learners' online activity and learning behaviours.

Learning analytics has been defined as “measuring, collecting, analysing and communicating data about learners and their contexts with the purposes of understanding and optimizing learning in the context in which it takes place.”  Learning analytics “should support students’ agency and development of positive identities rather than predict and determine them,” with the goal of providing a basis for effective decision making regarding pedagogical design (University of Bristol, 2013).  The potential of learning analytics resides in its ability to “combine information from multiple and disparate sources, to foster more-effective learning conditions in real-time” (Booth, 2012).  Learning analytics approaches typically rely on data emanating from a user's interactions with information and communication technologies (ICTs), such as LMS, student information systems and/or social media.  For example, the trace data (also known as log data) recorded by the learning management system, such as Moodle or Blackboard, contains time-stamped events about use of specific resources, attempts, time spent in the production or interaction with peer assessment feedback, the number of discussion messages read and volume of online discussions posted.  Data mining techniques, employing “large amounts of data to support the discovery of novel and potentially useful information” (Piatetsky-Shapiro, 1995 as cited in Shum & Ferguson, 2012), are commonly applied to identify patterns in these trace data (Baker & Yacef, 2009, as cited in Gasevic et al., 2016).
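
To make the notion of trace data concrete, here is a small, self-contained Python sketch. The log layout and event names are invented for illustration (real Moodle or Blackboard exports differ), but the step shown, reducing time-stamped events to per-student counts, is typical of how such data is prepared for pattern mining.

```python
import csv
from collections import Counter
from io import StringIO

# Invented trace-data export: one time-stamped row per student action.
log = StringIO("""timestamp,student,event
2016-03-01T09:15:00,s01,discussion_post
2016-03-01T09:17:42,s02,discussion_read
2016-03-01T09:21:05,s01,peer_feedback_submitted
2016-03-01T09:30:11,s02,discussion_post
""")

# Aggregate raw events into per-student counts: the simple features
# that data mining techniques then scan for patterns.
counts = Counter((row["student"], row["event"]) for row in csv.DictReader(log))
for (student, event), n in sorted(counts.items()):
    print(student, event, n)
```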

Identifying and understanding whatever patterns exist in the peer assessment or peer feedback trace data will only be enhanced by well-designed and well-structured peer assessment activities that account for the instructional conditions. It has been suggested that “machine ethics, including learning analytics, stand on the cusp of moral nihilism” (Willis, 2014) because the conduct of learning analytics is viewed legalistically rather than asking the question, “What does this mean for humanity?”  As Willis (2014) suggests, “now is the time to act within frameworks of human autonomy and agency” to help redefine what is learned from past academic failures and “responsibly innovate knowing that competing values often pervade technological innovations” and push for learning analytics’ interventions that are in the best interests of learners.  As the forces of massification move forward, the promise of peer assessment will only be realized if it is also firmly based in effective pedagogical practices, some of which are largely understood, while others are still unknown territory. 

Detailed APA citations available upon request.