When I first attended my doctoral orientation, I shared my keen interest in student assessment, and one of the faculty in the program asked a key question: assessment of what? That's really the fundamental question, as the focus in higher education seems to have landed squarely on assessment. And the intense scrutiny is coming from different places, some enduring, some new.

The importance of assessment has long been recognized. Assessment has been described as “the heart of the student experience” and is “probably the single biggest influence on how students approach their learning” (Brown & Knight, 1994, cited in Rust, O’Donovan & Price, 2005).  Assessment is also highly emotional; students describe it as a process that invokes fear, anxiety, stress, and judgment (Vaughn, Cleveland-Innes & Garrison, 2013, p. 81).  It is fair to say, “nowhere are the stakes and student interest more focused than on assessment” (Campbell & Schwier, 2014, p. 360).

Other key trends in higher education have heightened the focus on student assessment. Notable trends include accountability and the “increasing level of scrutiny applied to their [colleges and universities] ability to capture and report performance outcomes” (Newman, 2015, p. 48).  The need for robust quality assurance processes responds both to the still-lingering perception that online learning is ineffective and to the precipitous increase in online learning, which is becoming recognized as a crucial 21st-century skill, not just a mode of delivery.  The growing demographic of adult learners who seek competencies valued by employers has also led to a heightened awareness of the challenges and opportunities in assessment.  A 2015 study from Colleges Ontario shows that 44 percent of current Canadian college students already possess post-secondary experience and return to college to find “that extra piece that makes them employable” or to “upgrade skills in a particular area” (Ginsberg, 2015).

As such, any discussion of assessment has to confront one of the great current debates in higher education. Wall, Hursh, and Rodgers (2014) define assessment as “a set of activities that seeks to gather systematic evidence to determine the worth and value of things in higher education,” including the examination of student learning. They assert that assessment “serves an emerging market-focused university” which has replaced the goals of providing a liberal education, developing intrinsically valuable knowledge, and serving society. The purpose of educational attainment has narrowed to serving society through economic development. This narrow focus has led some to suggest that students “come into play only as potential bearers of skills producing economic value rather than as human beings in their own right” (Barnett & Coate, 2005). This author does not (at this time) take a stance on whether this development is good or bad, but recognizes that learning and assessment are inextricably linked, and that both increasingly focus on skills development. This focus on skills development leads directly to assessment because “one of the most telling indicators of the quality of educational outcomes is the work students submit for assessment” (Gibbs, 2010, p. 7).

Assessment, then, provides evidence of the “outcome” in any “outcomes-based” approach to education. In Ontario, for example, “postsecondary learning outcomes are rapidly replacing credit hours as the preferred unit of measurement for learning,” but “the expanded presence of learning outcomes at the postsecondary level has outstripped our abilities to validate those outcomes through assessment” (Deller, Brumwell & MacFarlane, 2015).  Assessment “remains the keystone of the learning outcomes approach,” and assessment practices are increasingly focused on demonstrating acquisition of learning outcomes for the “purposes of accountability and quality measurement,” which are increasingly measured by their alignment with market-oriented aims and with closing the Canadian “skills gap,” to which Canada loses as much as $24.3 billion in economic activity (Bountrogianni, 2015).  The perspective of students as potential bearers of skills to support economic development drives the move towards authentic assessment, where students provide “direct evidence of meaningful application of learning” (Angelo, 1999; Maki, 2010, as cited in Goff et al., 2015) by using skills, knowledge, values and attitudes they have learned in “the performance context of the intended discipline.”

And yet, a book on online assessment theory and practice has never been more needed. In the Sloan Online Survey (Allen & Seaman, 2015), the proportion of academic leaders who report that online education is a critical component of their long-term strategy grew to 70.8% in 2015 (p. 4), an all-time high. The growth rate of distance enrollments has slowed in recent years but continues to outpace the growth rate of the higher education student body in the United States. While faculty perceptions of online learning lag behind those of administrators, Contact North’s online learning wish list for 2016 includes a wish that “we stop debating about whether or not online learning is as effective as classroom teaching.” There is a clear evidence base that at worst, it makes no significant difference, but at best, the affordances of online technology provide some enhancement and opportunities to transform the learning experience and demonstrate learning outcomes. Learners “expect authentic, engaged learning” that involves “a range of different learning activities appropriate to their own learning needs.” According to this report,

the focus [for online learning] has been on courses, programs and learning processes. It has not been on assessment. But that is changing with the development of new methods of assessment involving simulations and games: adaptive learning engines which enable assessment as learning is taking place; new methods for developing assessment tools using machine intelligence; and new developments in ensuring the security and integrity of online assessments.

Contact North claims “we are approaching an era in which new thinking about how we assess knowledge, competencies and skills start to bear fruit.” This new era includes badges, verified learning certificates, and micro-credentials, as well as Prior Learning Assessment and Recognition (PLAR) to facilitate student mobility.  In this new era, assessment will also become a central component of any definition of quality. Within the Ontario Quality Assurance Framework, for example, “each academic unit is asked: What do you expect your students to be able to do, and to know, when they graduate with a specific degree? How are you assessing students to make sure that these educational goals have been achieved?” (p. 12).  Assessment flows directly from learning outcomes, and its importance in the educational transaction has grown. The development of new programs in Ontario requires identification of learning outcomes and the “methods to be used to assess student achievement of those outcomes.” The strengthening focus on quality, and the new opportunities afforded by technology, certainly demand a fresh look at assessment.

This book works to move towards this new era, and away from the current era where the “field of educational assessment is currently divided and in disarray” (Hill & Barber, 2014). This is not an entirely new claim. Over a decade ago, Barr and Tagg (1995) declared that a shift had occurred in higher education from an instruction paradigm to a learning paradigm and that learner-centered assessment was a central element in this new paradigm (Webber, 2011).

This epochal shift in assessment has moved like a glacier, slowly and yet with dramatic effect. The “traditional view of assessment defines its primary role as evaluating a student’s comprehension of factual knowledge” whereas a more contemporary definition “sees assessment as activities designed primarily to foster student learning” (Webber, 2011). Examples of learner-centered assessment activities include “multiple drafts of written work in which faculty provide constructive and progressive feedback, oral presentations by students, student evaluations of others’ work, group and team projects that produce a joint product related to specified learning outcomes, and service learning assignments that require interactions with individuals in the community or business/industry” (Webber, 2011). As Webber points out, there is a growing body of evidence from multiple disciplines (Dexter, 2007; Candela et al., 2006; Gerdy, 2002) illustrating the benefits of learner-centered assessment, but these examples “do not provide convincing evidence that reform has actually occurred.”

Perhaps one of the greatest transformations has been the development of the Community of Inquiry framework. While a thorough and comprehensive treatment of the Community of Inquiry (CoI) model is beyond the scope of this book, the CoI’s approach to assessment falls very much in line with the spirit of this new era of assessment.  Within the Community of Inquiry framework, assessment is part of “Teaching Presence,” “the unifying force” which “brings together the social and cognitive processes directed to personally meaningful and educationally worthwhile outcomes” (Vaughn, et al., 2013, p. 12). Teaching presence consists of design, facilitation, and direction of a community of inquiry, and design includes assessment, along with organization and delivery.  “Assessment very much shapes the quality of learning and the quality of teaching. In short, students do what is rewarded. For this reason one must be sure to reward activities that encourage deep and meaningful approaches to learning” (Vaughn, et al., 2013, p. 42).

In designing assessment through the Community of Inquiry lens, it is essential to plan and design for the maximum amount of student feedback. “The research literature is clear that feedback is arguably the most important part in its potential to affect future learning and student achievement” (Hattie, 1987; Black & Wiliam, 1998; Gibbs & Simpson, 2002 as cited in Rust, et al., 2005).  Good feedback helps clarify what good performance is, facilitates self-assessment and reflection, encourages teacher and peer dialogue around learning, encourages positive motivational beliefs and self-esteem, provides opportunities to close the gap between current and desired performance and can be used by instructors to help shape teaching (Nicol & Macfarlane-Dick, 2006 as cited in Vaughn, et al., 2013, p. 82).

Planning for positive feedback can help students “succeed early and often.”  Positive feedback environments can help students “get on winning streaks and keep them there” so that this emotional dynamic feeds on itself and students get into “learning trajectories where optimism overpowers pessimism, effort replaces fatigue and success leaves failure in its wake” (Stiggins, 2008, p. 37). 

Direct instruction, part of teaching presence, is most effective when the feedback is encouraging, timely, and specific. This further confirms that irrespective of teaching environment, “instructors who take the time to acknowledge the contributions of students through words of encouragement, affirmation or validation can achieve high levels of teaching presence” (Wisneski, Ozogul, & Bichelmeyer, 2015).  In addition to providing feedback, a social constructivist approach requires that students actively engage with the feedback.  “Sadler (1989) identified three conditions for effective feedback. These are (1) a knowledge of the standards; (2) having to compare those standards to one’s own work; and (3) taking action to close the gap between the two” (Rust, et al., 2005).  In order to promote student engagement with feedback, “instructors in a blended community of inquiry are also encouraged to take a portfolio approach to assessment. This involves students receiving a second chance or opportunity for summative assessment on their course assignments” (Vaughn, et al., 2013, p. 93).  Providing multiple opportunities to submit work is highly authentic to real-world work contexts, encourages students to close the gap between current and desired performance, and exemplifies the spirit of learner-centered assessment.

Peer assessment is another component of learner-centered assessments (see Students teaching students: Or the blind leading the blind?) and can be a particularly useful approach; “one of the strategies that can improve the quality of education, particularly in web-based classes, is electronic peer review.  When students assess their co-students’ work, the process becomes reflexive: they learn by teaching and by assessing” (Nagel & Kotze, 2010).  A useful model to account for self-reflection, peer assessment, and instructor-led assessment is the Community of Inquiry’s Triad Model (Vaughn, et al., 2013, p. 95).  The triad model also helps identify the most beneficial technologies and interactive platforms to be used.  “Technology-enabled collaborative learning environments can be a rich context for assessing higher order skills, as long as the purpose and design of the assessment is clear about its targets and the assessment tasks are constructed to include technology as part of the collaborative problem solving task and the assessment provides timely useful feedback to teachers and students” (Webb & Gibson, 2015).  To create synergy among the learner, the task, and the technology, the use of technology must directly support the learning outcomes and the real-life nature of the activities (Herrington, Oliver & Reeves, 2006). In other words, the technology needs to be used in an authentic fashion.

In addition to assessment's enduring importance to the student experience and its growing importance to learning outcomes and quality assurance, the Community of Inquiry has also focused on assessment as a research agenda for proof of cognitive presence, the most elusive of all the presences. “Complementing other studies, they (Shea et al., 2011, p. 109) found little evidence of student engagement at the higher levels of cognitive presence, irrespective of their grades. They propose various explanations for this, including a failure to develop measures of assessment of learning that are meaningful to both students and instructors, and recommend more research exploring correlations between cognitive presence and instructor assessment” (Evans & Haughey, 2014).