Marquette Warrior

Monday, March 07, 2005

An Outsider’s View of Assessment

Marquette is hardly the only institution that has had to deal with assessment. We, in fact, got on the bandwagon rather late and rather ineptly. An e-mail correspondent at another university sent the following comments to the Marquette Warrior Blog:
I’ve said all along that what the experts call “course-based embedded assessment” (i.e., tests, papers, class presentations, and similar assignments) works pretty well at the course level. After all, what else is there?
Some schools, having identified key learning objectives, make sure that those objectives are (a) taught, (b) reinforced, and (c) assessed throughout a student’s career – and in multiple venues. Writing samples, for example, can be collected in the students’ first classes (or now, by using the SAT writing section), then collected again in sophomore, junior, and senior courses, to be reviewed by faculty – perhaps using electronic portfolios – over time, especially before graduation. That both helps us focus on a given student and, more importantly, by choosing a representative sample, lets us determine whether our coursework is in fact reinforcing the lessons we try to teach “up front” to freshmen.

The real question always has been whether our curriculum – the sum total of presumably well-designed, sequenced, and integrated courses – appropriately teaches the skills, knowledge, and values we want our students to graduate with. No one course, for example, can adequately teach writing or speaking or quantitative literacy. No one course can teach students “biology” or “economics.” But clearly we want our graduates to know something about a discipline and, more generally, to be able to do some things (write, present, calculate, research, analyze, etc.). That’s where outcomes assessment can bring huge payoffs – by telling us whether the sum total of what we do produces the “total product” we want to produce. (And different colleges, schools, and departments may have legitimately different intentions.) If so, great; no need for change. If not, we’d better look at what we are doing, at what works, and at what doesn’t. Parents and students have a right to know, and we as professors have an obligation to know, how successful we are.

All of this must be determined by a faculty acting collectively in the interests of the students (or of the students majoring in X or Y or Z). The puzzle your blog alludes to is this: either we don’t know how to determine what our students need or how to deliver it, or as a “faculty,” we are incapable of collective action for the common good. I suspect both forces are at work.

I recall when the issue of outcomes assessment first arose here. The faculty in one department, for example, absolutely bridled at the suggestion of assessing student learning outcomes, claiming that the administration was seeking measurable and – especially – quantifiably verifiable measures, even though neither of those words nor concepts was ever raised in the original communication to the faculty. The philosophers asked, “How can you assess growth in wisdom?”

Both groups missed the boat, either because they wanted to obfuscate, because they didn’t have a clue, or because they were fearful that someone might discover that, after 30 credits, majors in this or that department still couldn’t understand survey research, analyze a case study, read a text, or present a cogent argument. I would hope that we as professional educators would not shy away from our duty to assess how well we and our curricula are doing. At the base of this resistance, however, is a fundamental misunderstanding of what assessment is and why it is important. Many state legislatures have mandated outcomes assessment under political pressure to justify academic expenditures. The real purpose, obviously, is simply to see if we are producing what we claim to produce – and, indeed, if we can articulate what it is that we want to produce (few faculty or administrative experts can do that, I think). Frankly, as an administrator and academic leader, I would rejoice if we could successfully take the first step: stating, clearly and publicly, what precisely we expect our students in this or that major to be able to do, to know, and to value. Although faculty shy away from tackling this job – it does, after all, create tensions and clashes within a given discipline – it ought to be one of the most fascinating discussions we’ve ever had, because it goes to the heart of what a liberal or professional education is all about and because it asks us to reflect on what is essential in our disciplines.

So, welcome aboard the outcomes assessment express. Maybe you folks can do it right, and then teach the rest of us how to do it. Total quality improvement, or whatever, reigns supreme in academe! I confess I’ve become more than a tad cynical both about externally imposed assessment requirements – which in principle I obviously support – and about faculty, many of whom are more interested in protecting their prerogatives and comfort than in transforming these 18-year-olds into competent and prepared 22-year-olds.
I responded as follows:
What most of us hated was the hassle imposed on each and every one of us to do something that produced no usable data at all.

I think it’s fairly trivial to show that our students learn some (say) political science. But if we are going to do it, it should be done in a methodologically sound way.

More useful, in my view, is the sort of assessment that compares Marquette students against students elsewhere (comparable institutions) on a variety of indices.
To which my e-mail correspondent replied:

Right. Some of the professional associations (e.g., Economics, Psychology) have laid out in reasonably good fashion the sorts of things they expect of their graduates. ABET, the engineering and computing science accrediting body, likewise provides a compact list of, I think, eleven or twelve key indicators of learning achievement. Comparing students across campuses would be the ideal, but it would require an agreed-upon set of indicators. Survey data from students are, of course, notoriously unreliable. Standardized tests, such as those offered by the Educational Testing Service in Princeton or by ACT, might work, but there are real questions. Some schools have all graduating seniors take the GRE exams in their particular disciplines. But getting graduating seniors to take a test voluntarily, and to take it seriously, is another challenge.

I fear we get too hung up on technique and measures. I would hope that if folks in a given discipline or in a given department of a university could simply and clearly articulate what it is that their graduates should be able to do, know, and value, we’d be halfway there. Academic professionals surely can figure out how to assess or even measure whether the learning objectives are being met – if we know what those objectives are and can state them in crisp, clear language. There’s a whole cottage industry that has sprung up around assessment, and as it’s grown, it’s become quite good. In short, there’s a body of knowledge out there, if only we can get ourselves organized to tackle the matter. That, in the end, has to be a faculty prerogative, but it may take a lot of courage.

We, obviously, are not as favorably disposed toward assessment as our correspondent, but the observations are interesting and worth hearing here at Marquette. Assessment was put in place here with little outside expertise. But neither did the administration tap its own scholars to do something that’s valid social science. Seldom has so much effort been expended in the service of so little real expertise.
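What might valid social science look like here? Below is a minimal sketch, in Python, of the kind of across-institution comparison suggested above. The scores, sample sizes, and significance cutoff are all hypothetical, invented purely for illustration; in a real study, everything would turn on the sampling design, not on the arithmetic.

    # A minimal sketch of comparing two institutions on a single index.
    # All data below are hypothetical and purely illustrative.
    from scipy import stats

    # Hypothetical standardized writing scores for graduating seniors.
    marquette_scores = [78, 82, 69, 91, 75, 88, 73, 80, 85, 77]
    peer_scores = [74, 79, 83, 70, 76, 81, 72, 78, 75, 80]

    # Welch's t-test: compares the two group means without assuming
    # the groups have equal variances.
    t_stat, p_value = stats.ttest_ind(marquette_scores, peer_scores,
                                      equal_var=False)

    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:  # the conventional, and arguable, cutoff
        print("Difference is unlikely to be sampling noise alone.")
    else:
        print("No evidence of a real difference between the groups.")

Even a test this simple would be an improvement on exercises that produce no usable data at all – though, as our correspondent notes, the harder problem is getting comparable, representative samples of seniors to take such a test seriously in the first place.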
