Basically Speaking II

Did we get there, and did it work?

True genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.

— Winston Churchill, 1874-1965
Politician, Orator, Statesman, Painter

What does success look like?

How can we tell the difference between success and failure?

How do we get better?

Several times over the life of a project, we need to measure what we’re doing and whether it is effective.

At the task level, we (may) measure informally, while at the goals level we’ll (probably) use more formal approaches.


Assessment is the organized and ongoing process of collecting and analyzing data and information so as to describe activities, practices, progress, and other dimensions of performance.

The word comes to us from Latin, by way of Middle English, meaning “to sit beside.” An assessor was a judge’s assistant, responsible for setting fines but not for determining guilt or innocence.

Assessment is often considered a formative process – along the way, at the same time as active implementation of a program.

Assessment focuses on improvement and progress.

FORMATIVE assessments are done to improve programs, products, personnel, policies, or practices; to form or shape the “thing being studied.” (Patton, 2002, p. 220)

Evaluations are systematic investigations that involve synthesizing and integrating assessment data and then using this information to make inferences and judgments about the merit (i.e., quality, excellence), worth (i.e., value, cost-effectiveness), and/or significance (i.e., importance, impact) of a project, program, or organization (Scriven, 1998).

Evaluations are often considered summative processes – done at an end point or key milestone, focused on “did it work,” on making statements about effectiveness and outcomes.

SUMMATIVE evaluation “. . . is evaluation done for, or by, any observers or decision makers (by contrast with developers) who need evaluative conclusions for any reasons besides development” (Scriven, 1991, p. 20).

CRITERION (singular)/CRITERIA (plural)
What we want to measure, a dimension along which performance is rated or judged or ranked.

Each criterion must fall within the domain that is being evaluated.

For example, selecting “neatness” as a criterion for judging creativity would likely be viewed as a poor choice (although many highly creative people are also highly organized), while “ideational fluency” would be both evidence-based and useful.

Criteria are defined by indicators, descriptors, and other elements of an evaluation plan or system.

Levels of performance on criteria are determined by standards – within an industry, as defined by a profession, or within an organization for programs and activities.

INDICATORS are performance metrics – signals, measures, yardsticks, markers, guides – used to measure an attribute of the thing being evaluated; a dimension of the desired outcome.

How will we know whether we are making progress toward our goals and objectives?

Indicators are almost always expressed as quantitative (numerical data) classifications, scales, ranks, etc.

Although the observations or measurements might be qualitative (textual data), they are often (not always!) assigned or “turned into” numbers. Math done on such coded numbers can be tricky, so beware.
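To see why the math gets tricky, consider a toy sketch: ordinal ratings coded as numbers. The scale labels, codes, and responses below are invented for illustration only; the point is that order-based summaries respect the original data, while the mean quietly assumes the categories are equally spaced.

```python
from statistics import mean, median

# Hypothetical ordinal scale and responses -- illustrative assumptions,
# not data from any real evaluation.
SCALE = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}
responses = ["poor", "poor", "good", "excellent", "excellent", "excellent"]

# Qualitative labels "turned into" numbers:
codes = [SCALE[r] for r in responses]  # [1, 1, 3, 4, 4, 4]

# Order-based summaries are safe for ordinal codes:
print("median:", median(codes))        # 3.5

# The mean treats the gap between "poor" and "fair" as identical to the
# gap between "good" and "excellent" -- an assumption the original
# labels never made, so interpret it with caution:
print("mean:", round(mean(codes), 2))  # 2.83
```

Counting responses per category, or reporting the median level, stays faithful to the ordinal scale; averaging the codes imposes an interval interpretation the data may not support.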

In case you missed it . . .
Basically Speaking I: Where are we going and what are we doing? »

  • Patton, M. Q. (2002). Qualitative Research & Evaluation Methods (3rd ed.). Thousand Oaks, CA: SAGE Publications.
  • Scriven, M. (1991). Beyond formative and summative evaluation. In M. W. McLaughlin & D. C. Phillips (Eds.), Evaluation: At Quarter Century (pp. 18-64). Chicago, IL: The University of Chicago Press.
  • Scriven, M. (1998). Minimalist theory of evaluation: The least theory that practice requires. American Journal of Evaluation, 19(1), 57-70.