Competence: It’s All About Context!

May 23, 2019
Great test design for performance-based exams depends on a shared understanding of competence.

Competence is doing meaningful work to standard under normal conditions. Agreeing on what is normal is where the hang-up lies. Normal assumes agreement on the uniformity and diversity of both work tasks and the workplace environment. People with the same job title may or may not do the same tasks, even in the same company. They certainly do not share the same workplace environment if they work in different locations or for different bosses.

Rob Foshay, Ph.D., et al., in their research on emerging best practices in the private sector, discuss the rising interest in performance-based tests to measure competence. Part of that interest is fueled by the evidence that performance-based tests have higher fidelity (they more fully reflect the workplace environment) and, therefore, can demonstrate greater external integrity (passing correlates with something important to the organization). For example, is there a correlation between the number of people who pass the test and those who do well on the job, stay in the job longer, or have fewer cost overruns or missed deadlines?

The Test Design Challenge

The challenge is to design a test that accommodates workplace variables so test results correlate with measures the organization cares about. Consider the following scenarios. Assume you want to measure the skills of:

  • Mobile software developers who are writing code for mobile computing platforms for a vendor, such as Google,
  • Network management professionals who use software to manage corporate networks for a vendor, such as Cisco, or
  • System administrators who manage servers for a vendor, such as SUSE.

In each of these examples you want to identify what is different and similar about the tasks and the work environment. For example:

  • Are tasks done as part of a team or individually?
  • Is there a difference in the maturity of the technology used and the efficiency of the procedures?
  • Do bosses have the same expectations?

You do this by asking 1) the right questions and 2) the right people. The process is called a practice analysis. By contrast, a typical task analysis depends on the input of a few people, and its questions are limited to what you do and what you have to know to do it.

A practice analysis solicits opinions from diverse perspectives such as the people upstream and downstream who are impacted by what is done and how it is done. The questions are about what information, systems, and support people can access, how they work, what feedback is available, the maturity of the systems, and what outputs and behaviors really get rewarded. This information allows the test designer to build in constraints that more accurately measure and predict on-the-job performance.

A knowledge test can answer the question of whether people know relevant rules, terms, and concepts. However, performance-based tests can answer questions about competence if they reflect the context in which the work is done, that is, emulate the environment.

Performance-based assessment was once accessible only to large organizations and used only for high-risk scenarios. Today, the economic and technical barriers to adoption have largely been removed. TrueAbility, for example, has shown that it can cost-effectively emulate software platforms used at the worksite and deliver performance-based assessment globally and at scale.

The challenges that remain relate to organizational alignment and the tried-and-true practice of test design. And fortunately for many practitioners, the resources available to help you on this journey are growing every day.


About the Author

This week’s article is by guest author Judith Hale, Ph.D., CPT, CACP, a noted expert, writer, and proponent of performance-based assessment. Judith is the CEO of the Center for International Credentials and has had the privilege of working in the public and private sectors across all industries for more than 30 years. During that time, she’s written nine books on performance improvement, credentialing, and evaluation. Writing has given her the opportunity to codify and share her thinking and experiences with colleagues. She’s also become increasingly specialized, focusing on performance-based credentials that demonstrate meaningful change. To connect or to learn more about Judith’s workshops, visit www.HaleCenter.org.

Judith Hale, CEO of the Center for International Credentials and test design expert.
