Assessment Glossary

A comprehensive list of essential terms and definitions for effectively understanding and utilizing TestInvite's online assessment tools.

Created by Cabir Topo / July, 2024

A

Activity logs: Records of all actions performed during an assessment session.

Adaptive questions: Questions that adjust in difficulty based on the test-taker's performance.
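
As a rough illustration of the idea (not TestInvite's actual algorithm), a minimal difficulty-stepping rule might look like the following Python sketch, where `next_difficulty` and the 1–5 difficulty scale are hypothetical:

```python
def next_difficulty(current, answered_correctly, lowest=1, highest=5):
    """Step difficulty up after a correct answer, down after an incorrect one,
    clamped to the [lowest, highest] range."""
    step = 1 if answered_correctly else -1
    return min(highest, max(lowest, current + step))

# A test-taker starting at difficulty 3 answers two items correctly, then misses one.
level = 3
for correct in [True, True, False]:
    level = next_difficulty(level, correct)
print(level)  # 4  (3 -> 4 -> 5 -> 4)
```

Real adaptive engines typically use richer models (e.g., item response theory) rather than a fixed step.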

Anti-cheating measures: Strategies implemented to prevent dishonesty during tests.

Assessment integrity: Measures and practices to ensure that online tests are fair and free from cheating.

B

Balanced coverage: Ensuring that all learning objectives are assessed evenly, providing a comprehensive evaluation of students' knowledge and skills.

Benchmark: A standard or point of reference against which things may be compared or assessed.

Bias: Unfair and systematic preferences or disadvantages that hinder the selection of the most qualified candidates.

Bulk recruitment: Hiring many people at once.

C

Camera capture: Recording activities using a camera.

Classical Test Theory (CTT): A psychometric theory that focuses on understanding the relationship between observed test scores and true scores.

Composite groups: Groups of related questions that are randomly chosen during the test to maintain integrity and consistency.

Comprehensive records: Detailed documentation of events and actions.

Comprehensive testing: Thorough evaluation using multiple methods.

Concurrent validity: The degree to which the results of a particular test correlate with those of a well-established measure of the same construct administered at the same time.

Content validity: The extent to which a test measures a representative sample of the subject matter or behavior it is intended to assess.

Continuous oversight: Ongoing monitoring of actions.

Cronbach's alpha: A measure of internal consistency or reliability of a test.
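
For reference, the standard formula can be computed directly from a matrix of item scores. The sketch below uses only the Python standard library; `cronbach_alpha` is an illustrative helper name, not part of any TestInvite API:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for rows of per-item scores (one row per test-taker)."""
    k = len(scores[0])                                         # number of items
    item_var_sum = sum(variance(col) for col in zip(*scores))  # sum of per-item variances
    total_var = variance([sum(row) for row in scores])         # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Two perfectly consistent items across three test-takers -> alpha = 1.0
print(cronbach_alpha([(1, 1), (2, 2), (3, 3)]))  # 1.0
```

Values closer to 1 indicate higher internal consistency; a common rule of thumb treats 0.7 and above as acceptable, though conventions vary by field.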

Customizable quizzes: Quizzes that can be tailored to specific needs or criteria.

D

Desktop recording: Video recording of a computer screen.

Detailed logging: Comprehensive recording of all actions taken by the test-taker during the assessment.

Detailed time management: Close management of time allocations.

Diverse assessment formats: Various ways of structuring assessments.

Diverse assessments: A variety of questions and formats within an exam.

Domain sampling: The process of selecting items that represent the content area.

Dynamic exams from question banks: Exams generated by selecting questions from a larger repository, ensuring varied and comprehensive assessments.

Dynamic question sets: Collections of questions that can change based on pre-set rules to create unique exams.

E

Employee churn: The rate at which employees leave a company and need to be replaced.

Exam creation: The development and assembly of an exam from available questions.

Exam fairness: Ensuring equal conditions and standards for all test-takers.

Exam surveillance: Observing test-takers during an exam.

F

Fair play: The assurance that everyone is tested under the same fair conditions, fostering a sense of equality.

Fairness: Ensuring that the assessment process does not result in unjust advantages or disadvantages for individuals.

Flexible test creation: The ability to design tests that can be modified easily according to requirements.

Form variation: Differences in test formats and content.

G

Granular time restrictions: Detailed and specific time limits set for different sections or questions within an assessment.

H

High-volume recruitment: The process of recruiting a large number of candidates.

Hiring mistakes: Errors made during the recruitment process.

I

Instant supervision: Immediate oversight during activities.

Integrity in online assessments: See Assessment integrity.

L

Large-scale hiring: Recruiting a large number of employees at once.

Learning objectives: Specific goals that assessments aim to evaluate, ensuring a comprehensive evaluation of knowledge and skills.

Live exam monitoring: Continuous observation of test-takers during an exam.

Live proctoring: The monitoring of test-takers in real-time to ensure exam integrity.

Live tracking: Following activities in real-time.

M

Mass hiring: The process of recruiting a large number of employees in a short period.

Measurement error: The difference between the observed score and the true score.

Memorization: The practice of learning by rote, which randomized assessments aim to reduce.

Mis-hires: Employees who are hired but do not fit the role or company, leading to poor performance.

Mixed question sets: A variety of questions combined in a non-fixed sequence.

Multi-faceted evaluation: Assessment from different angles and perspectives.

Multi-measure assessment: Evaluations that use various methods and tools to measure different skills and competencies.

Multiple test forms: Various versions of a test.

O

Online invigilation: Watching test-takers through an online platform.

P

Performance standard: A level of performance or achievement that is used as a standard for judging or measuring.

Precise timing: Exact control over time limits.

Predictive validity: The extent to which test scores predict future performance.

Q

Question allocation: The process of randomly assigning questions to each test-taker from a predefined pool.
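
A minimal Python sketch of the idea (illustrative only, not TestInvite's implementation; `allocate_questions` is a hypothetical helper):

```python
import random

def allocate_questions(pool, n, seed=None):
    """Randomly assign n distinct questions to a test-taker from a predefined pool."""
    rng = random.Random(seed)   # per-test-taker RNG; seed only for reproducibility
    return rng.sample(pool, n)  # unique questions in randomized order

pool = [f"Q{i}" for i in range(1, 51)]   # a hypothetical 50-question pool
paper = allocate_questions(pool, 10, seed=42)
print(len(paper), len(set(paper)))       # 10 10  (ten distinct questions)
```

Because `random.sample` draws without replacement, no test-taker receives the same question twice within one paper.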

Question assignment: Allocating specific questions to test-takers.

Question cycling: A method of presenting questions in a rotated manner.

Question database: A repository where questions are stored and organized.

Question distribution: Spreading questions randomly among test-takers.

Question pools: Categories of questions based on difficulty and learning objectives.

Question rotation: A system where questions are rotated among different test-takers to minimize predictability.

Question set variability: Ensuring that each test-taker receives a unique set of questions to enhance test integrity.

R

Random question allocation: See Question allocation.

Random selection: Choosing questions from pools immediately before the test begins, ensuring unpredictability.

Randomized assessment format: Assessments where the format and order of questions are varied to enhance fairness and security.

Randomized question distribution: See Question distribution.

Randomized questions: Questions that appear in a random order for each test-taker.

Real-time monitoring: Continuous observation of test-takers during the exam to ensure compliance with rules.

Real-time supervision: Immediate oversight during an exam.

Reference point: A standard for comparison in measuring or judging quality, value, or performance.

Reliability: The consistency of a test in measuring what it is intended to measure.

Reliable and secure: Reducing the risk of question leakage and unauthorized access, ensuring dependable assessments.

Rotating questions: The practice of changing the order of questions.

S

Screen capture: Recording the content displayed on a screen.

Screen recording: Capturing a video of the test-taker's screen during the assessment to prevent cheating.

Sectional time limits: Specific time restrictions for different parts of an exam.

Systematic randomization: A process that ensures unbiased and unpredictable question selection.

Systemic bias: Widespread biases that are built into the policies and practices of an organization.

T

Tailored tests: Tests customized to meet specific needs or criteria.

Test consistency: The stability of test scores over time or across different raters.

Test coverage: The extent to which a test represents all aspects of the construct.

Test customization: Tailoring tests to meet specific needs or criteria.

Test diversity: A wide range of questions and formats within an exam.

Test generation: The process of creating tests from a question bank.

Test integrity: See Assessment integrity.

Test security measures: Actions taken to secure the integrity of a test.

True score theory: The concept that each test score is composed of a true score and an error score.
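
In standard classical-test-theory notation, the decomposition and the resulting definition of reliability can be written as:

```latex
X = T + E, \qquad
\operatorname{Var}(X) = \operatorname{Var}(T) + \operatorname{Var}(E), \qquad
\rho_{XX'} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(X)}
```

where \(X\) is the observed score, \(T\) the true score, and \(E\) the error term, assumed uncorrelated with \(T\). Reliability \(\rho_{XX'}\) is then the proportion of observed-score variance attributable to true scores.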

Turnover rate: The frequency at which employees leave and are replaced.
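
A common way to express this numerically (definitions of the measurement period and of "separations" vary by organization, so treat this as one convention):

```python
def turnover_rate(separations, average_headcount):
    """Turnover rate as a percentage: separations over a period divided by
    the average headcount for that period, times 100."""
    return 100 * separations / average_headcount

# 12 departures against an average headcount of 200 over the period
print(turnover_rate(12, 200))  # 6.0
```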

U

Unconscious bias: Unintentional and automatic biases held by individuals.

V

Variable assessment forms: Different versions of an assessment that vary in content and structure.

Variable question sets: Different sets of questions for each test-taker.

Variable test structures: Different formats and layouts for tests.

Video monitoring: Watching and recording through video.

W

Webcam recording: Recording the test-taker's webcam during the exam to monitor their environment.

Wrong hires: Employees who are not suitable for their positions.
