Form No. M-3 (8)
Study Course Description

Quantitative Data and Data Collection Methods in Psychology

Main Study Course Information

Course Code
VPUPK_435
Branch of Science
Psychology
ECTS
6.00
Target Audience
Psychology
LQF
Level 6
Study Type And Form
Full-Time; Part-Time

Study Course Implementer

Course Supervisor
Structure Unit Manager
Structural Unit
Department of Health Psychology and Paedagogy
Contacts

Dzirciema Street 16, Riga, vppk@rsu.lv

About Study Course

Objective

To provide students with theoretical knowledge of data structures and the fundamental principles of psychometrics, while developing practical skills in the selection of data collection methods, the scientific development and adaptation of psychological tests, and the empirical evaluation of their psychometric properties using modern statistical analysis software.

Preliminary Knowledge

Theoretical Knowledge in Psychology:

  • Understanding of the historical development of psychology, its major theoretical schools, and core processes (cognitive processes, emotions), which is essential for the evidence-based definition and operationalization of measurable constructs.
  • Knowledge of the fundamental principles of the philosophy of science and logic, enabling the critical analysis of scientific argumentation and the formulation of logical conclusions regarding the utility of measurement instruments.

Research Methodology and Statistics:

  • Basic knowledge of mathematical statistics, including an understanding of data distributions, descriptive statistics (mean, standard deviation), and basic relationship analysis (correlation), which is a mandatory prerequisite for reliability and validity calculations.
  • Understanding of research designs and data collection methods, as well as knowledge of research ethics standards and the legal aspects of personal data protection (GDPR).

Core Academic Skills:

  • Information Literacy: The ability to independently search for, select, and critically evaluate scientific literature and international publications on psychological assessment instruments in specialized databases.
  • Foreign Language Proficiency (English): The ability to read and analyze scientific texts and test manuals in English, ensuring a high-quality instrument adaptation process.
  • Digital Skills: Proficiency in office software (word processors, spreadsheets) and a readiness to master specialized statistical analysis software (e.g., JASP or Jamovi).

Learning Outcomes

Knowledge

1. Identifies data types and measurement scales (nominal, ordinal, interval, and ratio), evaluating their applicability and constraints in statistical data processing.

Individual work and tests

Knowledge Test: Data Types, Scales, Ethics, and Test Types; Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests

2. Describes the fundamental principles of research ethics, including informed consent, data confidentiality, and GDPR compliance within psychological research.

Individual work and tests

Knowledge Test: Data Types, Scales, Ethics, and Test Types

3. Classifies psychological tests according to their type, purpose, and construction specifics (e.g., personality, ability, or achievement tests).

Individual work and tests

Knowledge Test: Data Types, Scales, Ethics, and Test Types

4. Defines the stages of psychological test development and adaptation, explaining the methodological significance of each stage in the instrument construction process.

Individual work and tests

Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests

5. Identifies and describes the stages of psychological test construction, adaptation, and standardization, adhering to international methodological standards (e.g., ITC guidelines).

Individual work and tests

Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests; Empirical Pilot Protocol

6. Explains psychometric quality criteria by interpreting item analysis metrics as well as various types of reliability and validity evidence.

Individual work and tests

Item Analysis Protocol; Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests; Scale Analysis Protocol

7. Recognizes and characterizes various statistical methods (factor analysis, correlation analysis, comparison of indicators) and their suitability for addressing specific psychometric questions.

Individual work and tests

Scale Analysis Protocol; Item Analysis Protocol; Empirical Pilot Protocol; Overview of Existing Instruments

Skills

1. Performs the adaptation of an instrument developed in a foreign language, ensuring linguistic, cultural, and conceptual equivalence.

Individual work and tests

Empirical Pilot Protocol

2. Utilizes statistical software (JASP/Jamovi) for empirical data processing, calculating item parameters, test reliability coefficients, and validity indicators.

Individual work and tests

Scale Analysis Protocol

3. Conducts data analysis using factor analysis (EFA and CFA) to evaluate the internal structure of the instrument.

Individual work and tests

Scale Analysis Protocol

4. Demonstrates the ability to independently operationalize a psychological construct and formulate psychometrically sound test items in accordance with the chosen strategy.

Individual work and tests

Initial Item Pool and Content Validity; New Test "Passport" (Vision)

Competences

1. Critically evaluates the psychometric quality of measurement instruments based on empirical evidence of reliability and validity.

Individual work and tests

Initial Item Pool and Content Validity; Overview of Existing Instruments; Scale Analysis Protocol; Final Presentation on Test Construction or Adaptation

2. Provides a reasoned argument for the chosen methodology and the significance of the Standard Error of Measurement (SEM) and confidence intervals in the interpretation of individual results.

Individual work and tests

Scale Analysis Protocol; Item Analysis Protocol

3. Develops and formats a scientific psychometric report or presentation in accordance with APA 7 standards, adhering to the principles of open science and transparent data management.

Individual work and tests

Final Presentation on Test Construction or Adaptation

Assessment

Individual work

Title
% from total grade
Grade
1.

Overview of Existing Instruments

-
Test

Format: Written report (table or structured overview recommended).

The group must prepare a review of at least two international instruments, including:

  • Instrument Name and Authors: (e.g., The Digital Stress Scale (Hallenbeck et al., 2021)).
  • Psychometric Properties: Specific numerical values (e.g., alpha = .85, omega = .88) and types of validity evidence (content, construct, or criterion-related) derived from original sources or manuals.
  • Structural Analysis: Number of items and the response scale format (e.g., 5-point Likert scale, visual analogue scale).
  • Critical Evaluation: Identification of key limitations (e.g., "overly specific sample," "content-wise outdated items," or "lack of cross-cultural validation").
2.

New Test "Passport" (Vision)

-
Test

Objective: Based on the revision of existing instruments, each group must develop and provide a written justification for the conceptual framework of their proposed test. This deliverable serves as a scientific argument for the necessity of a new instrument and demonstrates how it addresses gaps in the current landscape.

Deliverable Content and Structure:

1. Technical Parameters:

  • Construct to be Measured: A precise and scientifically defined name of the construct.
  • Target Group: A defined population (e.g., working youth, clinical sample) for whom the instrument is intended.
  • Structural Plan: The planned number of dimensions (subscales) and the approximate number of items.
  • Response Format: The chosen scale type (e.g., Likert scale with a specific number of points).

2. Argumentation and Scientific Rationale:

  • Justification of Choices: A clear explanation of why your chosen parameters differ from the instruments analyzed during the revision. If you retain the same parameters, justify why they are the most effective in their current form.
  • Problem-Solving: Argue which specific limitation of existing tests (e.g., excessive length, outdated content, or lack of a midpoint in the scale) your new vision will resolve.
  • Usability: A rationale for why the chosen structure (e.g., unidimensional vs. multidimensional) is better suited for your target group and the context of application (e.g., rapid screening).

Assessment Criteria: The deliverable will be graded as "Pass" if each technical parameter is logically linked to the conclusions of the previous revision, and the provided rationale is based on psychometric arguments rather than personal opinions.

3.

Initial Item Pool and Content Validity

-
Test

Objective: To develop the initial pool of test items (questions) and perform a systematic selection process based on expert evaluations of content validity, ensuring that the final pilot version is scientifically sound.

Requirements for "Pass":

  • Content Equivalence: The formulated items directly and logically align with the defined construct and all planned dimensions.
  • Expert Integration: A clear, systematic display of expert ratings regarding the content validity of each item.
  • Analytical Selection: A reasoned argument for including or excluding specific items in the pilot version, based on expert consensus or qualitative analysis.
  • Language & Style: Items are written in a consistent style, avoiding ambiguity, double negatives, or overly complex jargon.
4.

Empirical Pilot Protocol

-
Test

Objective: To develop a methodologically sound and technically functional research platform for the empirical testing of both the new and adapted instruments, ensuring ethical standards and high-quality data collection.

Technical Requirements (Pass/Fail):

  • Adaptation Quality: A brief overview of the adapted test and translated items, including comments on modifications made to ensure conceptual equivalence in the Latvian cultural context.
  • Data Collection Design: A clearly defined target sample, recruitment methods, and the planned number of respondents for both instruments.
  • Technical Implementation: A functional link to the online survey with a logical flow (Informed Consent, Demographics, Items).
  • Ethics & GDPR: Integration of required informed consent elements and measures to ensure data anonymity according to research standards.

Examination

Title
% from total grade
Grade
1.

Knowledge Test: Data Types, Scales, Ethics, and Test Types

10.00% from total grade
10 points

Objective: To assess theoretical knowledge and the ability to identify variable types, measurement scales, test classifications, and basic principles of research ethics.

Requirements for "Pass":

  • Passing Score: Minimum 60% correct answers.
  • Assessment: Automated scoring.
  • Content: Theoretical questions and brief case study tasks for variable identification.
  • Completion: Must be completed within the specified time limit and number of attempts.
2.

Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests

10.00% from total grade
10 points

Objective: To assess the understanding of test construction stages, the adaptation process, basic item analysis, and types of reliability and validity.

Requirements for "Pass":

  • Passing Score: Minimum 60% correct answers.
  • Assessment: Automated scoring.
  • Content: Questions on the test development cycle, interpretation of psychometric indicators, and validity evidence.
  • Completion: Must be completed within the specified time limit and number of attempts.
3.

Item Analysis Protocol

20.00% from total grade
10 points

Objective: To conduct a statistical analysis of the pilot data and, based on item metrics, provide a reasoned justification for the final item selection of the instrument.

Assessment Levels (Brief):

  • 10-9: Flawless calculations, professional APA-style visualization, and deep analytical integration of statistics and item content.
  • 8-7: Correct data and clear tables. Logical reasoning for item retention/exclusion with minor stylistic or formatting inconsistencies.
  • 6-4: Basic calculations present but analysis is mechanical or superficial. Formatting lacks professional standards. Minimal requirement: a complete table and a basic summary.
4.

Scale Analysis Protocol

20.00% from total grade
10 points

Objective: To conduct a scale-level analysis of the newly developed and adapted instruments by evaluating their reliability, validity, and the alignment of data distribution with psychometric requirements.

Technical Requirements (Mandatory Components):

  • Descriptive Statistics and Distribution: Mean (M), Standard Deviation (SD), Skewness, and Kurtosis for each scale to characterize the data distribution.
  • Reliability Indicators: Calculation and interpretation of internal consistency coefficients (Cronbach’s alpha is mandatory; McDonald’s omega is highly recommended).
  • Validity Evidence: Correlation analysis (e.g., correlation between the new test and the adapted instrument or other external criteria) to justify construct or criterion-related validity.
  • Visualization: Data summarized in clear tables strictly following APA 7th edition standards.
  • Interpretation of Results: A concise, professional written description evaluating whether the scale metrics meet the psychometric standards for further practical use.
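The mandatory components above reduce to a few standard formulas. The sketch below, using a small hypothetical dataset (in the course these values are obtained in JASP/Jamovi), shows how the scale mean, SD, distribution shape, and Cronbach's alpha are computed:

```python
import math

# Hypothetical item responses (rows = respondents, columns = 4 Likert items);
# illustrative only -- real analyses in this course run in JASP/Jamovi.
items = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
]
totals = [sum(row) for row in items]
n = len(totals)
mean = sum(totals) / n

def var(xs):  # sample variance
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

sd = math.sqrt(var(totals))

# Shape of the scale-score distribution: moment-based skewness and
# excess kurtosis (population formulas).
m2 = sum((x - mean) ** 2 for x in totals) / n
m3 = sum((x - mean) ** 3 for x in totals) / n
m4 = sum((x - mean) ** 4 for x in totals) / n
skewness = m3 / m2 ** 1.5
kurtosis = m4 / m2 ** 2 - 3

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total)).
k = len(items[0])
item_vars = [var([row[i] for row in items]) for i in range(k)]
alpha = k / (k - 1) * (1 - sum(item_vars) / var(totals))
```

McDonald's omega additionally requires factor loadings from a one-factor model, which is why obtaining it directly in JASP/Jamovi is the more practical route.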

Evaluation criteria:

Grades 10-9: Comprehensive analysis including reliability (alpha, omega), descriptive statistics (M, SD, Sk/Ku), and validity evidence. Flawless APA 7 visualization and professional interpretation.

Grades 8-7: Methodologically sound data processing. Clear presentation of results with logical but concise interpretation. Minor formatting issues.

Grades 6-4: Basic scale metrics provided. Analysis is mechanical, lacks depth in validity assessment or distribution description. Minimal compliance with academic standards.

5.

Final Presentation on Test Construction or Adaptation

40.00% from total grade
10 points

Objective: To demonstrate the full cycle of test development or adaptation, justifying the instrument's quality through a methodologically sound research design and empirical data.

Presentation Structure:

  1. Introduction: Test vision or background of the adapted test; scientific definition of the construct.
  2. Method:
    • Sample: Demographic characteristics and size.
    • Instruments: Description of scales used (new, adapted, and criterion measures).
    • Procedure: Data collection process, ethics, and GDPR compliance.
    • Data Analysis: Statistical methods and software employed.
  3. Item Analysis: Key metrics and justification for item selection/exclusion.
  4. Psychometric Quality: Evidence of reliability (alpha, omega) and validity.
  5. Conclusions: Expert assessment of usability and future recommendations.

Assessment Summary:

  • Grades 10-9: Flawless scientific structure. Comprehensive Method section. Sophisticated integration of psychometric indicators and professional defense during Q&A.
  • Grades 8-7: Solid structure with a clear Method description. Correct data interpretation and reasoned conclusions. Professional delivery.
  • Grades 6-4: Descriptive rather than analytical. Incomplete Method section. Weak link between research design and final findings.

Study Course Theme Plan

FULL-TIME
Part 1
  1. Lecture

Modality
Location
Contact hours
On site
Study room
4

Topics

Quantitative Data and Data Collection Methods in Psychology
Description

During the lecture, students learn the hierarchy of quantitative research data (from primary to tertiary data) and understand its application in psychological research. Measurement scales and their impact on subsequent statistical analysis are analyzed in detail. Particular attention is devoted to the diversity of data collection methods and modern digital opportunities, while consolidating the understanding of personal data protection (GDPR) and research ethics standards.

Lecture Sub-topics:

1. Data Taxonomy and Sources:

  • Primary Data: Data collected firsthand for a specific research project.
  • Secondary Data: Data collected by other researchers, national statistics portals, and Open Science databases (e.g., OSF, Mendeley Data).
  • Tertiary Data: Results of meta-analyses and systematic reviews as data sources.

2. Levels of Measurement and Scale Types:

  • S.S. Stevens’ NOIR Classification: Nominal, Ordinal, Interval, and Ratio scales.
  • Variable Types: Qualitative (categorical) vs. Quantitative (metric); Discrete vs. Continuous variables.
  • Specialized Scales in Psychology: Likert-type scales (their nature: ordinal vs. interval), Visual Analogue Scales (VAS), and Semantic Differential.

3. Data Collection Methods and Tools:

  • Self-report Instruments: Surveys and questionnaires.
  • Observation Protocols: Coding and quantification of behavior.
  • Objective Measures: Reaction time, physiological measurements.
  • Digital Phenotyping: Data retrieved from smart devices and sensors.

4. Data Protection and Research Ethics:

  • General Data Protection Regulation (GDPR): Data minimization, transparency, and integrity.
  • Informed Consent: Structure and mandatory elements.
  • Data Security: Anonymization, pseudonymization, and data storage protocols.
  2. Class/Seminar

Modality
Location
Contact hours
On site
Computer room
4

Topics

Practical Assignment: Data Classification and Open Data Resources
Description

Objective: To consolidate the ability to classify data and to locate existing data resources.

1. Article Analysis (Group Work): Each group receives one short, high-quality research article (or abstract).

Task: Complete a structured worksheet:

  • What is the research object/subject?
  • Identify the main variables.
  • Determine the measurement level (scale) and type (quantitative/qualitative) for each variable.
  • Indicate whether the data used are primary or secondary.

2. Secondary Data "Scavenger Hunt": Using their computers, students must navigate to an open-access database (e.g., European Social Survey or the Latvian Open Data Portal).

Task: Locate one dataset related to psychology or social sciences and describe the measurement scales predominantly used in that dataset.

  3. Lecture

Modality
Location
Contact hours
On site
Study room
4

Topics

Psychological tests and Classical Test Theory
Description

During the lecture, students explore the nature of a psychological test as a standardized measurement instrument, distinguishing scientific approaches from pseudo-psychological methods. A comprehensive overview of test classification by purpose and format is provided, alongside an analysis of the core characteristics of a sound psychometric test: objectivity, standardization, and utility. Students are introduced to the historical development of psychometrics and master the fundamental axiom of Classical Test Theory (CTT): X = T + E. Particular attention is devoted to types of measurement error (systematic and random error), fostering an understanding of how environmental, respondent, and instrumental factors influence the precision of observed scores. These insights serve as a foundation for the practical session, where students analyze professional test batteries and perform a measurement error "audit."
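The CTT axiom X = T + E can be made concrete with a brief simulation, assuming a hypothetical true score and purely random error:

```python
# X = T + E: observed scores simulated as a fixed true score plus random
# error. The true score (50) and error SD (3) are hypothetical.
import random

random.seed(1)
true_score = 50                                      # T
errors = [random.gauss(0, 3) for _ in range(1000)]   # E: random, mean 0
observed = [true_score + e for e in errors]          # X = T + E

# With purely random error, observed scores average out near the true score.
mean_observed = sum(observed) / len(observed)
```

Because random error has a mean of zero, observed scores scatter around the true score; systematic error, by contrast, would shift the whole distribution.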

  4. Class/Seminar

Modality
Location
Contact hours
On site
Computer room
4

Topics

Practical Session: Revision of Existing Instruments and Vision for a New Test
Description

Objective: To conduct an in-depth study of existing tests and, based on that analysis, develop a well-reasoned structure for an original test.

Group Work Procedure and Tasks:

1. Targeted Search and Selection:

  • Groups select a construct to be measured (e.g., "Digital Device Use Anxiety").
  • Locate at least two existing instruments. Analyze the rationale provided by the authors: Why was this test originally created? What specific problem did it address?

2. Structural and Psychometric Revision:

  • Item Count and Content: How many items are in each scale? Are they concise or lengthy?
  • Response Format: Why was this specific scale chosen (e.g., a 4-point forced-choice scale vs. a 7-point Likert scale)?
  • Identification of Limitations: Students examine the "Limitations" section of the original research. Are the tests too long? Are they outdated? Were they validated only in a specific population?

  5. Lecture

Modality
Location
Contact hours
On site
Study room
4

Topics

Test Construction and Item Development
Description

During the lecture, students explore the scientific cycle of psychometric test development, from the initial theoretical concept to the first draft of test items. The three primary approaches to test construction (deductive, inductive, and integrative) are analyzed, emphasizing their differences in achieving research objectives. Students learn to perform construct operationalization, select the most appropriate response scale formats, and identify common types of response bias. The lecture concludes with an examination of content and face validity as the primary quality control mechanisms in the initial stages of test development.

Lecture Sub-topics:

1. Strategies for Test Construction:

  • Deductive (Theory-driven) Approach: Building items based on existing theoretical frameworks.
  • Inductive (Data-driven/Empirical) Approach: Using statistical relationships to derive dimensions.
  • Integrative Approach: Combining theoretical foundations with empirical evidence.

2. From Construct to Measurement:

  • Nominal and operational definitions of a construct.
  • Identification of indicators and domains.

3. Item Formulation and Scales:

  • Response Scale Formats: Likert, semantic differential, forced-choice, and dichotomous scales.
  • Item Writing Hygiene: Linguistic precision and clarity.
  • Reverse-scored Items: Purpose and implementation.

4. Response Bias:

  • Social desirability, acquiescence bias, extreme response bias, and central tendency bias.
  • Strategies for mitigating response bias during the design phase.

5. Initial Validation:

  • Content Validity: Expert judgment and review.
  • Face Validity: The respondent's perception of the instrument's relevance.
  6. Class/Seminar

Modality
Location
Contact hours
On site
Computer room
4

Topics

Practical Session: Item Development and Initial Validation
Description

Objective: To practically develop the first set of test items and perform an initial quality assessment.

Session Procedure:

1. Group Work – Item Formulation:

  • Based on the "test vision" from the previous session, groups formulate the operational definition of their construct.
  • Select and justify the chosen response scale format.
  • Create an initial list of items (e.g., 12–15 items).
  • Using a consensus approach, the group agrees on which items to retain for further evaluation.

2. Content Validity Simulation:

  • Groups "exchange" their items. Group A acts as "experts" for Group B’s test.
  • Experts rate the relevance of each item to the operational definition (e.g., using a 4-point scale: "1 – Irrelevant", "2 – Slightly Relevant", "3 – Relevant", "4 – Highly Relevant").

3. Face Validity Check:

  • If students in the classroom match the target audience, they provide feedback on the clarity, comprehensibility, and whether the items "really look like what they are supposed to measure."
  • General feedback is provided on linguistic phrasing and respondent burden.

4. Reflection and Refinement:

  • Groups receive their items back with comments and implement corrections.
  • A finalized set of items is developed for future empirical evaluation.

Pass/Fail Submission:

Each small group submits a "Test Vision & Initial Item Set" report, which includes:

  • A brief operational definition of the measured construct.
  • A rationale for the necessity of the test development.
  • The initial list of items with expert content validity ratings.
  • Conclusions on which items are retained for further empirical approbation.
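The 4-point expert ratings from the content validity simulation can also be summarized numerically. The sketch below computes an item-level content validity index (I-CVI), a standard metric not named in the course materials: the proportion of experts rating the item 3 or 4. The ratings and the .78 cutoff are illustrative assumptions only:

```python
# Item-level content validity index (I-CVI): the share of experts rating an
# item 3 ("Relevant") or 4 ("Highly Relevant"). Ratings and the .78 cutoff
# are illustrative assumptions, not course data.
ratings = {
    "item_1": [4, 4, 3, 4, 3],   # five hypothetical expert ratings per item
    "item_2": [2, 3, 1, 2, 2],
    "item_3": [3, 4, 4, 3, 4],
}

for item, rs in ratings.items():
    i_cvi = sum(1 for r in rs if r >= 3) / len(rs)
    decision = "retain" if i_cvi >= 0.78 else "revise or drop"
    print(f"{item}: I-CVI = {i_cvi:.2f} -> {decision}")
```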
  7. Lecture

Modality
Location
Contact hours
On site
Study room
4

Topics

Adaptation of Psychological Tests and ITC Guidelines
Description

During the lecture, students acquire the methodological framework for adapting psychological tests developed in foreign languages to a different linguistic and cultural environment. Primary emphasis is placed on the International Test Commission (ITC) guidelines and the application of TARES standards to ensure measurement objectivity. Students are introduced to the various types of equivalence and their significance, as well as learning to identify specific biases that may arise as a result of inaccurate adaptation.

Lecture Sub-topics:

1. Introduction to Adaptation:

  • The distinction between translation and adaptation.
  • Rationale for adaptation (cultural, linguistic, and conceptual differences).

2. Stages of Adaptation According to ITC Guidelines:

  • Preparation: Obtaining permission from the original test authors.
  • The Translation Process: Forward translation and back-translation protocols.
  • Synthesis and Expert Committee: Resolving discrepancies in translations.
  • Approbation: Pilot studies and cognitive interviews with respondents.

3. Equivalence and Types of Bias:

  • Equivalence: Linguistic, functional, conceptual, and metric equivalence.
  • Bias: Construct bias, method bias (e.g., response styles), and item bias (e.g., culture-specific idioms).

4. Quality Frameworks:

  • TARES Standards: Assessing the usability of the adapted test.
  • EFPA (European Federation of Psychologists' Associations): A brief overview of the test evaluation model.

5. Data Collection Planning and Ethics:

  • Sample size requirements for approbation.
  • Re-evaluating GDPR: Ensuring data security throughout the adaptation process.
  8. Class/Seminar

Modality
Location
Contact hours
On site
Computer room
4

Topics

Practical Session: Instrument Adaptation and Empirical Approbation Planning
Description

Objective: To perform the adaptation of a selected international instrument and to prepare for the empirical testing of both instruments (the original student-developed test and the adapted one).

Session Procedure:

1. Instrument Selection and Rationale:

  • From the tests researched in the 2nd session, each group selects one that is most closely related to their chosen construct.
  • This test will serve as the "gold standard" or a comparative tool for evaluating the validity of their original test.

2. Translation and Adaptation Workshop:

  • Forward Translation: The group translates the test items into Latvian.
  • Expert Review: An internal group discussion on conceptual equivalence. For example, is "anxious" better translated as "satraukts", "bažīgs", or "trauksmains" within the context of the specific construct?
  • Cultural Adaptation: Are all examples from the original test understandable to a Latvian audience? If not, how can they be replaced without altering the item's underlying meaning?

3. Development of the Research Instrument (Questionnaire):

  • Students prepare a unified survey (e.g., Google Forms or MS Forms) including:
    • Demographic questions (age, gender, etc.).
    • The original student-developed test (from Session 3).
    • The adapted international test (from Session 4).
  • This design will enable correlation analysis for validity testing during the practical work in Sessions 5 and 6.

4. Data Collection Plan:

  • Groups agree on a recruitment strategy (e.g., each student must survey 5–10 acquaintances/peers) for the approbation process.
  • Ethical principles are reinforced: participation is voluntary and anonymous.

  9. Lecture

Modality
Location
Contact hours
On site
Study room
4

Topics

Empirical Item Analysis
Description

During the lecture, students learn methods for the empirical quality assessment of psychological test items. Item difficulty (response index) and discrimination indicators are analyzed in detail, teaching students how to interpret the distribution of responses and identify "non-performing" items. The lecture concludes with an introduction to test reliability using internal consistency measures and demonstrates the link between individual item quality and overall scale reliability.

Lecture Sub-topics:

1. Data Cleaning and Preparation:

  • Handling missing values.
  • Recoding reverse-coded items.

2. Item Response and Distribution Analysis:

  • Item Difficulty (Response) Index (M / p): Determining what proportion of respondents selected the "keyed" or "measured" response.
  • Response Distribution Analysis: Frequency tables and histograms. Recognizing "ceiling" or "floor" effects (when responses lack variance).

3. Item Discrimination Indicators:

  • Item Discrimination Index (D): The ability to distinguish between respondents with high and low levels of the measured trait (comparing extreme groups).
  • Discrimination Coefficient (r): Item-total correlation. Critical threshold values (typically r > 0.20).

4. Introduction to Scale Reliability:

  • Internal Consistency and Cronbach’s Alpha (alpha): Measuring the interrelatedness of items.
  • "Alpha if Item Deleted": Understanding how specific item parameters (difficulty and discrimination) influence the reliability of the entire test.

  10. Class/Seminar

Modality
Location
Contact hours
On site
Computer room
4

Topics

Practical Session: Empirical Item Analysis and Test Optimization
Description

Objective: To perform an in-depth empirical analysis of the collected (or simulated) data and to make reasoned decisions regarding test shortening and refinement.

Session Procedure:

1. Working with Software (JASP/Jamovi):

  • Students import their survey data.
  • Perform data preparation (e.g., recoding reverse-scored items).

2. Item "Audit":

  • Students calculate response indices (means/proportions) and discrimination coefficients for all items within each scale or subscale.
  • Task: Identify items that exhibit:
    • Critically low discrimination (r < 0.20).
    • Highly asymmetrical distributions (e.g., floor or ceiling effects where most responses are clustered at one end of the scale).

3. Test Optimization:

  • Students experiment with the "Alpha if item deleted" tool.
  • They practice removing the weakest items to observe how the overall reliability (internal consistency) of the test improves.

4. Conclusions and Discussion:

  • Groups discuss: "Why exactly did this specific question fail? Was it due to the translation, the phrasing, or the underlying construct itself?"

Submission (Pass/Fail): "Item Analysis Protocol"

Each group submits a protocol containing:

  • A summary table with all calculated indicators (p/M, r, and alpha if item deleted).
  • A 3–4 sentence summary explaining which items are retained for the final version and the rationale behind these decisions.
  11. Lecture

Modality
Location
Contact hours
On site
Study room
4

Topics

Reliability of Psychological Measurements
Description

During the lecture, students deepen their understanding of psychological test reliability as the stability and precision of a measurement. Various methods for assessing reliability are analyzed (test-retest, parallel forms, split-half, and internal consistency), alongside their suitability for different types of tests.

Lecture Sub-topics:

1. The Concept of Reliability and CTT:

  • Re-evaluating X = T + E. Reliability as the ratio of true score variance to total observed variance.
  • Factors influencing reliability (test length, sample heterogeneity).

2. Methods for Assessing Reliability:

  • Stability (Test-retest Reliability): The time factor and the memory/practice effect.
  • Equivalence (Parallel Forms Reliability): When is it necessary to develop alternate versions?
  • Internal Consistency: Cronbach’s alpha (α) and McDonald’s omega (ω) – understanding why ω is becoming the preferred indicator in modern psychometrics.
  • Inter-rater Reliability: Essential for observational and projective methods (Cohen’s kappa).

3. Interpretation of Reliability Coefficients:

  • Threshold values in research (≥ 0.70) versus individual clinical diagnostics (≥ 0.90).
  • TARES standards for reporting reliability.
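One factor from sub-topic 1, test length, has a classical closed form: the Spearman–Brown prophecy formula. A one-function Python sketch (illustrative only):

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when the test is lengthened (or shortened) by a factor
    `length_factor` of parallel items: k*r / (1 + (k - 1)*r)."""
    k, r = length_factor, reliability
    return k * r / (1 + (k - 1) * r)
```

Doubling a test with reliability 0.60 is thus predicted to reach 0.75, while halving a test lowers its predicted reliability, which is why shortening a scale must always be weighed against the reliability lost.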
  12. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 4

Topics

Practical Session: Calculation of Reliability Indicators
Description

Objective: To practically calculate and interpret various reliability indicators for the student's mini-project.

Session Procedure:

1. Reliability Calculations in Software:

  • Students calculate internal consistency (α and ω) for both their original and adapted tests.
  • If data are available (or simulated), test-retest correlation is calculated.

2. SEM (Standard Error of Measurement) Workshop:

  • Using the obtained reliability coefficient and the standard deviation, students calculate the SEM for their test.
  • Task: Draft a sample statement for an individual report: "If Client X scores 45 points, within what interval does their true score lie?"

3. Comparative Analysis:

  • Groups compare: which version of the test (original or adapted) shows higher reliability? Why?

4. Report Preparation:

  • Record the reliability indicators for the final project, formatting them in a table and providing a structured description.
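The SEM workshop rests on the standard relation SEM = SD × √(1 − r_xx), with an approximate confidence band around the observed score. A Python sketch with hypothetical inputs (the SD of 10 and the reliability of 0.91 are our assumptions, not course data):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def true_score_interval(observed, sd, reliability, z=1.96):
    """Approximate 95% interval for the true score around an observed score."""
    margin = z * sem(sd, reliability)
    return observed - margin, observed + margin

# Hypothetical example for the session's report task:
low, high = true_score_interval(observed=45, sd=10, reliability=0.91)
# Client X's true score lies roughly between 39.1 and 50.9 points.
```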

  13. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Test Validity
Description

During the lecture, students explore the modern concept of validity as a unified body of evidence supporting the appropriateness of test score interpretations. Evidence of construct validity (convergent and discriminant validity) and types of criterion validity (predictive, concurrent, incremental, and differential validity) are analyzed in detail. Students learn to evaluate evidence confirming that a test indeed measures the intended psychological construct and is capable of predicting real-world behavior or other clinical indicators.

Lecture Sub-topics:

1. Evolution of the Validity Concept:

  • From the "Three Pillars" (content, criterion, construct) to a unified validity framework.
  • The relationship between reliability and validity (reliability as a necessary but insufficient condition).

2. Construct Validity:

  • Convergent Validity: Correspondence with other instruments measuring the same construct.
  • Discriminant (Divergent) Validity: Low correlation with instruments measuring unrelated constructs.

3. Types of Criterion Validity:

  • Concurrent Validity: Relationship with a criterion measured at the same point in time.
  • Predictive Validity: The ability to forecast future behavior or outcomes.
  • Incremental Validity: Assessing whether the new test provides additional information beyond existing instruments.
  • Differential Validity: The ability to distinguish between different groups (e.g., a clinical group vs. a normative group).

4. Interpretation of Validity Coefficients:

  • Utilizing correlation matrices.
  • Limitations (e.g., the impact of criterion unreliability on validity coefficients).
  14. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 4

Topics

Practical Session: Analysis and Interpretation of Validity Evidence
Description

Objective: To master the methodology for calculating various types of validity and to perform an initial validity assessment for the student's mini-project.

Session Procedure:

Part 1: Working with the Instructor's Dataset

  • Calculation of Criterion Validity: Using data from a larger study, students conduct a correlation analysis between a new scale and several external criteria (e.g., academic achievement, clinical diagnosis, or behavioral indicators).
  • Testing Differential Validity: Using t-tests or ANOVA, students learn to determine whether the test significantly distinguishes between different groups (e.g., a control group vs. a clinical group).
  • Demonstration of Incremental Validity: A brief conceptual overview of hierarchical regression to see if the new scale improves predictive accuracy beyond existing variables.

Part 2: Working with Self-Collected Data

  • Testing Concurrent Validity: Students calculate the correlation in their own data between the original scale they developed and the adapted international test.
  • Data Visualization: Creation and interpretation of correlation scatterplots—is the relationship linear? Are there visible outliers?
  • Initial Construct Validity Analysis: Students compare their test with another variable included in the survey (e.g., age or gender) to test for discriminant validity (low correlation where theoretically no relationship is expected).

Students record the indicators that support the validity of their test, ensuring they are correctly formatted and described for the final project report.
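Both parts of the session come down to a correlation and a group comparison. A numpy-only sketch (Welch's t is shown here as one common t-test variant; in class the same numbers come from JASP/Jamovi menus, and the function names are ours):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation, e.g., new scale vs. the adapted international test."""
    return np.corrcoef(x, y)[0, 1]

def welch_t(group_a, group_b):
    """Welch's t statistic for a differential-validity comparison of two groups."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    se2 = a.var(ddof=1) / a.size + b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(se2)
```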

  15. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Modern Psychometrics, Factor Analysis, and Result Reporting
Description

In this final lecture of the course, students learn methods for determining a test’s internal structure and diagnostic value. An overview of factor analysis (EFA and CFA) is provided, explaining how latent variables structure observed data. Students are introduced to clinical psychometric indicators (sensitivity, specificity, and the ROC curve) and modern measurement theories, such as Item Response Theory (IRT) and the psychological network approach.

Lecture Sub-topics:

1. Factor Analysis: Structural Validity:

  • EFA (Exploratory Factor Analysis): How do items cluster? Interpretation of factor loadings.
  • CFA (Confirmatory Factor Analysis): Testing the theoretical model. Key fit indices (CFI, RMSEA).

2. Clinical Utility and Diagnostic Accuracy:

  • Sensitivity vs. Specificity: Balancing true positive and true negative rates.
  • ROC Curve: Determining the optimal cut-off point for diagnostic decisions.

3. Development of Norms:

  • Standardized scores (z, T, stens, percentiles) and their application.

4. Expanded Horizons: IRT and Network Psychometrics:

  • IRT (Item Response Theory): Item difficulty and discrimination as non-linear functions of latent ability.
  • Network Approach: Viewing psychological constructs as a system of interacting symptoms rather than a single common cause.

5. Preparing a Psychometric Report:

  • Report Structure: From construct definition to norms.
  • Data Visualization and Table Formatting: Following APA (American Psychological Association) style.
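The standardized scores in sub-topic 3 are fixed linear transformations of the normative z score. A small illustrative sketch (the normative mean and SD would come from the standardization sample):

```python
import numpy as np

def to_z(raw, norm_mean, norm_sd):
    """z score: (X - M) / SD relative to the normative sample."""
    return (np.asarray(raw, dtype=float) - norm_mean) / norm_sd

def to_t(raw, norm_mean, norm_sd):
    """T score: mean 50, SD 10 (stens would analogously be 5.5 + 2z)."""
    return 50 + 10 * to_z(raw, norm_mean, norm_sd)
```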

  16. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 4

Topics

Practical Session: Factor Analysis and Course Conclusion
Description

Objective: To master the practical steps of conducting factor analysis and to prepare for the development of the final project report.

Session Procedure:

1. Working with Instructor's Data – EFA:

  • Students perform Exploratory Factor Analysis (EFA) on a large dataset using JASP or Jamovi.
  • Task: Determine the number of factors (using a Scree plot or Parallel Analysis) and interpret factor loadings. Identify "cross-loading" items.

2. Working with Instructor's Data – CFA:

  • Demonstration: Students attempt to confirm the previously identified factor structure.
  • Task: Interpret the primary model fit indices—does the model "fit" the data? (Evaluating CFI, TLI, and RMSEA).

3. Preparation for the Final Submission:

  • Students work in their groups to compile all results from previous sessions (Sessions 2–7) into a single, unified structure.
  • Consultations are provided regarding the formal requirements and standards for the final report.

4. Conclusion and Feedback:

  • A comprehensive summary of the course and information regarding the evaluation criteria for the final presentation and submission.
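The factor-count decision from the EFA task ("Scree plot or Parallel Analysis") can also be sketched numerically. Below is a simplified mean-eigenvalue variant of Horn's parallel analysis in Python (JASP/Jamovi run an equivalent procedure internally; this sketch is illustrative only):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Retain as many factors as there are observed correlation-matrix eigenvalues
    exceeding the mean eigenvalues obtained from random data of the same shape."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rng = np.random.default_rng(seed)
    random_mean = np.zeros(k)
    for _ in range(n_sims):
        sim = rng.normal(size=(n, k))
        random_mean += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    random_mean /= n_sims
    return int(np.sum(observed > random_mean))
```
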
Total ECTS (Credit Points):
6.00
Contact hours:
64 Academic Hours
Final Examination:
Exam
PART-TIME
Part 1
  1. Lecture

Modality: On site
Location: Computer room
Contact hours: 4

Topics

Quantitative Data and Data Collection Methods in Psychology
Description

During the lecture, students learn the hierarchy of quantitative research data (from primary to tertiary data) and understand its application in psychological research. Measurement scales and their impact on subsequent statistical analysis are analyzed in detail. Particular attention is devoted to the diversity of data collection methods and modern digital opportunities, while consolidating the understanding of personal data protection (GDPR) and research ethics standards.

Lecture Sub-topics:

1. Data Taxonomy and Sources:

  • Primary Data: Data collected firsthand for a specific research project.
  • Secondary Data: Data collected by other researchers, national statistics portals, and Open Science databases (e.g., OSF, Mendeley Data).
  • Tertiary Data: Results of meta-analyses and systematic reviews as data sources.

2. Levels of Measurement and Scale Types:

  • S.S. Stevens’ NOIR Classification: Nominal, Ordinal, Interval, and Ratio scales.
  • Variable Types: Qualitative (categorical) vs. Quantitative (metric); Discrete vs. Continuous variables.
  • Specialized Scales in Psychology: Likert-type scales (their nature: ordinal vs. interval), Visual Analogue Scales (VAS), and Semantic Differential.

3. Data Collection Methods and Tools:

  • Self-report Instruments: Surveys and questionnaires.
  • Observation Protocols: Coding and quantification of behavior.
  • Objective Measures: Reaction time, physiological measurements.
  • Digital Phenotyping: Data retrieved from smart devices and sensors.

4. Data Protection and Research Ethics:

  • General Data Protection Regulation (GDPR): Data minimization, transparency, and integrity.
  • Informed Consent: Structure and mandatory elements.
  • Data Security: Anonymization, pseudonymization, and data storage protocols.
  2. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 2

Topics

Practical Assignment: Data Classification and Open Data Resources
Description

Objective: To consolidate the ability to classify data and to locate existing data resources.

1. Article Analysis (Group Work): Each group receives one short, high-quality research article (or abstract).

Task: Complete a structured worksheet:

  • What is the research object/subject?
  • Identify the main variables.
  • Determine the measurement level (scale) and type (quantitative/qualitative) for each variable.
  • Indicate whether the data used are primary or secondary.

2. Secondary Data "Scavenger Hunt": Using their computers, students must navigate to an open-access database (e.g., European Social Survey or the Latvian Open Data Portal).

Task: Locate one dataset related to psychology or social sciences and describe the measurement scales predominantly used in that dataset.
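The worksheet's "measurement level" decision matters because each NOIR level licenses only certain statistics. A toy Python lookup reflecting the standard Stevens hierarchy (the table itself is our illustration, not course material):

```python
# Each level inherits the statistics of the levels below it (Stevens' NOIR hierarchy).
PERMISSIBLE_STATS = {
    "nominal":  ["mode", "frequencies"],
    "ordinal":  ["mode", "frequencies", "median", "percentiles"],
    "interval": ["mode", "frequencies", "median", "percentiles", "mean", "standard deviation"],
    "ratio":    ["mode", "frequencies", "median", "percentiles", "mean", "standard deviation",
                 "coefficient of variation"],
}

def allowed_statistics(level):
    """Descriptive statistics appropriate for a given measurement level."""
    return PERMISSIBLE_STATS[level.lower()]
```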

  3. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Psychological Tests and Classical Test Theory
Description

During the lecture, students explore the nature of a psychological test as a standardized measurement instrument, distinguishing scientific approaches from pseudo-psychological methods. A comprehensive overview of test classification by purpose and format is provided, alongside an analysis of the core characteristics of a sound psychometric test: objectivity, standardization, and utility. Students are introduced to the historical development of psychometrics and master the fundamental axiom of Classical Test Theory (CTT): X = T + E. Particular attention is devoted to types of measurement error (systematic and random error), fostering an understanding of how environmental, respondent, and instrumental factors influence the precision of observed scores. These insights serve as a foundation for the practical session, where students analyze professional test batteries and perform a measurement error "audit."
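The CTT axiom X = T + E has a direct numerical illustration: simulate true scores and errors, and reliability emerges as var(T) / var(X). A minimal sketch (the variances are arbitrary illustrative choices; with these values the theoretical reliability is 1 / (1 + 0.25) = 0.8):

```python
import numpy as np

def simulate_ctt_reliability(n=10_000, true_sd=1.0, error_sd=0.5, seed=0):
    """Simulate X = T + E and return the empirical reliability var(T) / var(X)."""
    rng = np.random.default_rng(seed)
    T = rng.normal(scale=true_sd, size=n)   # true scores
    E = rng.normal(scale=error_sd, size=n)  # random measurement error
    X = T + E                               # observed scores
    return T.var() / X.var()
```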

  4. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 2

Topics

Practical Session: Revision of Existing Instruments and Vision for a New Test
Description

Objective: To conduct an in-depth study of existing tests and, based on that analysis, develop a well-reasoned structure for an original test.

Group Work Procedure and Tasks:

1. Targeted Search and Selection:

  • Groups select a construct to be measured (e.g., "Digital Device Use Anxiety").
  • Locate at least two existing instruments. Analyze the rationale provided by the authors: Why was this test originally created? What specific problem did it address?

2. Structural and Psychometric Revision:

  • Item Count and Content: How many items are in each scale? Are they concise or lengthy?
  • Response Format: Why was this specific scale chosen (e.g., a 4-point forced-choice scale vs. a 7-point Likert scale)?
  • Identification of Limitations: Students examine the "Limitations" section of the original research. Are the tests too long? Are they outdated? Were they validated only in a specific population?

  5. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Test Construction and Item Development
Description

During the lecture, students explore the scientific cycle of psychometric test development, from the initial theoretical concept to the first draft of test items. The three primary approaches to test construction—deductive, inductive, and integrative—are analyzed, emphasizing their differences in achieving research objectives. Students learn to perform construct operationalization, select the most appropriate response scale formats, and identify common types of response bias. The lecture concludes with an examination of content and face validity as the primary quality control mechanisms in the initial stages of test development.

Lecture Sub-topics:

1. Strategies for Test Construction:

  • Deductive (Theory-driven) Approach: Building items based on existing theoretical frameworks.
  • Inductive (Data-driven/Empirical) Approach: Using statistical relationships to derive dimensions.
  • Integrative Approach: Combining theoretical foundations with empirical evidence.

2. From Construct to Measurement:

  • Nominal and operational definitions of a construct.
  • Identification of indicators and domains.

3. Item Formulation and Scales:

  • Response Scale Formats: Likert, semantic differential, forced-choice, and dichotomous scales.
  • Item Writing Hygiene: Linguistic precision and clarity.
  • Reverse-scored Items: Purpose and implementation.

4. Response Bias:

  • Social desirability, acquiescence bias, extreme response bias, and central tendency bias.
  • Strategies for mitigating response bias during the design phase.

5. Initial Validation:

  • Content Validity: Expert judgment and review.
  • Face Validity: The respondent's perception of the instrument's relevance.
  6. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 2

Topics

Practical Session: Item Development and Initial Validation
Description

Objective: To practically develop the first set of test items and perform an initial quality assessment.

Session Procedure:

1. Group Work – Item Formulation:

  • Based on the "test vision" from the previous session, groups formulate the operational definition of their construct.
  • Select and justify the chosen response scale format.
  • Create an initial list of items (e.g., 12–15 items).
  • Using a consensus approach, the group agrees on which items to retain for further evaluation.

2. Content Validity Simulation:

  • Groups "exchange" their items. Group A acts as "experts" for Group B’s test.
  • Experts rate the relevance of each item to the operational definition (e.g., using a 4-point scale: "1 – Irrelevant", "2 – Slightly Relevant", "3 – Relevant", "4 – Highly Relevant").

3. Face Validity Check:

  • If students in the classroom match the target audience, they provide feedback on the clarity, comprehensibility, and whether the items "really look like what they are supposed to measure."
  • General feedback is provided on linguistic phrasing and respondent burden.

4. Reflection and Refinement:

  • Groups receive their items back with comments and implement corrections.
  • A finalized set of items is developed for future empirical evaluation.

Pass/Fail Submission:

Each small group submits a "Test Vision & Initial Item Set" report, which includes:

  • A brief operational definition of the measured construct.
  • A rationale for the necessity of the test development.
  • The initial list of items with expert content validity ratings.
  • Conclusions on which items are retained for further empirical approbation.
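The expert ratings gathered in step 2 are commonly condensed into an item-level content validity index (I-CVI): the share of experts rating an item 3 ("Relevant") or 4 ("Highly Relevant"). A one-function sketch (the function name and the example panel are illustrative):

```python
import numpy as np

def item_cvi(ratings, relevant_from=3):
    """I-CVI: proportion of expert ratings at or above `relevant_from` on the 4-point scale."""
    ratings = np.asarray(ratings)
    return float((ratings >= relevant_from).mean())

# Hypothetical panel of four experts rating one item:
# item_cvi([4, 4, 3, 2]) -> 0.75
```
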
  7. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Adaptation of Psychological Tests and ITC Guidelines
Description

During the lecture, students acquire the methodological framework for adapting psychological tests developed in foreign languages to a different linguistic and cultural environment. Primary emphasis is placed on the International Test Commission (ITC) guidelines and the application of TARES standards to ensure measurement objectivity. Students are introduced to the various types of equivalence and their significance, as well as learning to identify specific biases that may arise as a result of inaccurate adaptation.

Lecture Sub-topics:

1. Introduction to Adaptation:

  • The distinction between translation and adaptation.
  • Rationale for adaptation (cultural, linguistic, and conceptual differences).

2. Stages of Adaptation According to ITC Guidelines:

  • Preparation: Obtaining permission from the original test authors.
  • The Translation Process: Forward translation and back-translation protocols.
  • Synthesis and Expert Committee: Resolving discrepancies in translations.
  • Approbation: Pilot studies and cognitive interviews with respondents.

3. Equivalence and Types of Bias:

  • Equivalence: Linguistic, functional, conceptual, and metric equivalence.
  • Bias: Construct bias, method bias (e.g., response styles), and item bias (e.g., culture-specific idioms).

4. Quality Frameworks:

  • TARES Standards: Assessing the usability of the adapted test.
  • EFPA (European Federation of Psychologists' Associations): A brief overview of the test evaluation model.

5. Data Collection Planning and Ethics:

  • Sample size requirements for approbation.
  • Re-evaluating GDPR: Ensuring data security throughout the adaptation process.
  8. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 2

Topics

Practical Session: Instrument Adaptation and Empirical Approbation Planning
Description

Objective: To perform the adaptation of a selected international instrument and to prepare for the empirical testing of both instruments (the original student-developed test and the adapted one).

Session Procedure:

1. Instrument Selection and Rationale:

  • From the tests researched in the 2nd session, each group selects one that is most closely related to their chosen construct.
  • This test will serve as the "gold standard" or a comparative tool for evaluating the validity of their original test.

2. Translation and Adaptation Workshop:

  • Forward Translation: The group translates the test items into Latvian.
  • Expert Review: An internal group discussion on conceptual equivalence. For example, is "anxious" better translated as "satraukts", "bažīgs", or "trauksmains" within the context of the specific construct?
  • Cultural Adaptation: Are all examples from the original test understandable to a Latvian audience? If not, how can they be replaced without altering the item's underlying meaning?

3. Development of the Research Instrument (Questionnaire):

  • Students prepare a unified survey (e.g., Google Forms or MS Forms) including:
    • Demographic questions (age, gender, etc.).
    • The original student-developed test (from Session 3).
    • The adapted international test (from Session 4).
  • This design will enable correlation analysis for validity testing during the practical work in Sessions 5 and 6.

4. Data Collection Plan:

  • Groups agree on a recruitment strategy (e.g., each student must survey 5–10 acquaintances/peers) for the approbation process.
  • Ethical principles are reinforced: participation is voluntary and anonymous.

  9. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Empirical Item Analysis
Description

During the lecture, students learn methods for the empirical quality assessment of psychological test items. Item difficulty (response index) and discrimination indicators are analyzed in detail, teaching students how to interpret the distribution of responses and identify "non-performing" items. The lecture concludes with an introduction to test reliability using internal consistency measures and demonstrates the link between individual item quality and overall scale reliability.

Lecture Sub-topics:

1. Data Cleaning and Preparation:

  • Handling missing values.
  • Recoding reverse-coded items.

2. Item Response and Distribution Analysis:

  • Item Difficulty (Response) Index (M / p): Determining what proportion of respondents selected the "keyed" or "measured" response.
  • Response Distribution Analysis: Frequency tables and histograms. Recognizing "ceiling" or "floor" effects (when responses lack variance).

3. Item Discrimination Indicators:

  • Item Discrimination Index (D): The ability to distinguish between respondents with high and low levels of the measured trait (comparing extreme groups).
  • Discrimination Coefficient (r): Item-total correlation. Critical threshold values (typically r > 0.20).

4. Introduction to Scale Reliability:

  • Internal Consistency and Cronbach’s Alpha (α): Measuring the interrelatedness of items.
  • "Alpha if Item Deleted": Understanding how specific item parameters (difficulty and discrimination) influence the reliability of the entire test.
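For dichotomously keyed items, the difficulty index p and the extreme-group discrimination index D from sub-topics 2–3 can be computed directly. A Python sketch (the 27% extreme-group convention is one common choice, used here as a default; the function names are ours):

```python
import numpy as np

def difficulty_index(item_keyed):
    """p: proportion of respondents giving the keyed response (items coded 0/1)."""
    return np.asarray(item_keyed, dtype=float).mean()

def discrimination_index(item_keyed, total_scores, frac=0.27):
    """D: p in the top `frac` of respondents (by total score) minus p in the bottom `frac`."""
    item = np.asarray(item_keyed, dtype=float)
    order = np.argsort(total_scores)
    g = max(1, int(len(order) * frac))
    return item[order[-g:]].mean() - item[order[:g]].mean()
```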

  10. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 2

Topics

Practical Session: Empirical Item Analysis and Test Optimization
Description

Objective: To perform an in-depth empirical analysis of the collected (or simulated) data and to make reasoned decisions regarding test shortening and refinement.

Session Procedure:

1. Working with Software (JASP/Jamovi):

  • Students import their survey data.
  • Perform data preparation (e.g., recoding reverse-scored items).

2. Item "Audit":

  • Students calculate response indices (means/proportions) and discrimination coefficients for all items within each scale or subscale.
  • Task: Identify items that exhibit:
    • Critically low discrimination (r < 0.20).
    • Highly asymmetrical distributions (e.g., floor or ceiling effects where most responses are clustered at one end of the scale).

3. Test Optimization:

  • Students experiment with the "Alpha if item deleted" tool.
  • They practice removing the weakest items to observe how the overall reliability (internal consistency) of the test improves.

4. Conclusions and Discussion:

  • Groups discuss: "Why exactly did this specific question fail? Was it due to the translation, the phrasing, or the underlying construct itself?"

Submission (Pass/Fail): "Item Analysis Protocol"

Each group submits a protocol containing:

  • A summary table with all calculated indicators (p/M, r, and alpha if item deleted).
  • A 3–4 sentence summary explaining which items are retained for the final version and the rationale behind these decisions.
  11. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Reliability of Psychological Measurements
Description

During the lecture, students deepen their understanding of psychological test reliability as the stability and precision of a measurement. Various methods for assessing reliability are analyzed (test-retest, parallel forms, split-half, and internal consistency), alongside their suitability for different types of tests.

Lecture Sub-topics:

1. The Concept of Reliability and CTT:

  • Re-evaluating X = T + E. Reliability as the ratio of true score variance to total observed variance.
  • Factors influencing reliability (test length, sample heterogeneity).

2. Methods for Assessing Reliability:

  • Stability (Test-retest Reliability): The time factor and the memory/practice effect.
  • Equivalence (Parallel Forms Reliability): When is it necessary to develop alternate versions?
  • Internal Consistency: Cronbach’s alpha (α) and McDonald’s omega (ω) – understanding why ω is becoming the preferred indicator in modern psychometrics.
  • Inter-rater Reliability: Essential for observational and projective methods (Cohen’s kappa).

3. Interpretation of Reliability Coefficients:

  • Threshold values in research (≥ 0.70) versus individual clinical diagnostics (≥ 0.90).
  • TARES standards for reporting reliability.
  12. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 2

Topics

Practical Session: Calculation of Reliability Indicators
Description

Objective: To practically calculate and interpret various reliability indicators for the student's mini-project.

Session Procedure:

1. Reliability Calculations in Software:

  • Students calculate internal consistency (α and ω) for both their original and adapted tests.
  • If data are available (or simulated), test-retest correlation is calculated.

2. SEM (Standard Error of Measurement) Workshop:

  • Using the obtained reliability coefficient and the standard deviation, students calculate the SEM for their test.
  • Task: Draft a sample statement for an individual report: "If Client X scores 45 points, within what interval does their true score lie?"

3. Comparative Analysis:

  • Groups compare: which version of the test (original or adapted) shows higher reliability? Why?

4. Report Preparation:

  • Record the reliability indicators for the final project, formatting them in a table and providing a structured description.

  13. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Test Validity
Description

During the lecture, students explore the modern concept of validity as a unified body of evidence supporting the appropriateness of test score interpretations. Evidence of construct validity (convergent and discriminant validity) and types of criterion validity (predictive, concurrent, incremental, and differential validity) are analyzed in detail. Students learn to evaluate evidence confirming that a test indeed measures the intended psychological construct and is capable of predicting real-world behavior or other clinical indicators.

Lecture Sub-topics:

1. Evolution of the Validity Concept:

  • From the "Three Pillars" (content, criterion, construct) to a unified validity framework.
  • The relationship between reliability and validity (reliability as a necessary but insufficient condition).

2. Construct Validity:

  • Convergent Validity: Correspondence with other instruments measuring the same construct.
  • Discriminant (Divergent) Validity: Low correlation with instruments measuring unrelated constructs.

3. Types of Criterion Validity:

  • Concurrent Validity: Relationship with a criterion measured at the same point in time.
  • Predictive Validity: The ability to forecast future behavior or outcomes.
  • Incremental Validity: Assessing whether the new test provides additional information beyond existing instruments.
  • Differential Validity: The ability to distinguish between different groups (e.g., a clinical group vs. a normative group).

4. Interpretation of Validity Coefficients:

  • Utilizing correlation matrices.
  • Limitations (e.g., the impact of criterion unreliability on validity coefficients).
  14. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 2

Topics

Practical Session: Analysis and Interpretation of Validity Evidence
Description

Objective: To master the methodology for calculating various types of validity and to perform an initial validity assessment for the student's mini-project.

Session Procedure:

Part 1: Working with the Instructor's Dataset

  • Calculation of Criterion Validity: Using data from a larger study, students conduct a correlation analysis between a new scale and several external criteria (e.g., academic achievement, clinical diagnosis, or behavioral indicators).
  • Testing Differential Validity: Using t-tests or ANOVA, students learn to determine whether the test significantly distinguishes between different groups (e.g., a control group vs. a clinical group).
  • Demonstration of Incremental Validity: A brief conceptual overview of hierarchical regression to see if the new scale improves predictive accuracy beyond existing variables.

Part 2: Working with Self-Collected Data

  • Testing Concurrent Validity: Students calculate the correlation in their own data between the original scale they developed and the adapted international test.
  • Data Visualization: Creation and interpretation of correlation scatterplots—is the relationship linear? Are there visible outliers?
  • Initial Construct Validity Analysis: Students compare their test with another variable included in the survey (e.g., age or gender) to test for discriminant validity (low correlation where theoretically no relationship is expected).

Students record the indicators that support the validity of their test, ensuring they are correctly formatted and described for the final project report.

  15. Lecture

Modality: On site
Location: Study room
Contact hours: 4

Topics

Modern Psychometrics, Factor Analysis, and Result Reporting
Description

In this final lecture of the course, students learn methods for determining a test’s internal structure and diagnostic value. An overview of factor analysis (EFA and CFA) is provided, explaining how latent variables structure observed data. Students are introduced to clinical psychometric indicators (sensitivity, specificity, and the ROC curve) and modern measurement theories, such as Item Response Theory (IRT) and the psychological network approach.

Lecture Sub-topics:

1. Factor Analysis: Structural Validity:

  • EFA (Exploratory Factor Analysis): How do items cluster? Interpretation of factor loadings.
  • CFA (Confirmatory Factor Analysis): Testing the theoretical model. Key fit indices (CFI, RMSEA).

2. Clinical Utility and Diagnostic Accuracy:

  • Sensitivity vs. Specificity: Balancing true positive and true negative rates.
  • ROC Curve: Determining the optimal cut-off point for diagnostic decisions.

3. Development of Norms:

  • Standardized scores (z, T, stens, percentiles) and their application.

4. Expanded Horizons: IRT and Network Psychometrics:

  • IRT (Item Response Theory): Item difficulty and discrimination as non-linear functions of latent ability.
  • Network Approach: Viewing psychological constructs as a system of interacting symptoms rather than a single common cause.

5. Preparing a Psychometric Report:

  • Report Structure: From construct definition to norms.
  • Data Visualization and Table Formatting: Following APA (American Psychological Association) style.
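The "optimal cut-off" from sub-topic 2 is often chosen by maximizing Youden's J = sensitivity + specificity − 1 across candidate cut-offs. A brute-force numpy sketch (illustrative only; in practice ROC analysis would be run in dedicated software):

```python
import numpy as np

def youden_cutoff(scores, is_clinical):
    """Scan every observed score as a cut-off ("score >= cut-off" flags a case) and
    return (cut_off, sensitivity, specificity) maximizing Youden's J."""
    scores = np.asarray(scores, dtype=float)
    y = np.asarray(is_clinical, dtype=bool)
    best = (None, 0.0, 0.0, -1.0)
    for cut in np.unique(scores):
        flagged = scores >= cut
        sensitivity = (flagged & y).sum() / y.sum()
        specificity = (~flagged & ~y).sum() / (~y).sum()
        j = sensitivity + specificity - 1
        if j > best[3]:
            best = (cut, sensitivity, specificity, j)
    return best[:3]
```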

  16. Class/Seminar

Modality: On site
Location: Computer room
Contact hours: 2

Topics

Practical Session: Factor Analysis and Course Conclusion
Description

Objective: To master the practical steps of conducting factor analysis and to prepare for the development of the final project report.

Session Procedure:

1. Working with Instructor's Data – EFA:

  • Students perform Exploratory Factor Analysis (EFA) on a large dataset using JASP or Jamovi.
  • Task: Determine the number of factors (using a Scree plot or Parallel Analysis) and interpret factor loadings. Identify "cross-loading" items.

2. Working with Instructor's Data – CFA:

  • Demonstration: Students attempt to confirm the previously identified factor structure.
  • Task: Interpret the primary model fit indices—does the model "fit" the data? (Evaluating CFI, TLI, and RMSEA).
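To demystify what the software reports, the two most-cited indices can be computed by hand from the chi-square output. A sketch using the standard formulas (RMSEA from the model chi-square; CFI from the model vs. baseline non-centrality), with hypothetical numbers rather than the instructor's dataset:

```python
from math import sqrt

def fit_indices(chi2_model, df_model, chi2_baseline, df_baseline, n):
    """RMSEA and CFI from model and baseline (independence-model) chi-squares.
    Common guidelines (Hu & Bentler): RMSEA <= .06, CFI >= .95."""
    rmsea = sqrt(max(chi2_model - df_model, 0) / (df_model * (n - 1)))
    d_model = max(chi2_model - df_model, 0)           # model non-centrality
    d_base = max(chi2_baseline - df_baseline, d_model)
    cfi = 1 - d_model / d_base if d_base > 0 else 1.0
    return rmsea, cfi

# Hypothetical CFA output: chi2 = 45.2 on 24 df, N = 300;
# baseline chi2 = 920.4 on 36 df
rmsea, cfi = fit_indices(45.2, 24, 920.4, 36, 300)
```

With these illustrative numbers both indices fall on the acceptable side of the conventional cut-offs, so the model would be judged to "fit" the data.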

3. Preparation for the Final Submission:

  • Students work in their groups to compile all results from previous sessions (Sessions 2–7) into a single, unified structure.
  • Consultations are provided regarding the formal requirements and standards for the final report.

4. Conclusion and Feedback:

  • A comprehensive summary of the course and information regarding the evaluation criteria for the final presentation and submission.
Total ECTS (Credit Points):
6.00
Contact hours:
48 Academic Hours
Final Examination:
Exam

Bibliography

Required Reading

1. Cohen, R. J., & Swerdlik, M. E. (2022). Psychological Testing and Assessment: An Introduction to Tests and Measurement (10th ed.). McGraw-Hill Education.

2. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing. American Educational Research Association. (Any acceptable edition.)

3. Furite, G., & Raščevska, M. (2018). Psiholoģiskais novērtējums: Teorija un prakse [Psychological assessment: Theory and practice]. LU Akadēmiskais apgāds.

4. International Test Commission. (2017). The ITC Guidelines for Translating and Adapting Tests (2nd ed.). [Available at: www.intestcom.org]. (Required source for Lecture 4 on test adaptation.)

Additional Reading

1. Navarro, D. J., & Foxcroft, D. R. (2022). learning statistics with jamovi: a tutorial for psychology students and other beginners.

2. Goss-Sampson, M. A. (2022). Statistical Analysis in JASP: A Guide for Students.

3. Kline, T. J. (2005). Psychological Testing: A Practical Approach to Design and Evaluation. Sage Publications.

4. Zumbo, B. D., & Chan, E. K. (Eds.). (2014). Validity and Validation in Social, Behavioral, and Health Sciences. Springer. (Useful for Lecture 7, to better understand convergent and discriminant validity.)

5. Epskamp, S., Borsboom, D., & Fried, E. I. (2018). Estimating psychological networks and their stability: A tutorial. Psychological Methods.

Other Information Sources

1. Rīgas Stradiņa universitāte. (2026). Psiholoģijas virziena metodiskie norādījumi kursa, bakalaura un maģistra darbu izstrādei [Methodological guidelines of the psychology study programme for developing term papers, bachelor's, and master's theses]. Rīga: Rīgas Stradiņa universitāte.