Quantitative Data and Data Collection Methods in Psychology
Study Course Implementer
Dzirciema Street 16, Riga, vppk@rsu.lv
About the Study Course
Objective
To provide students with theoretical knowledge of data structures and the fundamental principles of psychometrics, while developing practical skills in the selection of data collection methods, the scientific development and adaptation of psychological tests, and the empirical evaluation of their psychometric properties using modern statistical analysis software.
Preliminary Knowledge
Theoretical Knowledge in Psychology:
- Understanding of the historical development of psychology, its major theoretical schools, and core processes (cognitive processes, emotions), which is essential for the evidence-based definition and operationalization of measurable constructs.
- Knowledge of the fundamental principles of the philosophy of science and logic, enabling the critical analysis of scientific argumentation and the formulation of logical conclusions regarding the utility of measurement instruments.
Research Methodology and Statistics:
- Basic knowledge of mathematical statistics, including an understanding of data distributions, descriptive statistics (mean, standard deviation), and basic relationship analysis (correlation), which is a mandatory prerequisite for reliability and validity calculations.
- Understanding of research designs and data collection methods, as well as knowledge of research ethics standards and the legal aspects of personal data protection (GDPR).
Core Academic Skills:
- Information Literacy: The ability to independently search for, select, and critically evaluate scientific literature and international publications on psychological assessment instruments in specialized databases.
- Foreign Language Proficiency (English): The ability to read and analyze scientific texts and test manuals in English, ensuring a high-quality instrument adaptation process.
- Digital Skills: Proficiency in office software (word processors, spreadsheets) and a readiness to master specialized statistical analysis software (e.g., JASP or Jamovi).
Learning Outcomes
Knowledge
1. Identifies data types and measurement scales (nominal, ordinal, interval, and ratio), evaluating their applicability and constraints in statistical data processing.
Assessment: Knowledge Test: Data Types, Scales, Ethics, and Test Types • Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests
2. Describes the fundamental principles of research ethics, including informed consent, data confidentiality, and GDPR compliance within psychological research.
Assessment: Knowledge Test: Data Types, Scales, Ethics, and Test Types
3. Classifies psychological tests according to their type, purpose, and construction specifics (e.g., personality, ability, or achievement tests).
Assessment: Knowledge Test: Data Types, Scales, Ethics, and Test Types
4. Defines the stages of psychological test development and adaptation, explaining the methodological significance of each stage in the instrument construction process.
Assessment: Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests
5. Identifies and describes the stages of psychological test construction, adaptation, and standardization, adhering to international methodological standards (e.g., the ITC guidelines).
Assessment: Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests • Empirical Pilot Protocol
6. Explains psychometric quality criteria by interpreting item analysis metrics as well as various types of reliability and validity evidence.
Assessment: Item Analysis Protocol • Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests • Scale Analysis Protocol
7. Recognizes and characterizes various statistical methods (factor analysis, correlation analysis, comparison of indicators) and their suitability for addressing specific psychometric questions.
Assessment: Scale Analysis Protocol • Item Analysis Protocol • Empirical Pilot Protocol • Overview of Existing Instruments
Skills
1. Performs the adaptation of an instrument developed in a foreign language, ensuring linguistic, cultural, and conceptual equivalence.
Assessment: Empirical Pilot Protocol
2. Utilizes statistical software (JASP/Jamovi) for empirical data processing, calculating item parameters, test reliability coefficients, and validity indicators.
Assessment: Scale Analysis Protocol
3. Conducts data analysis using factor analysis (EFA and CFA) to evaluate the internal structure of the instrument.
Assessment: Scale Analysis Protocol
4. Demonstrates the ability to independently operationalize a psychological construct and formulate psychometrically sound test items in accordance with the chosen strategy.
Assessment: Initial Item Pool and Content Validity • New Test "Passport" (Vision)
Competences
1. Critically evaluates the psychometric quality of measurement instruments based on empirical evidence of reliability and validity.
Assessment: Initial Item Pool and Content Validity • Overview of Existing Instruments • Scale Analysis Protocol • Final Presentation on Test Construction or Adaptation
2. Provides a reasoned argument for the chosen methodology and for the significance of the Standard Error of Measurement (SEM) and confidence intervals in the interpretation of individual results.
Assessment: Scale Analysis Protocol • Item Analysis Protocol
3. Develops and formats a scientific psychometric report or presentation in accordance with APA 7 standards, adhering to the principles of open science and transparent data management.
Assessment: Final Presentation on Test Construction or Adaptation
Assessment
Individual work (each deliverable is graded Pass/Fail and carries no percentage weight in the final grade)

1. Overview of Existing Instruments
Format: Written report (a table or structured overview is recommended). The group must prepare a review of at least two international instruments, including:

2. New Test "Passport" (Vision)
Objective: Based on the revision of existing instruments, each group must develop and provide a written justification for the conceptual framework of their proposed test. This deliverable serves as a scientific argument for the necessity of a new instrument and demonstrates how it addresses gaps in the current landscape.
Deliverable content and structure:
1. Technical Parameters
2. Argumentation and Scientific Rationale
Assessment criteria: The deliverable is graded "Pass" if each technical parameter is logically linked to the conclusions of the previous revision and the rationale rests on psychometric arguments rather than personal opinions.

3. Initial Item Pool and Content Validity
Objective: To develop the initial pool of test items (questions) and perform a systematic selection process based on expert evaluations of content validity, ensuring that the final pilot version is scientifically sound.
Requirements for "Pass":

4. Empirical Pilot Protocol
Objective: To develop a methodologically sound and technically functional research platform for the empirical testing of both the new and the adapted instruments, ensuring ethical standards and high-quality data collection.
Technical requirements (Pass/Fail):
Examination

1. Knowledge Test: Data Types, Scales, Ethics, and Test Types (10% of the total grade; 10 points)
Objective: To assess theoretical knowledge and the ability to identify variable types, measurement scales, test classifications, and basic principles of research ethics.
Requirements for "Pass":

2. Knowledge Test: Development, Adaptation, and Quality Assessment of Psychological Tests (10% of the total grade; 10 points)
Objective: To assess the understanding of test construction stages, the adaptation process, basic item analysis, and types of reliability and validity.
Requirements for "Pass":

3. Item Analysis Protocol (20% of the total grade; 10 points)
Objective: To conduct a statistical analysis of the pilot data and, based on item metrics, provide a reasoned justification for the final item selection of the instrument.
Assessment levels (brief):

4. Scale Analysis Protocol (20% of the total grade; 10 points)
Objective: To conduct a scale-level analysis of the newly developed and adapted instruments by evaluating their reliability, validity, and the alignment of the data distribution with psychometric requirements.
Technical requirements (mandatory components):
Evaluation criteria:
- Grades 10-9: Comprehensive analysis including reliability (alpha, omega), descriptive statistics (M, SD, Sk/Ku), and validity evidence; flawless APA 7 visualization and professional interpretation.
- Grades 8-7: Methodologically sound data processing; clear presentation of results with logical but concise interpretation; minor formatting issues.
- Grades 6-4: Basic scale metrics provided; the analysis is mechanical and lacks depth in the validity assessment or distribution description; minimal compliance with academic standards.

5. Final Presentation on Test Construction or Adaptation (40% of the total grade; 10 points)
Objective: To demonstrate the full cycle of test development or adaptation, justifying the instrument's quality through a methodologically sound research design and empirical data.
Presentation structure:
Assessment summary:
Study Course Theme Plan
- Lecture (on site, Study room, 4 contact hours)
Topic: Quantitative Data and Data Collection Methods in Psychology
Description: During the lecture, students learn the hierarchy of quantitative research data (from primary to tertiary data) and understand its application in psychological research. Measurement scales and their impact on subsequent statistical analysis are analyzed in detail. Particular attention is devoted to the diversity of data collection methods and modern digital opportunities, while consolidating the understanding of personal data protection (GDPR) and research ethics standards.
Lecture sub-topics:
1. Data Taxonomy and Sources
2. Levels of Measurement and Scale Types
3. Data Collection Methods and Tools
4. Data Protection and Research Ethics
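The practical consequence of the measurement levels discussed above is that each level licenses different descriptive statistics. The following sketch is purely illustrative (the course itself uses JASP/Jamovi, not Python); all variable names and values are hypothetical.

```python
# Illustrative sketch: which descriptive statistics are admissible at each
# level of measurement. All data are hypothetical.
import statistics

# Nominal: unordered categories -> only counts and the mode are meaningful.
eye_color = ["blue", "brown", "brown", "green", "brown"]
mode_color = statistics.mode(eye_color)  # most frequent category

# Ordinal: ranked categories -> the median is meaningful, the mean is not.
likert = [1, 2, 2, 3, 5]  # e.g., 1 = "strongly disagree" ... 5 = "strongly agree"
median_likert = statistics.median(likert)

# Interval/ratio: equal units -> mean and standard deviation are meaningful.
reaction_ms = [412.0, 388.0, 455.0, 401.0]
mean_rt = statistics.mean(reaction_ms)
sd_rt = statistics.stdev(reaction_ms)

print(mode_color, median_likert, round(mean_rt, 1), round(sd_rt, 1))
```

The same logic drives the later seminars: deciding a variable's level of measurement comes before choosing any statistic for it.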
- Class/Seminar (on site, Computer room, 4 contact hours)
Topic: Practical Assignment: Data Classification and Open Data Resources
Description:
Objective: To consolidate the ability to classify data and to locate existing data resources.
1. Article Analysis (Group Work): Each group receives one short, high-quality research article (or abstract). Task: complete a structured worksheet.
2. Secondary Data "Scavenger Hunt": Using their computers, students must navigate to an open-access database (e.g., the European Social Survey or the Latvian Open Data Portal). Task: locate one dataset related to psychology or the social sciences and describe the measurement scales predominantly used in that dataset.
- Lecture (on site, Study room, 4 contact hours)
Topic: Psychological Tests and Classical Test Theory
Description: During the lecture, students explore the nature of a psychological test as a standardized measurement instrument, distinguishing scientific approaches from pseudo-psychological methods. A comprehensive overview of test classification by purpose and format is provided, alongside an analysis of the core characteristics of a sound psychometric test: objectivity, standardization, and utility. Students are introduced to the historical development of psychometrics and master the fundamental axiom of Classical Test Theory (CTT): X = T + E. Particular attention is devoted to types of measurement error (systematic and random), fostering an understanding of how environmental, respondent, and instrumental factors influence the precision of observed scores. These insights serve as a foundation for the practical session, where students analyze professional test batteries and perform a measurement error "audit."
- Class/Seminar (on site, Computer room, 4 contact hours)
Topic: Practical Session: Revision of Existing Instruments and Vision for a New Test
Description:
Objective: To conduct an in-depth study of existing tests and, based on that analysis, develop a well-reasoned structure for an original test.
Group work procedure and tasks:
1. Targeted Search and Selection
2. Structural and Psychometric Revision
- Lecture (on site, Study room, 4 contact hours)
Topic: Test Construction and Item Development
Description: During the lecture, students explore the scientific cycle of psychometric test development, from the initial theoretical concept to the first draft of test items. The three primary approaches to test construction (deductive, inductive, and integrative) are analyzed, emphasizing their differences in achieving research objectives. Students learn to perform construct operationalization, select the most appropriate response scale formats, and identify common types of response bias. The lecture concludes with an examination of content and face validity as the primary quality control mechanisms in the initial stages of test development.
Lecture sub-topics:
1. Strategies for Test Construction
2. From Construct to Measurement
3. Item Formulation and Scales
4. Response Bias
5. Initial Validation

- Class/Seminar (on site, Computer room, 4 contact hours)
Topic: Practical Session: Item Development and Initial Validation
Description:
Objective: To practically develop the first set of test items and perform an initial quality assessment.
Session procedure:
1. Group Work – Item Formulation
2. Content Validity Simulation
3. Face Validity Check
4. Reflection and Refinement
Pass/Fail submission: each small group submits a "Test Vision & Initial Item Set" report, which includes:
- Lecture (on site, Study room, 4 contact hours)
Topic: Adaptation of Psychological Tests and ITC Guidelines
Description: During the lecture, students acquire the methodological framework for adapting psychological tests developed in foreign languages to a different linguistic and cultural environment. Primary emphasis is placed on the International Test Commission (ITC) guidelines and the application of TARES standards to ensure measurement objectivity. Students are introduced to the various types of equivalence and their significance, as well as learning to identify specific biases that may arise as a result of inaccurate adaptation.
Lecture sub-topics:
1. Introduction to Adaptation
2. Stages of Adaptation According to the ITC Guidelines
3. Equivalence and Types of Bias
4. Quality Frameworks
5. Data Collection Planning and Ethics

- Class/Seminar (on site, Computer room, 4 contact hours)
Topic: Practical Session: Instrument Adaptation and Empirical Approbation Planning
Description:
Objective: To perform the adaptation of a selected international instrument and to prepare for the empirical testing of both instruments (the original student-developed test and the adapted one).
Session procedure:
1. Instrument Selection and Rationale
2. Translation and Adaptation Workshop
3. Development of the Research Instrument (Questionnaire)
4. Data Collection Plan
- Lecture (on site, Study room, 4 contact hours)
Topic: Empirical Item Analysis
Description: During the lecture, students learn methods for the empirical quality assessment of psychological test items. Item difficulty (response index) and discrimination indicators are analyzed in detail, teaching students how to interpret the distribution of responses and identify "non-performing" items. The lecture concludes with an introduction to test reliability using internal consistency measures and demonstrates the link between individual item quality and overall scale reliability.
Lecture sub-topics:
1. Data Cleaning and Preparation
2. Item Response and Distribution Analysis
3. Item Discrimination Indicators
4. Introduction to Scale Reliability
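The two core indices of this lecture, item difficulty and item discrimination, can be sketched in a few lines. The mini-dataset below is hypothetical and the code is only an illustration of the formulas; in the course, these statistics come from JASP/Jamovi.

```python
# Sketch: item difficulty (proportion correct) and discrimination
# (corrected item-total correlation) on a hypothetical binary dataset.
import statistics

# Rows = respondents, columns = items (1 = correct, 0 = incorrect).
data = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
]

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

n_items = len(data[0])
for i in range(n_items):
    item = [row[i] for row in data]
    difficulty = sum(item) / len(item)          # p-value of the item
    rest = [sum(row) - row[i] for row in data]  # total score minus this item
    discrimination = pearson(item, rest)        # corrected item-total r
    print(f"Item {i + 1}: p = {difficulty:.2f}, r_it = {discrimination:.2f}")
```

A "non-performing" item in this framework is one with an extreme p-value (nearly everyone passes or fails it) or a discrimination near or below zero.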
- Class/Seminar (on site, Computer room, 4 contact hours)
Topic: Practical Session: Empirical Item Analysis and Test Optimization
Description:
Objective: To perform an in-depth empirical analysis of the collected (or simulated) data and to make reasoned decisions regarding test shortening and refinement.
Session procedure:
1. Working with Software (JASP/Jamovi)
2. Item "Audit"
3. Test Optimization
4. Conclusions and Discussion
Submission (Pass/Fail): each group submits an "Item Analysis Protocol" containing:
- Lecture (on site, Study room, 4 contact hours)
Topic: Reliability of Psychological Measurements
Description: During the lecture, students deepen their understanding of psychological test reliability as the stability and precision of a measurement. Various methods for assessing reliability are analyzed (test-retest, parallel forms, split-half, and internal consistency), alongside their suitability for different types of tests.
Lecture sub-topics:
1. The Concept of Reliability and CTT
2. Methods for Assessing Reliability
3. Interpretation of Reliability Coefficients
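Internal consistency is usually reported as Cronbach's alpha, and the formula becomes concrete when computed by hand once. The sketch below uses a small hypothetical Likert dataset; in the course itself the coefficient is obtained in JASP/Jamovi.

```python
# Minimal by-hand Cronbach's alpha on hypothetical 1-5 Likert data:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Rows = respondents, columns = items (hypothetical responses).
data = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]

k = len(data[0])
item_vars = [variance([row[i] for row in data]) for i in range(k)]
total_var = variance([sum(row) for row in data])
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
```

The structure of the formula also explains the lecture's link between item quality and scale reliability: items that do not covary with the rest inflate the summed item variances relative to the total-score variance and pull alpha down.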
- Class/Seminar (on site, Computer room, 4 contact hours)
Topic: Practical Session: Calculation of Reliability Indicators
Description:
Objective: To practically calculate and interpret various reliability indicators for the student's mini-project.
Session procedure:
1. Reliability Calculations in Software
2. SEM (Standard Error of Measurement) Workshop
3. Comparative Analysis
4. Report Preparation
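For the SEM workshop, the key relationship is SEM = SD × sqrt(1 − reliability), which turns a reliability coefficient into a confidence interval around an individual observed score. The numbers below are hypothetical and serve only to make the arithmetic visible.

```python
# Standard Error of Measurement and a 95% confidence interval around one
# observed score (all values hypothetical).
import math

sd = 15.0           # standard deviation of test scores in the norm group
reliability = 0.91  # e.g., Cronbach's alpha from the scale analysis
observed = 108.0    # one respondent's observed score

sem = sd * math.sqrt(1 - reliability)               # 15 * 0.3 = 4.5
ci_low = observed - 1.96 * sem
ci_high = observed + 1.96 * sem
print(round(sem, 2), round(ci_low, 1), round(ci_high, 1))
```

The width of this interval is what justifies the course's competence requirement to argue for SEM when interpreting individual results: even a reliable test yields a band of plausible true scores, not a point value.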
- Lecture (on site, Study room, 4 contact hours)
Topic: Test Validity
Description: During the lecture, students explore the modern concept of validity as a unified body of evidence supporting the appropriateness of test score interpretations. Evidence of construct validity (convergent and discriminant validity) and types of criterion validity (predictive, concurrent, incremental, and differential validity) are analyzed in detail. Students learn to evaluate evidence confirming that a test indeed measures the intended psychological construct and is capable of predicting real-world behavior or other clinical indicators.
Lecture sub-topics:
1. Evolution of the Validity Concept
2. Construct Validity
3. Types of Criterion Validity
4. Interpretation of Validity Coefficients
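Convergent and discriminant evidence boil down to a pattern of correlations, which a short sketch can show. All data below are hypothetical, and the computation is illustrative rather than the course's JASP/Jamovi workflow: a new scale should correlate strongly with an established measure of the same construct and only weakly with a theoretically unrelated variable.

```python
# Sketch: convergent vs. discriminant validity as a correlation pattern
# (hypothetical data).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

new_scale   = [12, 18, 15, 22, 9, 20, 14, 17]   # scores on the new test
established = [14, 19, 13, 24, 10, 21, 15, 16]  # same construct, older test
unrelated   = [41, 42, 39, 38, 39, 41, 42, 38]  # theoretically unrelated variable

convergent_r = pearson(new_scale, established)  # expected to be high
discriminant_r = pearson(new_scale, unrelated)  # expected to be near zero
print(round(convergent_r, 2), round(discriminant_r, 2))
```

The interpretive logic is asymmetric: a high convergent correlation supports the intended construct only if the discriminant correlation stays low, since both are needed to rule out that the test measures something else entirely.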
- Class/Seminar (on site, Computer room, 4 contact hours)
Topic: Practical Session: Analysis and Interpretation of Validity Evidence
Description:
Objective: To master the methodology for calculating various types of validity and to perform an initial validity assessment for the student's mini-project.
Session procedure:
Part 1: Working with the Instructor's Dataset
Part 2: Working with Self-Collected Data
Students record the indicators that support the validity of their test, ensuring they are correctly formatted and described for the final project report.
- Lecture (on site, Study room, 4 contact hours)
Topic: Modern Psychometrics, Factor Analysis, and Result Reporting
Description: In this final lecture of the psychometrics course, students learn methods for determining a test's internal structure and diagnostic value. An overview of factor analysis (EFA and CFA) is provided, explaining how latent variables structure observed data. Students are introduced to clinical psychometric indicators (sensitivity, specificity, and the ROC curve) and modern measurement theories, such as Item Response Theory (IRT) and the psychological network approach.
Lecture sub-topics:
1. Factor Analysis: Structural Validity
2. Clinical Utility and Diagnostic Accuracy
3. Development of Norms
4. Expanded Horizons: IRT and Network Psychometrics
5. Preparing a Psychometric Report
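The first decision in an EFA is how many factors the item correlations support, commonly screened with the Kaiser criterion (retain factors whose eigenvalue exceeds 1). The sketch below simulates six items loading on one latent variable and extracts the leading eigenvalue of their correlation matrix by power iteration; it is a pure-Python illustration on hypothetical simulated data, not the JASP/Jamovi procedure used in the course.

```python
# Sketch: leading eigenvalue of an item correlation matrix as a screening
# step for the number of factors (simulated, hypothetical data).
import random

random.seed(7)
n, k = 400, 6
latent = [random.gauss(0, 1) for _ in range(n)]
# Six items, each loading 0.7 on the same latent factor plus unique noise.
items = [[0.7 * f + random.gauss(0, 0.7) for f in latent] for _ in range(k)]

def pearson(x, y):
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

corr = [[pearson(items[i], items[j]) for j in range(k)] for i in range(k)]

# Power iteration: converges to the dominant eigenvector of corr.
v = [1.0] * k
for _ in range(100):
    w = [sum(corr[i][j] * v[j] for j in range(k)) for i in range(k)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
leading_eigenvalue = sum(v[i] * sum(corr[i][j] * v[j] for j in range(k))
                         for i in range(k))
print(round(leading_eigenvalue, 2))
```

With every item loading on one factor, a single eigenvalue far above 1 emerges while the rest stay below it, which is the eigenvalue signature of a one-factor structure.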
- Class/Seminar (on site, Computer room, 4 contact hours)
Topic: Practical Session: Factor Analysis and Course Conclusion
Description:
Objective: To master the practical steps of conducting factor analysis and to prepare for the development of the final project report.
Session procedure:
1. Working with Instructor's Data – EFA
2. Working with Instructor's Data – CFA
3. Preparation for the Final Submission
4. Conclusion and Feedback
- Lecture (on site, Computer room, 4 contact hours)
Topic: Quantitative Data and Data Collection Methods in Psychology
Description: During the lecture, students learn the hierarchy of quantitative research data (from primary to tertiary data) and understand its application in psychological research. Measurement scales and their impact on subsequent statistical analysis are analyzed in detail. Particular attention is devoted to the diversity of data collection methods and modern digital opportunities, while consolidating the understanding of personal data protection (GDPR) and research ethics standards.
Lecture sub-topics:
1. Data Taxonomy and Sources
2. Levels of Measurement and Scale Types
3. Data Collection Methods and Tools
4. Data Protection and Research Ethics

- Class/Seminar (on site, Computer room, 2 contact hours)
Topic: Practical Assignment: Data Classification and Open Data Resources
Description:
Objective: To consolidate the ability to classify data and to locate existing data resources.
1. Article Analysis (Group Work): Each group receives one short, high-quality research article (or abstract). Task: complete a structured worksheet.
2. Secondary Data "Scavenger Hunt": Using their computers, students must navigate to an open-access database (e.g., the European Social Survey or the Latvian Open Data Portal). Task: locate one dataset related to psychology or the social sciences and describe the measurement scales predominantly used in that dataset.

- Lecture (on site, Study room, 4 contact hours)
Topic: Psychological Tests and Classical Test Theory
Description: During the lecture, students explore the nature of a psychological test as a standardized measurement instrument, distinguishing scientific approaches from pseudo-psychological methods. A comprehensive overview of test classification by purpose and format is provided, alongside an analysis of the core characteristics of a sound psychometric test: objectivity, standardization, and utility. Students are introduced to the historical development of psychometrics and master the fundamental axiom of Classical Test Theory (CTT): X = T + E. Particular attention is devoted to types of measurement error (systematic and random), fostering an understanding of how environmental, respondent, and instrumental factors influence the precision of observed scores. These insights serve as a foundation for the practical session, where students analyze professional test batteries and perform a measurement error "audit."

- Class/Seminar (on site, Computer room, 2 contact hours)
Topic: Practical Session: Revision of Existing Instruments and Vision for a New Test
Description:
Objective: To conduct an in-depth study of existing tests and, based on that analysis, develop a well-reasoned structure for an original test.
Group work procedure and tasks:
1. Targeted Search and Selection
2. Structural and Psychometric Revision
- Lecture (on site, Study room, 4 contact hours)
Topic: Test Construction and Item Development
Description: During the lecture, students explore the scientific cycle of psychometric test development, from the initial theoretical concept to the first draft of test items. The three primary approaches to test construction (deductive, inductive, and integrative) are analyzed, emphasizing their differences in achieving research objectives. Students learn to perform construct operationalization, select the most appropriate response scale formats, and identify common types of response bias. The lecture concludes with an examination of content and face validity as the primary quality control mechanisms in the initial stages of test development.
Lecture sub-topics:
1. Strategies for Test Construction
2. From Construct to Measurement
3. Item Formulation and Scales
4. Response Bias
5. Initial Validation

- Class/Seminar (on site, Computer room, 2 contact hours)
Topic: Practical Session: Item Development and Initial Validation
Description:
Objective: To practically develop the first set of test items and perform an initial quality assessment.
Session procedure:
1. Group Work – Item Formulation
2. Content Validity Simulation
3. Face Validity Check
4. Reflection and Refinement
Pass/Fail submission: each small group submits a "Test Vision & Initial Item Set" report, which includes:

- Lecture (on site, Study room, 4 contact hours)
Topic: Adaptation of Psychological Tests and ITC Guidelines
Description: During the lecture, students acquire the methodological framework for adapting psychological tests developed in foreign languages to a different linguistic and cultural environment. Primary emphasis is placed on the International Test Commission (ITC) guidelines and the application of TARES standards to ensure measurement objectivity. Students are introduced to the various types of equivalence and their significance, as well as learning to identify specific biases that may arise as a result of inaccurate adaptation.
Lecture sub-topics:
1. Introduction to Adaptation
2. Stages of Adaptation According to the ITC Guidelines
3. Equivalence and Types of Bias
4. Quality Frameworks
5. Data Collection Planning and Ethics

- Class/Seminar (on site, Computer room, 2 contact hours)
Topic: Practical Session: Instrument Adaptation and Empirical Approbation Planning
Description:
Objective: To perform the adaptation of a selected international instrument and to prepare for the empirical testing of both instruments (the original student-developed test and the adapted one).
Session procedure:
1. Instrument Selection and Rationale
2. Translation and Adaptation Workshop
3. Development of the Research Instrument (Questionnaire)
4. Data Collection Plan
- Lecture (on site, Study room, 4 contact hours)
Topic: Empirical Item Analysis
Description: During the lecture, students learn methods for the empirical quality assessment of psychological test items. Item difficulty (response index) and discrimination indicators are analyzed in detail, teaching students how to interpret the distribution of responses and identify "non-performing" items. The lecture concludes with an introduction to test reliability using internal consistency measures and demonstrates the link between individual item quality and overall scale reliability.
Lecture sub-topics:
1. Data Cleaning and Preparation
2. Item Response and Distribution Analysis
3. Item Discrimination Indicators
4. Introduction to Scale Reliability

- Class/Seminar (on site, Computer room, 2 contact hours)
Topic: Practical Session: Empirical Item Analysis and Test Optimization
Description:
Objective: To perform an in-depth empirical analysis of the collected (or simulated) data and to make reasoned decisions regarding test shortening and refinement.
Session procedure:
1. Working with Software (JASP/Jamovi)
2. Item "Audit"
3. Test Optimization
4. Conclusions and Discussion
Submission (Pass/Fail): each group submits an "Item Analysis Protocol" containing:

- Lecture (on site, Study room, 4 contact hours)
Topic: Reliability of Psychological Measurements
Description: During the lecture, students deepen their understanding of psychological test reliability as the stability and precision of a measurement. Various methods for assessing reliability are analyzed (test-retest, parallel forms, split-half, and internal consistency), alongside their suitability for different types of tests.
Lecture sub-topics:
1. The Concept of Reliability and CTT
2. Methods for Assessing Reliability
3. Interpretation of Reliability Coefficients

- Class/Seminar (on site, Computer room, 2 contact hours)
Topic: Practical Session: Calculation of Reliability Indicators
Description:
Objective: To practically calculate and interpret various reliability indicators for the student's mini-project.
Session procedure:
1. Reliability Calculations in Software
2. SEM (Standard Error of Measurement) Workshop
3. Comparative Analysis
4. Report Preparation
- Lecture (on site, Study room, 4 contact hours)
Topic: Test Validity
Description: During the lecture, students explore the modern concept of validity as a unified body of evidence supporting the appropriateness of test score interpretations. Evidence of construct validity (convergent and discriminant validity) and types of criterion validity (predictive, concurrent, incremental, and differential validity) are analyzed in detail. Students learn to evaluate evidence confirming that a test indeed measures the intended psychological construct and is capable of predicting real-world behavior or other clinical indicators.
Lecture sub-topics:
1. Evolution of the Validity Concept
2. Construct Validity
3. Types of Criterion Validity
4. Interpretation of Validity Coefficients

- Class/Seminar (on site, Computer room, 2 contact hours)
Topic: Practical Session: Analysis and Interpretation of Validity Evidence
Description:
Objective: To master the methodology for calculating various types of validity and to perform an initial validity assessment for the student's mini-project.
Session procedure:
Part 1: Working with the Instructor's Dataset
Part 2: Working with Self-Collected Data
Students record the indicators that support the validity of their test, ensuring they are correctly formatted and described for the final project report.

- Lecture (on site, Study room, 4 contact hours)
Topic: Modern Psychometrics, Factor Analysis, and Result Reporting
Description: In this final lecture of the psychometrics course, students learn methods for determining a test's internal structure and diagnostic value. An overview of factor analysis (EFA and CFA) is provided, explaining how latent variables structure observed data. Students are introduced to clinical psychometric indicators (sensitivity, specificity, and the ROC curve) and modern measurement theories, such as Item Response Theory (IRT) and the psychological network approach.
Lecture sub-topics:
1. Factor Analysis: Structural Validity
2. Clinical Utility and Diagnostic Accuracy
3. Development of Norms
4. Expanded Horizons: IRT and Network Psychometrics
5. Preparing a Psychometric Report

- Class/Seminar (on site, Computer room, 2 contact hours)
Topic: Practical Session: Factor Analysis and Course Conclusion
Description:
Objective: To master the practical steps of conducting factor analysis and to prepare for the development of the final project report.
Session procedure:
1. Working with Instructor's Data – EFA
2. Working with Instructor's Data – CFA
3. Preparation for the Final Submission
4. Conclusion and Feedback
Bibliography
Required Reading
Cohen, R. J., & Swerdlik, M. E. (2022). Psychological Testing and Assessment: An Introduction to Tests and Measurement (10th ed.). McGraw-Hill Education.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing. American Educational Research Association. (Any acceptable edition.)
Furite, G., & Raščevska, M. (2018). Psiholoģiskais novērtējums: Teorija un prakse. LU Akadēmiskais apgāds.
International Test Commission. (2017). The ITC Guidelines for Translating and Adapting Tests (Second edition). [Available at: www.intestcom.org]. (Required source for Lecture 4 on adaptation.)
Additional Reading
Navarro, D. J., & Foxcroft, D. R. (2022). learning statistics with jamovi: a tutorial for psychology students and other beginners.
Kline, T. J. (2005). Psychological Testing: A Practical Approach to Design and Evaluation. Sage Publications.
Zumbo, B. D., & Chan, E. K. (Eds.). (2014). Validity and Validation in Social, Behavioral, and Health Sciences. Springer. (Useful for Lecture 7 to better understand convergent and discriminant validity.)
Epskamp, S., Borsboom, D., & Fried, E. I. (2018). Estimating psychological networks and their stability: A tutorial. Psychological Methods.
Other Information Sources
Rīgas Stradiņa universitāte. (2026). Psiholoģijas virziena metodiskie norādījumi kursa, bakalaura un maģistra darbu izstrādei [Methodological guidelines of the psychology study direction for the development of term papers, bachelor's theses, and master's theses]. Rīga: Rīgas Stradiņa universitāte.