Novel translation studies
Translation and validation of psychological instruments is a 'cottage industry' (to borrow the words of Prof Jimmy, a psychometrics professor based in Hong Kong). For publications in good journals, the aspect of novelty should go beyond the language in which the instrument is validated. For example, Europe's Journal of Psychology states that "(p)urely psychometric studies focused on adapting or validating a scale are not typically accepted however unless they help to refine a concept or propose a methodological innovation." The way I understand it is that there needs to be a novel and significant research problem that the study addresses.
So, how do we identify a novel research problem in psychometric studies? One needs to be knowledgeable in psychometrics to know the breadth and depth of instrument validation. With this knowledge, one can see the limits of the validity evidence available for a given instrument.
Through a proper literature review, we can discover what kind of validity evidence is missing. For example, a newly developed measure, e.g. the Fear of COVID-19 (FoC) scale, was developed in the English language and validated using a confirmatory factor analysis (CFA) approach. The paper was published in April 2020. Being the eager beaver that you are, perhaps you're already planning to translate this FoC into the Malay language. The question is, how would you add something new to the instrument other than offering it in a new language? Here are some suggestions that you could take up on top of the translation itself.
1. Validate across different groups using multigroup CFA. For example, test measurement invariance between respondents who answered via Google Forms and those who answered via Telegram (a minimal sketch of the first step is given after this list).
2. Do a Rasch analysis, which is based on a different measurement theory from the classical test theory used in the original study. This analysis offers a lot of additional validation evidence, such as person and item reliability, item targeting, differential item functioning, and response category functioning, to name a few.
3. Extend the validity evidence. If the instrument was originally shown to have convergent validity, then consider other types, like divergent (discriminant) validity, concurrent criterion-related validity, and the known-groups evidence afforded by the theory behind the measure (e.g. older people are expected to be more fearful of COVID-19 than younger people, given the more liberal risk-taking behaviour of the latter group; see the known-groups sketch after this list).
4. Provide normative data for different sub-populations (e.g. males vs females, different age groups). Normative data are very important for interpreting an individual's standing relative to their reference group (a norm-table sketch follows this list).
5. If you're interested in using the instrument to screen people for counselling services, then perhaps you can establish a cut-off score based on a criterion measure (e.g. an FoC score above 10 corresponds to a sub-clinical anxiety score on an already established measure) or on percentile scores (e.g. offer counselling to people in the top 10% of the FoC score distribution; see the cut-off sketch after this list).
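To make point 1 a bit more concrete, here is a minimal sketch of the very first, configural-style step: fitting the same one-factor CFA in each respondent group and eyeballing the fit indices. It assumes hypothetical item columns (foc1 to foc7), a hypothetical 'platform' grouping column, and the semopy package for CFA in Python; full metric and scalar invariance testing would add cross-group equality constraints on loadings and intercepts, which dedicated multigroup SEM routines handle more directly.

```python
# Sketch only: fit the same one-factor CFA separately in each group as a
# configural-style baseline before formal invariance testing.
# Column names (foc1..foc7, platform) are hypothetical placeholders.
import pandas as pd
import semopy  # assumed SEM package; lavaan-style model syntax

MODEL_DESC = "FoC =~ foc1 + foc2 + foc3 + foc4 + foc5 + foc6 + foc7"

def fit_per_group(data: pd.DataFrame, group_col: str = "platform") -> None:
    """Fit the CFA in each group and print the fit indices side by side."""
    for group, subset in data.groupby(group_col):
        model = semopy.Model(MODEL_DESC)
        model.fit(subset)
        fit = semopy.calc_stats(model)  # CFI, RMSEA, chi2, etc.
        print(f"--- {group_col} = {group} (n = {len(subset)}) ---")
        print(fit.T)

# Usage (assuming responses.csv holds the item scores plus the grouping column):
# fit_per_group(pd.read_csv("responses.csv"))
```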
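For point 3, a known-groups comparison is one of the simplest theory-driven checks you can add. Here is a minimal sketch, assuming a hypothetical data frame with a total score column 'foc_total' and an 'age' column, and an arbitrary age split at 40 purely for illustration.

```python
# Sketch: known-groups validity check -- do older respondents score higher
# on the translated FoC than younger respondents, as the theory predicts?
# Column names (foc_total, age) and the age split at 40 are hypothetical.
import pandas as pd
from scipy import stats

def known_groups_check(data: pd.DataFrame, age_cut: int = 40) -> dict:
    older = data.loc[data["age"] >= age_cut, "foc_total"]
    younger = data.loc[data["age"] < age_cut, "foc_total"]
    res = stats.ttest_ind(older, younger, equal_var=False)  # Welch's t-test
    return {"mean_older": older.mean(),
            "mean_younger": younger.mean(),
            "t": res.statistic,
            "p_two_sided": res.pvalue}

# Usage:
# print(known_groups_check(pd.read_csv("responses.csv")))
```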
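For point 4, the norm tables are straightforward to tabulate once the data are in. A minimal sketch, again with hypothetical column names (foc_total, sex, age) and arbitrary age bands:

```python
# Sketch: normative table of the total FoC score by sex and age group.
# Column names and age bands are hypothetical placeholders.
import pandas as pd

def norm_table(data: pd.DataFrame) -> pd.DataFrame:
    data = data.copy()
    data["age_group"] = pd.cut(data["age"], bins=[0, 29, 49, 120],
                               labels=["<30", "30-49", "50+"])
    return (data.groupby(["sex", "age_group"], observed=True)["foc_total"]
                .describe(percentiles=[0.25, 0.5, 0.75, 0.9])
                .round(2))

# Usage:
# print(norm_table(pd.read_csv("responses.csv")))
```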
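And for point 5, the percentile-based option is essentially a one-liner once you have the score distribution. A sketch with the same hypothetical 'foc_total' column; a criterion-based cut-off would instead be anchored to scores on an established anxiety measure.

```python
# Sketch: flag respondents in the top 10% of the FoC score distribution
# for referral to counselling (the percentile-based cut-off mentioned above).
import pandas as pd

def flag_top_decile(data: pd.DataFrame, score_col: str = "foc_total") -> pd.DataFrame:
    cutoff = data[score_col].quantile(0.90)  # 90th percentile of the sample
    data = data.copy()
    data["refer_to_counselling"] = data[score_col] > cutoff
    return data

# Usage:
# flagged = flag_top_decile(pd.read_csv("responses.csv"))
# print(flagged["refer_to_counselling"].mean())  # roughly 0.10 of the sample
```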
If the measure that you want to translate has already been translated into different languages, you have more work to do. My suggestion is to build a literature review matrix by finding all the translation and validation studies. After populating the matrix with the studies and the validity evidence each one provides, you can identify the missing piece of the psychometric puzzle (a toy version of such a matrix is sketched below). I would hesitate to use the term 'gap in the literature' unless the body of evidence really forms a continuum. If, for example, there were Thai and Singaporean versions of an instrument, is the lack of a Malaysian version a 'gap' (because Malaysia sits between Thailand and Singapore)? I don't think so. A geographical void is not the same as an intellectual gap. If, on the other hand, there were plenty of CFA studies but no cognitive debriefing interview (CDI) studies showing semantic equivalence, then I would be more accepting of the term 'gap in the literature', because a CDI should be done BEFORE the CFA studies.
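If it helps to make the matrix idea concrete, here is a toy sketch in code. The studies and evidence types are illustrative placeholders, not real citations; in practice you would list every published version of the instrument and tick off the evidence each paper actually reports.

```python
# Sketch: a literature review matrix of translation/validation studies (rows)
# against types of validity evidence (columns). All entries are placeholders.
import pandas as pd

evidence_types = ["CDI / semantic equivalence", "EFA", "CFA",
                  "Measurement invariance", "Rasch", "Norms", "Cut-off"]

matrix = pd.DataFrame(False,
                      index=["Original (English)", "Language A version",
                             "Language B version"],
                      columns=evidence_types)

# Tick off what each (hypothetical) study reported:
matrix.loc["Original (English)", ["EFA", "CFA"]] = True
matrix.loc["Language A version", "CFA"] = True
matrix.loc["Language B version", ["CFA", "Measurement invariance"]] = True

# Evidence types with no ticks anywhere are candidate 'missing pieces':
missing = [col for col in matrix.columns if not matrix[col].any()]
print("Candidate missing evidence:", missing)
```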
So, if you can design a translation study around a properly identified research problem, you will have a better chance of getting your paper published in a higher-ranking journal. You can also avoid clichéd titles like "Psychometric evaluation of ...", "Validation of ..." and "Psychometric properties of ...". Instead, you can write a title that directly emphasises your new contribution, e.g. "Measurement invariance of ..." or "Malay-FoC Discriminates Lockdown Violators from the Abiders".
Happy translating and validating.
#psychometrics