Evaluating literature is a complex process requiring familiarity with the field of research, research methods and types of publications. These are skills that will continue to develop throughout your program and into your professional life.
At its core, critical evaluation means investigating quality and identifying bias. While separated out here, this is an artificial division, and many of the same critical questions address both aspects.
Take a tiered approach to your analysis.
Growing your subject expertise will allow you to understand and critically evaluate the methods, outcomes and conclusions in a paper and evaluate its quality. While building this body of knowledge, consult with your peers, instructors and other experts.
Some of the questions you will want to ask as you read are:
In a 2019 systematic review, Jongbloed et al. developed and used a checklist to roughly assess whether research was conducted ‘in a good way’ according to common Indigenous research standards. (Jongbloed K, Pooyak S, Sharma R, et al. Experiences of the HIV Cascade of Care Among Indigenous Peoples: A Systematic Review. AIDS and Behavior. 2019;23:984-1003.)
Bias is always present. Analysis should focus on how bias is identified and accounted for, not its absence.
Developing an awareness of bias in study design, implementation, reporting, and publishing will help with your critical evaluation of the literature.
On an individual article level, ask whether the publication is peer-reviewed and what the journal's reputation is.
On a collective level, consider publication bias: positive results are more likely to appear in journals than negative results, which skews summative evaluations of the literature.
Do the authors discuss potential sources of bias in the study and how they mitigated them (population selection, measurement tools...)? Is there an element of bias they did not identify? What might they have done differently?
Are the results comprehensively reported and clearly articulated, whether in text or diagrammatically? Do the authors highlight bias that may have emerged from procedural elements in the design?
The critical question here is: are the conclusions supported by the results?
For a comprehensive overview of bias in research, check out: