The Title/Abstract screening stage is sometimes one of the longest, and you'll see many of the results you found in the databases excluded because they don't meet your eligibility criteria. It is not uncommon to see your list of results cut down by 50% or more.
This is why it is particularly important to ensure you are finding enough results in the databases - if you're not, your question may be too specific, or your search may not include the most relevant search terms.
To Screen Title/Abstract in Covidence:
Things to Remember:
Once your group approves publications to move to the next stage of screening, they will populate in the Full Text Review stage, where you will all add the full text and screen again for eligibility.
Again, it is very common to see many of your remaining articles screened out when reviewing the full text. Abstracts notoriously lack the details that relate to clinical questions and eligibility criteria - you'll often find you can only truly know whether an article is relevant to answering your question by reviewing its full methods.
To Add Full Text Documents:
To Screen in Full Text Review:
Things to Remember:
Once your group has finalized the articles it will most likely or definitely use via the full-text review in Covidence, you can review the reference lists and/or forward citations of each included study - this is called Snowballing, and it is very common in systematic reviews. It is not required for your group assignment.
Review the reference list of each article marked for inclusion to see if there are any more relevant articles to add to your review. This is good practice because relevant articles are sometimes published in journals that are not indexed in the subject databases you have chosen.
It is common to see that no new articles were found, but it is still good practice to go through and check and note in your methods that this was at least completed with nothing new located.
Remember - always stick to your eligibility criteria, especially the date range.
Use Google Scholar, Web of Science, or Scopus to check each included article for any later articles that have cited it. This can help you locate newer publications and double-check that nothing was missed in your original searches.
It is common to see that no new articles were found, but it is still good practice to go through and check and note in your methods that this was at least completed with nothing new located.
Remember - always stick to your eligibility criteria, especially the date range.
In exporting included studies, you'll pull from the system the information related to each article, including citation information, the abstract (if available), any notes your group made on individual titles, and any study tags applied. You can also export your inter-rater reliability for the title & abstract screening and full-text review phases.
Exporting General Article Information:
The reason you are doing this is to move to the next phase of the review process: Critical Appraisal. While the system allows for critical appraisal, you will not be completing this step in Covidence.
NOTE: You may also want to export your article information into Zotero or another citation management system to allow for easy citation creation for your References list. It is suggested that you do this at the end, before you begin writing your final assignment.
Exporting Inter-Rater Reliability Information:
As good practice, inter-rater reliability is commonly reported in the methods section of a review. It tells readers how often the reviewers agreed or disagreed on whether to include an article. This information is shared to disclose any potential biases in the process and to promote transparency about how that specific review was completed.
For your group assignment, it is easy to include this information because Covidence calculates it for you. Do try to include it in your methods section.
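The kappa statistic described in McHugh (2012) compares the observed agreement between two reviewers to the agreement expected by chance. If you'd like to sanity-check the figure Covidence gives you, here is a minimal sketch in Python - the function name and the screening counts are made up for illustration:

```python
# Cohen's kappa for two reviewers' include/exclude screening decisions.
# Hypothetical illustration only -- Covidence computes this for you.

def cohens_kappa(both_include, both_exclude, only_a, only_b):
    """kappa = (p_o - p_e) / (1 - p_e), per McHugh (2012)."""
    n = both_include + both_exclude + only_a + only_b
    p_o = (both_include + both_exclude) / n          # observed agreement
    # marginal probability of each reviewer voting "include"
    a_inc = (both_include + only_a) / n
    b_inc = (both_include + only_b) / n
    # agreement expected by chance, from the marginals
    p_e = a_inc * b_inc + (1 - a_inc) * (1 - b_inc)
    return (p_o - p_e) / (1 - p_e)

# Made-up example: of 100 titles, 40 both included, 40 both excluded,
# and the reviewers split on the remaining 20.
print(round(cohens_kappa(40, 40, 10, 10), 2))  # prints 0.6
```

A kappa of 0.6 would fall in the "moderate" agreement band in McHugh's interpretation scale, which is the kind of context worth adding when you report the number in your methods section.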
McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 267-282. https://hrcak.srce.hr/89395