Methodological quality of systematic reviews in dentistry including animal studies: a cross-sectional study

Abstract

Background

The overall confidence in the results of systematic reviews including animal models can be heterogeneous. We assessed the methodological quality of systematic reviews including animal models in dentistry as well as the overall confidence in the results of those systematic reviews.

Material & methods

PubMed, Web of Science and Scopus were searched for systematic reviews including animal studies in dentistry published between January 2010 and 18 July 2022. Overall confidence in the results was assessed using a modified version of A MeaSurement Tool to Assess systematic Reviews (AMSTAR-2). Checklist items were rated as yes, partial yes, no and not applicable. Linear regression analysis was used to investigate associations between systematic review characteristics and overall adherence to the AMSTAR-2 checklist. The overall confidence in the results was calculated based on the number of critical and non-critical weaknesses in the AMSTAR-2 items and rated as high, moderate, low and critically low.

Results

Of the 951 initially retrieved systematic reviews, 190 were included in the study. The overall confidence in the results was low in 43 (22.6%) and critically low in 133 (70.0%) systematic reviews. While some AMSTAR-2 items were regularly reported (e.g. conflict of interest, selection in duplicate), others were not (e.g. funding of primary studies: n = 1; 0.5%). Multivariable linear regression analysis showed that AMSTAR-2 adherence scores were significantly associated with publication year, journal impact factor (IF), topic, and the use of tools to assess risk of bias (RoB).

Conclusion

Although the methodological quality of dental systematic reviews of animal models has improved over the years, it is still suboptimal. The overall confidence in the results was mostly low or critically low. Systematic reviews that were published more recently, appeared in journals with a higher IF, focused on non-surgical topics, or used at least one tool to assess RoB showed greater adherence to the AMSTAR-2 checklist.

Background

Research on laboratory animals, although controversial [1], is a strong pillar of preclinical research and helps to understand the mechanisms of diseases and to identify the efficacy and potential harm of new treatments [2, 3]. Systematic reviews of such studies can summarize their findings and improve the process of translational research [4, 5]. Subsequently, clinical trials and systematic reviews thereof bring new treatments into clinical practice via clinical guidelines [6]. To avoid producing misleading results, systematic reviews should follow a sound methodology.

To critically appraise the methodology of systematic reviews of randomized controlled trials, the AMSTAR tool was published and validated in 2007 [7]. In 2017, the tool was updated and now includes the assessment of non-randomized trials [8]. Since, to our knowledge, there is no distinct tool to assess the methodology of systematic reviews of trials using animals as models [9], we used the A MeaSurement Tool to Assess systematic Reviews (AMSTAR-2) checklist and adapted it for the assessment of systematic reviews including animal models.

Multiple recent publications have addressed the methodological quality of systematic reviews in dentistry [10], including the fields of neuromuscular dentistry [11], implant dentistry [12,13,14], periodontology [15, 16], orthodontics [17], endodontics [18] and oral and maxillofacial surgery [19]. In these studies, there was a substantial lack of adherence to domains considered critical for methodological quality [8]. In a recent study, Hammel et al. (2022) focused on the methodological quality of systematic reviews of in-vitro dental research using an adapted version of AMSTAR-2 [20]. They found that in the majority of the included systematic reviews (68%), the overall confidence in the results was “critically low”.

The evidence on the methodological quality of systematic reviews including animal models is scarce. Mignini and Khan (2006) showed methodological weaknesses of systematic reviews including animal models and addressed the need for rigour when reviewing research involving animal models [21]. The last methodological quality assessment of systematic reviews including animal models in dentistry included publications until January 2010 and used the first version of AMSTAR [22]. Back then, of 54 included systematic reviews, only one study was scored as high quality, 17 as medium quality and 35 as low quality.

The aims of our study were twofold: (1) to assess the methodological quality and overall confidence in the results of systematic reviews of research using laboratory animals as models published on dental topics since February 2010, and (2) to investigate the association between certain systematic review characteristics and adherence to the AMSTAR-2 checklist.

Methods

Eligibility criteria

We included systematic reviews including animal models in all fields of dentistry. The focus of our work was therefore on medical research that uses animals as models, rather than veterinary research that uses animals as subjects. Systematic reviews including both animal and human studies were also included. Study designs other than systematic reviews and systematic reviews with non-pair-wise meta-analyses (for example, network meta-analyses) were excluded. An article was considered a systematic review if it was titled as such or if the authors' aim was to perform a systematic review. Publications in languages other than English were excluded.

Search strategy

On 18 July 2022, we searched the PubMed, Scopus and Web of Science databases for systematic reviews published in the field of dentistry including animal models. We used a combination of keywords and Boolean operators and limited our search to studies published between January 2010 and 18 July 2022. This cut-off was chosen to provide an updated assessment relative to a previous study on the methodological quality of systematic reviews of dental animal studies [22].

We adapted the syntax of the search performed in PubMed (Table S1, supplementary file) for the Scopus and Web of Science databases. The search was done in duplicate and independently by two authors (MCM, CMF) to ensure reproducibility. If the searches produced the same findings, the search was deemed reproducible. The search strategies are reported in Supplementary file 1.

Selection process

We selected articles strictly based on the eligibility criteria; articles not meeting these criteria were excluded, with individual reasons recorded in each phase of the assessment. First, duplicates were removed with the help of the Zotero citation manager (Roy Rosenzweig Center for History and New Media, George Mason University). Next, we screened the titles and abstracts of all records. Lastly, we checked the full texts of the remaining studies. The last two steps were done in duplicate and independently by two reviewers (MCM, CMF) for a sample of 30 articles and discussed until good agreement on inclusion or exclusion (at least 80%) was reached [8]; the remaining selection was then done by one reviewer (MCM).

Data collection process

To give an overview of the assessed systematic reviews and to identify associations between methodological quality and study characteristics, different objectifiable measures were defined and collected. For each systematic review, we collected the following characteristics: h-index of the first and last author (checked in Scopus on 30 April 2023), number of authors, continent of the first author, country (region) of the first author, year of publication, journal name, journal category in Journal Citation Reports, 2-year journal impact factor (2021 Journal Impact Factor, Journal Citation Reports, Clarivate, 2022), topic of study, presence of conflict of interest, type of funding/sponsorship, number of citations (checked in Google Scholar on 30 April 2023), and tools used for risk of bias (RoB) assessment.

Data collection was done with a previously created Microsoft Excel sheet (Microsoft Corporation) in duplicate and independently by two reviewers (MCM, CMF) for a sample of 30 articles and discussed until good agreement on the extracted characteristics and the methodological quality assessment in terms of AMSTAR-2 scores (at least 80%) was reached [8]; the remaining data collection was then done by one reviewer (MCM).

AMSTAR-2 items

For the assessment of the methodological quality of the included systematic reviews, we used the AMSTAR-2 tool. The checklist includes 16 items which allow for a critical appraisal of systematic reviews of randomised and non-randomised studies of healthcare interventions. AMSTAR-2 is considered a valid and reliable appraisal tool [23]. We adapted AMSTAR-2 slightly to allow for the assessment of trials using animal models (details are reported in Supplementary file 2). Our evaluation criteria for each item are based on the AMSTAR-2 guidance document (Supplementary file 1 of the AMSTAR-2 publication) [8]. Our adapted checklist did not require the registration of systematic review protocols, since suitable registries were not available for the entire timeframe of our search. Also, to meet the requirements of item 3, both an explanation of the selection of the study design and of the study population (i.e. animals) was necessary. Since RoB assessment of studies using animals as models is difficult and SYRCLE's tool was only published in 2014, we accepted alternative approaches to assess RoB. Lastly, we added a second rank to the AMSTAR-2 criteria for the assessment of heterogeneity, allowing for a more differentiated view of this criterion. Checklist items were answered as yes (when all checklist criteria were met), partial yes (when some criteria were met), and no (when no criteria were met). If we were not able to rate a checklist item, it was documented as “not applicable”. A detailed description of our evaluation is reported in Supplementary file 2.

For each included study, an adherence score was calculated as (number of items answered “yes” or “partial yes” / number of applicable items) × 100. A higher adherence score indicates better methodological quality. In addition, the overall confidence in the results of the reviews was categorised as high, moderate, low, or critically low, based on AMSTAR-2 [8]. If at most one non-critical item was answered with “no” and none of the critical items was answered with “no”, the overall confidence was considered high. If more than one non-critical item was answered with “no” but none of the critical items was answered with “no”, the overall confidence was considered moderate. The overall confidence was considered low if exactly one critical item was answered with “no”, and critically low if more than one critical item was answered with “no”.
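The scoring rules above can be sketched in code. This is a hypothetical helper, not the authors' actual implementation; the set of critical items follows the AMSTAR-2 publication (items 2, 4, 7, 9, 11, 13 and 15), and all function names are illustrative.

```python
# Illustrative sketch of the adherence score and confidence rating described
# above -- not the authors' actual code. Item ratings are "yes", "partial yes",
# "no" or "na" (not applicable).
CRITICAL_ITEMS = {2, 4, 7, 9, 11, 13, 15}  # critical domains per Shea et al. (2017)

def adherence_score(ratings):
    """Percentage of applicable items answered 'yes' or 'partial yes'."""
    applicable = {i: r for i, r in ratings.items() if r != "na"}
    met = sum(1 for r in applicable.values() if r in ("yes", "partial yes"))
    return 100.0 * met / len(applicable)

def overall_confidence(ratings):
    """Map item ratings to the AMSTAR-2 overall confidence category."""
    critical_no = sum(1 for i in CRITICAL_ITEMS if ratings.get(i) == "no")
    noncritical_no = sum(1 for i, r in ratings.items()
                         if r == "no" and i not in CRITICAL_ITEMS)
    if critical_no == 0:
        return "high" if noncritical_no <= 1 else "moderate"
    return "low" if critical_no == 1 else "critically low"
```

Under these rules, for example, a review with a single “no” on one critical item is rated “low” regardless of how the non-critical items were answered.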

Statistical analysis

To facilitate the statistical analysis, “country of first author” was dichotomized into “developing country” and “developed country” based on the World Economic Situation and Prospects 2023 of United Nations [24] and “Topic of study” was dichotomized into “non-surgical topic” and “surgical topic”. To assess the association between study characteristics (independent variables) and methodological quality of the included studies (i.e. adherence scores), linear regression analyses were used.

First, univariable linear regression analysis was performed to assess the association of each independent variable with adherence scores, separately. Second, multicollinearity of the independent variables which were significant in the univariable analyses (P < 0.05) was tested using the variance inflation factor (VIF) before they were included in the subsequent multivariable linear regression analysis. When a VIF value of a variable was higher than 5, collinearity was considered present and the variable was excluded from the following analysis [25]. Third, a multivariable linear regression analysis with backward selection was performed to further assess the association between independent variables and adherence scores. In the multivariable analysis, the variables with the highest p values were removed first from the model and the cut-off p value for removal was 0.05.
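The collinearity check described above can be illustrated with a short NumPy sketch (a hypothetical example, not the authors' code): each predictor is regressed on the remaining predictors and VIF_j = 1 / (1 − R²_j); values above 5 would have triggered exclusion here.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor matrix X:
    regress column j on the remaining columns (plus an intercept) and
    return 1 / (1 - R^2). Values > 5 flag collinearity in this study."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()   # intercept included, so resid mean is 0
        out[j] = 1.0 / (1.0 - r2)
    return out
```

A predictor that is (nearly) a linear combination of the others gets a very large VIF, while mutually independent predictors sit close to 1.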

Results

Study selection

Our search identified 951 records overall. Before screening, 109 duplicates were removed. After screening of titles and abstracts, 521 records were excluded. During full-text screening, 131 additional records were excluded. Finally, 190 records were included in this study (Fig. 1). The excluded studies with reasons for exclusion and the included studies are reported in Supplementary files 3 and 4, respectively.

Fig. 1: PRISMA flow diagram

Study characteristics

The mean ± standard deviation (SD) of the h-index was 11.81 ± 12.81 (range 0–69) for first authors and 28.98 ± 18.12 (range 1–117) for last authors. The mean number of authors was 4.9 ± 1.7 (range 1–9). Over one third of the first authors were located in Europe (n = 72; 37.9%), although the most prevalent country of first authors was Brazil (n = 30; 15.8%). Most systematic reviews were conducted as multi-center collaborations (n = 159; 83.7%). The largest number of systematic reviews was published in 2018 (n = 32; 16.9%). Systematic reviews were published most often in Archives of Oral Biology (n = 18; 9.47%). About two thirds of the systematic reviews were published in the journal category DENTISTRY, ORAL SURGERY & MEDICINE – SCIE (n = 122; 64.2%). The mean journal impact factor was 3.747 ± 1.474 (range 1.154–8.755). The most common topic was Oral Surgery and Implant Dentistry (n = 84; 44.2%). Most systematic reviews reported no conflict of interest (n = 151; 79.5%). Seventy systematic reviews were not sponsored (36.8%), closely followed by 67 that did not provide clear information on funding (35.3%). On average, the included systematic reviews were cited 38.48 ± 61.01 times (range 0–493). The most often used RoB assessment tool was SYRCLE's (n = 71; 29.1%). The complete characteristics are reported in Supplementary file 5.

Methodological quality - AMSTAR-2 items

Overall, two (1.1%) systematic reviews presented high, 12 (6.3%) moderate, 43 (22.6%) low and 133 (70.0%) critically low confidence in the results.

The assessment revealed great heterogeneity in reporting of the different checklist items.

Checklist items 16 (“Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review?”) and 5 (“Did the review authors perform study selection in duplicate?”) were most often in full accordance with the checklist (80.5% and 78.4%, respectively). Item 4 (“Did the review authors use a comprehensive literature search strategy?”) had the greatest percentage (87.9%) of “partial yes” answers. Only one systematic review considered and reported the funding of the primary studies (0.5%). The results for all assessed items are described in Table 1.

Table 1 Evaluation of the methodological quality of the included reviews based on AMSTAR-2

Methodological quality and study characteristics

Table 2 presents the general characteristics of the included reviews and the corresponding scores. The adherence scores of the included systematic reviews increased over time (Fig. 2). In the univariable linear regression analyses, publication year (P < 0.01), journal impact factor (P < 0.01), topic of study (P = 0.03), number of citations (P < 0.01), and number of tools used (P < 0.01) were significantly associated with the adherence scores of the reviews (Table 3). The VIF values of the variables significant in the univariable analyses were all < 4, indicating absence of collinearity; these variables were therefore included in the subsequent multivariable linear regression analysis. In the multivariable linear regression analysis with backward selection, the adherence score was significantly associated with publication year (β: 1.59; 95%CI: 0.84–2.33; P < 0.01), journal impact factor (β: 2.96; 95%CI: 1.57–4.34; P < 0.01), topic of study (β: 4.75; 95%CI: 0.31–9.18; P = 0.04), and number of tools used (for 1 tool: β: 25.59; 95%CI: 20.31–30.88; P < 0.01; for > 1 tool: β: 26.65; 95%CI: 20.43–32.86; P < 0.01) (Table 3).

Table 2 General characteristics of the included reviews and the corresponding adherence scores
Fig. 2: Adherence score development over time

Table 3 Regression analysis for the association between study characteristics and the adherence scores

Discussion

Main findings

To our knowledge, this study is the first to assess the methodological quality of the current literature on systematic reviews of dental experiments on laboratory animals. The last study to do so was published in 2012 and used the previous version of AMSTAR [22]. The results of the present study suggest that the methodological quality of these systematic reviews has improved over the years; however, there is still room for improvement. Most systematic reviews were rated with low to critically low confidence in the results. Systematic reviews that were published more recently, appeared in journals with a higher impact factor, focused on non-surgical topics, or used at least one RoB assessment tool had significantly higher adherence scores.

Interpretation of the results

We found that the included studies presented heterogeneous methodological quality. About two-thirds of the included studies used a PICO methodology. The PICO format and its adaptations are a well-established way to frame the research question and translate it into a reliable bibliographic search; their use is therefore recommended [26].

The second AMSTAR-2 item, regarding the prior establishment of review methods, was completely met in 61% of the systematic reviews. It is important to note that this does not mean that these methods were sound. We did not require the protocols to be registered in online databases such as PROSPERO, since these were not available for the whole timeframe of the included articles. Also, they did not always allow for the registration of systematic reviews including both animal and human trials [27].

Only six studies explicitly explained both their selection of the included study design and the study population, 48 explained one of the two, and 136 explained neither. Reporting this information is important to let the reader know in which research phase the treatment currently is and to give information on possible limitations [28].

The literature search strategy was reported completely in only two systematic reviews, while most systematic reviews (n = 167; 88%) conducted a literature search that met the minimum requirements. Twenty-one systematic reviews scored a no on this item. Following the AMSTAR-2 guidance, the criteria for a yes are very strict. Authors need to search at least two databases, provide keywords and/or the search strategy, and justify relevant publication restrictions (i.e. language or timeframe) (partial yes), and additionally search the reference lists of included studies, search trial registries, consult experts in the field, search for grey literature, and conduct the search within 24 months of completion of the review (yes). These strict criteria might be challenging to fulfill. However, a sound search strategy forms the basis of a solid systematic review and helps reduce bias; therefore, as many of the above-mentioned criteria as possible should be met by authors [29,30,31,32,33].

Study selection and data extraction should be done independently and in duplicate to reduce the risk of potential mistakes [34]. In the assessed sample, selection and extraction were done in duplicate in 149 (78%) and 67 (35%) systematic reviews, respectively. This requirement corresponds to items 5 and 6 of the original AMSTAR-2 checklist.

To make a systematic review transparent and reproducible, it is important to provide lists of excluded studies. AMSTAR-2 recommends reporting the full list of studies excluded after full-text assessment, with the respective individual reasons for exclusion [8]. Some authors suggest an even stricter form of reporting to allow better reproducibility [35], requiring reasons for exclusion to be reported starting from the title/abstract screening phase. In this project, we followed the AMSTAR-2 guidance and scored yes if the authors provided a list of articles excluded after full-text assessment, with reasons for exclusion. Only about a third (n = 68; 36%) scored a yes; the rest scored a no (n = 122; 64%).

A little over three-quarters of the included systematic reviews provided either sufficient or detailed information about the included studies (yes: n = 53, 28%; partial yes: n = 111, 58%), while only 13% (n = 25) did not. The distinction between yes and partial yes was difficult for this item: for a yes, the majority of the categories population, intervention and comparator should be described in detail. Particularly in research performed on laboratory animal models, the description of the included population is important, since different animals might react differently to the same therapy, and responses might also differ by age, weight, or sex [36,37,38]. Since treatment effects can also depend on follow-up and study setting, this information should also be included in the systematic review.

RoB assessment is one of the central elements of a sound systematic review [39]. Assessing the RoB of the primary studies included in a systematic review involves appraising potential limitations or problems in study domains that may influence or bias the study's estimates. Studies with a high RoB may generate overestimated effect sizes [40, 41]. It is therefore important to use the results of the RoB assessment to critically appraise the results of the primary studies and put them into context [42]. We adapted the assessment compared with the AMSTAR-2 criteria: for a yes, authors needed to use an adequate RoB tool and report the results per included primary study (n = 83; 44%); if they reported only an overall score for all studies, they were rated partial yes (n = 29; 15%). If authors used reporting guidelines for RoB assessment or did not perform it at all, they were rated no (n = 78; 41%). Authors used 27 different tools to assess RoB, the most frequently used being SYRCLE's (n = 71; 29.1%). Even though almost 60% of the systematic reviews scored at least a partial yes on the RoB assessment, only 19% (n = 36) accounted for possibly detected bias when interpreting or discussing the results of the review.

Heterogeneity is the variability among the studies included in a systematic review that may impact its results. The literature describes three types of heterogeneity: clinical, when there is variability in the PICO elements of the primary studies [43, 44]; statistical, when there is variability in the intervention effects [45]; and methodological, when the included studies differ in study design and RoB ratings. It is important to discuss heterogeneity to understand how clinical and methodological aspects of the primary studies relate to the systematic review results, particularly when a meta-analysis is conducted. In our sample, about half of the systematic reviews (n = 89; 47%) discussed the heterogeneity found and considered its impact on the results. Thirty (16%) systematic reviews merely mentioned the existence of heterogeneity or stated that they did not perform a meta-analysis due to high heterogeneity among studies, while 70 (37%) did not mention heterogeneity at all.
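Statistical heterogeneity of the kind discussed above is commonly quantified with Cochran's Q and the derived I² statistic. The sketch below is a generic, hypothetical illustration (not code from any of the reviewed studies) under an inverse-variance fixed-effect model:

```python
import numpy as np

def i_squared(effects, ses):
    """I^2: percentage of total variability across studies attributable to
    heterogeneity rather than chance, derived from Cochran's Q."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2     # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)        # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)         # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
```

Identical study effects give I² = 0%, whereas two precise but conflicting effects give an I² close to 100%.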

Some evidence suggests that financial and non-financial conflicts of interest can influence study results [46,47,48], and therefore clear reporting of this information is necessary. Of the included systematic reviews 84% (n = 159) provided clear information on conflicts of interest. Only one systematic review reported information on the funding of included primary studies.

Three of the AMSTAR-2 items were specifically designed for the assessment of meta-analyses (items 11, 12 and 15). Of the 190 included systematic reviews, only 45 performed meta-analyses. Of those, 43 (95%) described a sound methodology for conducting the meta-analysis, used appropriate weighting techniques and investigated causes of heterogeneity. As with the discussion of RoB findings, authors rarely assessed the impact of the heterogeneity of individual studies on the meta-analysis estimates (n = 6; 13%).

Publication bias describes the failure to publish the results of a study based on the direction or magnitude of the study findings [49]. This can lead to an overestimation of meta-analysis effects [50]. Therefore, investigating publication bias is important to understand how much a meta-analysis estimate deviates from its real value. In our sample of systematic reviews with meta-analysis, 19 (42%) performed and discussed investigations of publication bias, or planned such an investigation but were not able to carry it out because of the limited number of included studies. Cochrane states that tests for funnel plot asymmetry need at least 10 studies to have enough statistical power [51]. Therefore, in some cases the implementation of such tests can be problematic.
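One common funnel-plot asymmetry test is Egger's regression: regress each study's standardized effect on its precision, and test whether the intercept differs from zero. The NumPy sketch below is a hypothetical illustration of this technique, not an implementation used by any of the reviewed studies:

```python
import numpy as np

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardized effect z_i = effect_i / se_i on precision_i = 1 / se_i.
    Returns (intercept, t-statistic); a large |t| suggests asymmetry."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses
    precision = 1.0 / ses
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    n = len(z)
    sigma2 = resid @ resid / (n - 2)        # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)   # OLS coefficient covariance
    return beta[0], beta[0] / np.sqrt(cov[0, 0])
```

Consistent with the Cochrane guidance cited above, such a regression is underpowered with fewer than about 10 studies and should then be interpreted with caution, if run at all.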

The regression analysis demonstrated that more recent systematic reviews presented higher methodological quality than older ones. This might be explained by a growing awareness in the medical/dental community of methodological aspects of research. Also, journals with a higher IF published systematic reviews with higher methodological quality, a finding in agreement with studies from other medical fields [52,53,54]. Systematic reviews using more than one tool to assess the RoB of the included primary studies also presented higher quality scores. One hypothesis to explain this finding is that authors willing to provide a comprehensive view of the evidence through the application of different methodological tools might have a stronger methodological background.

Comparison to the results of Faggion et al. 2012 [22]

Comparing the findings of the current study and the study from 2012, we can see improvements in nine of the ten comparable checklist items. The complete table is reported in Supplementary file 6. These improvements are supported by the regression analysis showing that the adherence to the AMSTAR-2 guidelines improved by year (β: 1.59; 95%CI: 0.84–2.33; P < 0.01) (see also Fig. 2).

Comparison to systematic reviews not including animals

Several studies have also addressed the methodological quality of systematic reviews of clinical studies in dentistry [10,11,12,13,14,15,16,17,18,19,20, 55,56,57]. Generally, these overviews also concluded that there is room for improving their methodologies. For example, many of the overviews of systematic reviews of clinical studies in dentistry reported over 50% of reviews with low to critically low confidence in the results [10,11,12, 16, 18, 20]. These results are in agreement with our study, which rated more than 90% of the included systematic reviews with low to critically low confidence in the results. A direct comparison is challenging, however, because the AMSTAR-2 items needed to be modified for a different scenario (animals vs. humans). That said, many of the original items can be applied to any type of systematic review, for example those related to study selection and data extraction.

Relevance of the present findings and further recommendations

Studies using laboratory animals as research models can be ethically controversial [1]. Therefore, the primary studies themselves, but also subsequent research such as systematic reviews, have to be of the highest quality to justify this kind of research. The present study adds value to the scientific community by increasing researchers' awareness of the importance of methodological quality when planning and conducting a systematic review of animal models. Some items can be improved simply by increasing awareness of reporting requirements (for example, item 16 on conflicts of interest). Other items, however, will need more careful planning and therefore the help of specialized colleagues such as librarians (for example, when conducting comprehensive searches), experienced statisticians (when deciding on, planning and conducting meta-analyses), and methodologists (when planning more complex systematic reviews, for example systematic reviews of complex interventions [58]).

Improvements in the methodological quality of systematic reviews of preclinical studies will more accurately inform about the benefits and harms of potential therapies and help identify the need for further animal model studies on specific topics. This improvement in methodological quality will facilitate the translational process from preclinical to clinical research.

Strengths and limitations

The present study has some limitations. We only included studies published in English; therefore, some studies might have been missed. We did not apply this restriction in the search process but during selection, where one study was excluded for this reason. Apart from that, most studies indexed in online databases are published in English [59], and research has shown that restricting searches to English-language publications might have only little impact [60]. Additionally, the AMSTAR-2 checklist we used was not developed for the assessment of systematic reviews including animal studies. Some items had to be adapted; however, this process is transparently reported in the manuscript and the supplementary files.

Apart from these limitations, this study has definite strengths. It is one of the few studies addressing the methodological quality of systematic reviews including dental animal models. We also used robust methodological standards to develop this study, and the sample of included systematic reviews appears to be representative of dental animal model studies.

Conclusions

Although the methodological quality of systematic reviews of experiments on dental laboratory animal models improved over the years, there is still room for improvement in different systematic review domains. The methodological limitations in these domains were the explanation for the low and critically low overall confidence in the results for most of the systematic reviews in the present sample. Year of publication, journal impact factor, number of tools used and topic were significant predictors for adherence to the AMSTAR-2 items.

Availability of data and materials

Data and materials are available upon request.

Abbreviations

AMSTAR-2: A MeaSurement Tool to Assess systematic Reviews
RoB: Risk of bias
VIF: Variance inflation factor
SD: Standard deviation

References

  1. Gross D, Tolba RH. Ethics in animal-based research. Eur Surg Res. 2015;55(1–2):43–57.

  2. Guvva S, Patil M, Mehta D. Rat as laboratory animal model in periodontology. Int J Oral Health Sci. 2017;7(2):68.

  3. Mukherjee P, Roy S, Ghosh D, Nandi SK. Role of animal models in biomedical research: a review. Lab Anim Res. 2022;38(1):18.

  4. De Vries RBM, Wever KE, Avey MT, Stephens ML, Sena ES, Leenaars M. The usefulness of systematic reviews of animal experiments for the Design of Preclinical and Clinical Studies. ILAR J. 2014;55(3):427–37.

  5. Hooijmans CR, Ritskes-Hoitinga M. Progress in using systematic reviews of animal studies to improve translational research. PLoS Med. 2013;10(7):e1001482.

  6. Wallace SS, Barak G, Truong G, Parker MW. Hierarchy of evidence within the medical literature. Hosp Pediatr. 2022;12(8):745–50.

  7. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7(1):10.

  8. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

  9. Zeng X, Zhang Y, Kwong JSW, Zhang C, Li S, Sun F, et al. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid-Based Med. 2015;8(1):2–10.

  10. Pauletto P, Polmann H, Réus JC, de Oliveira JMD, Chaves D, Lehmkuhl K, et al. Critical appraisal of systematic reviews of intervention in dentistry published between 2019-2020 using the AMSTAR 2 tool. Evid Based Dent. 2022;14:1–8.

  11. Cerón L, Pacheco M, Delgado B, Bravo W, Astudillo D. Therapies for sleep bruxism in dentistry: a critical evaluation of systematic reviews. Dent Med Probl. 2022;60(2):0–0.

  12. AL-Rabab’ah MA, AlTarawneh S, Jarad FD, Devlin H. Methodological quality of systematic reviews relating to performance of all-ceramic implant abutments, frameworks, and restorations. J Prosthodont. 2021;30(1):36–46.

  13. Heiderich CMC, Tedesco TK, Netto SS, de Sousa RC, Allegrini Júnior S, Mendes FM, et al. Methodological quality and risk of bias of systematic reviews about loading time of multiple dental implants in totally or partially edentulous patients: an umbrella systematic review. Jpn Dent Sci Rev. 2020;56(1):135–46.

  14. de Oliveira-Neto OB, Santos IO, Barbosa FT, de Sousa-Rodrigues CF, de Lima FJC. Quality assessment of systematic reviews regarding dental implant placement on diabetic patients: an overview of systematic reviews. Med Oral Patol Oral Cir Bucal. 2019;24(4):e483–90.

  15. Natto ZS, Hameedaldain A. Methodological quality assessment of Meta-analyses and systematic reviews of the relationship between periodontal and systemic diseases. J Evid Based Dent Pract. 2019;19(2):131–9.

  16. Hasuike A, Ueno D, Nagashima H, Kubota T, Tsukune N, Watanabe N, et al. Methodological quality and risk-of-bias assessments in systematic reviews of treatments for peri-implantitis. J Periodontal Res. 2019;54(4):374–87.

  17. Hooper EJ, Pandis N, Cobourne MT, Seehra J. Methodological quality and risk of bias in orthodontic systematic reviews using AMSTAR and ROBIS. Eur J Orthod. 2021;43(5):544–50.

  18. Nagendrababu V, Faggion CM Jr, Pulikkotil SJ, Alatta A, Dummer PMH. Methodological assessment and overall confidence in the results of systematic reviews with network meta-analyses in endodontics. Int Endod J. 2022;55(5):393–404.

  19. Chugh A, Patnana AK, Kumar P, Chugh VK, Khera D, Singh S. Critical analysis of methodological quality of systematic reviews and meta-analysis of antibiotics in third molar surgeries using AMSTAR 2. J Oral Biol Craniofac Res. 2020;(4):441–9.

  20. Hammel C, Pandis N, Pieper D, Faggion CM. Methodological assessment of systematic reviews of in-vitro dental studies. BMC Med Res Methodol. 2022;22(1):110.

  21. Mignini LE, Khan KS. Methodological quality of systematic reviews of animal studies: a survey of reviews of basic research. BMC Med Res Methodol. 2006;6(1):10.

  22. Faggion CM Jr, Listl S, Giannakopoulos NN. The methodological quality of systematic reviews of animal studies in dentistry. Vet J. 2012;192(2):140–7.

  23. Lorenz RC, Matthias K, Pieper D, Wegewitz U, Morche J, Nocon M, et al. A psychometric study found AMSTAR 2 to be a valid and moderately reliable appraisal tool. J Clin Epidemiol. 2019;114:133–40.

  24. United Nations. World Economic Situation and Prospects 2023. New York: United Nations; 2023.

  25. Kim JH. Multicollinearity and misleading statistical results. Korean J Anesthesiol. 2019;72(6):558–69.

  26. Santos CMDC, Pimenta CADM, Nobre MRC. The PICO strategy for the research question construction and evidence search. Rev Latino-Am Enfermagem. 2007;15(3):508–11.

  27. Booth A, Clarke M, Dooley G, Ghersi D, Moher D, Petticrew M, et al. PROSPERO at one year: an evaluation of its utility. Syst Rev. 2013;2(1):4.

  28. Price JH, Murnan J. Research limitations and the necessity of reporting them. Am J Health Educ. 2004;35(2):66–7.

  29. Delaney A, Tamás PA. Searching for evidence or approval? A commentary on database search in systematic reviews and alternative information retrieval methodologies. Res Synth Methods. 2018;9(1):124–31.

  30. Aagaard T, Lund H, Juhl C. Optimizing literature search in systematic reviews – are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders? BMC Med Res Methodol. 2016;16(1):161.

  31. Saleh AA, Ratajeski MA, Bertolet M. Grey literature searching for health sciences systematic reviews: a prospective study of time spent and resources utilized. Evid Based Libr Inf Pract. 2014;9(3):28–50.

  32. Li L, Tian J, Tian H, Moher D, Liang F, Jiang T, et al. Network meta-analyses could be improved by searching more sources and by involving a librarian. J Clin Epidemiol. 2014;67(9):1001–7.

  33. Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews. Cochrane Database of Systematic Reviews [Internet] 2011 [cited 2023 Jun 25];(8). Available from: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.MR000026.pub2/full

  34. Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

  35. Faggion CM Jr, Huivin R, Aranda L, Pandis N, Alarcon M. The search and selection for primary studies in systematic reviews published in dental journals indexed in MEDLINE was not fully reproducible. J Clin Epidemiol. 2018;98:53–61.

  36. Clayton JA, Collins FS. Policy: NIH to balance sex in cell and animal studies. Nature. 2014;509(7500):282–3.

  37. Bouwknecht JA, Paylor R. Behavioral and physiological mouse assays for anxiety: a survey in nine mouse strains. Behav Brain Res. 2002;136(2):489–501.

  38. Shapira S, Sapir M, Wengier A, Grauer E, Kadar T. Aging has a complex effect on a rat model of ischemic stroke. Brain Res. 2002;925(2):148–58.

  39. Chapter 7: Considering bias and conflicts of interest among the included studies [Internet]. [cited 2023 Jul 16]. Available from: https://training.cochrane.org/handbook/current/chapter-07.

  40. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408–12.

  41. Wood L, Egger M, Gluud LL, Schulz KF, Jüni P, Altman DG, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008;336(7644):601–5.

  42. Katikireddi SV, Egan M, Petticrew M. How do systematic reviews incorporate risk of bias assessments into the synthesis of evidence? A methodological study. J Epidemiol Community Health. 2015;69(2):189–95.

  43. Chess LE, Gagnier JJ. Applicable or non-applicable: investigations of clinical heterogeneity in systematic reviews. BMC Med Res Methodol. 2016;16:19.

  44. Gagnier JJ, Moher D, Boon H, Beyene J, Bombardier C. Investigating clinical heterogeneity in systematic reviews: a methodologic review of guidance in the literature. BMC Med Res Methodol. 2012;12:111.

  45. 9.5.1 What is heterogeneity? [Internet]. [cited 2023 Jul 15]. Available from: https://handbook-5-1.cochrane.org/chapter_9/9_5_1_what_is_heterogeneity.htm.

  46. Lundh A, Lexchin J, Mintzes B, Schroll JB, Bero L. Industry sponsorship and research outcome. Cochrane Database Syst Rev. 2017;2(2):MR000033.

  47. Friedman LS, Richter ED. Relationship between conflicts of interest and research results. J Gen Intern Med. 2004;19(1):51–6.

  48. Wiersma M, Kerridge I, Lipworth W. Dangers of neglecting non-financial conflicts of interest in health and medicine. J Med Ethics. 2018;44(5):319–22.

  49. Dickersin K, Min YI. Publication bias: the problem that won’t go away. Ann N Y Acad Sci. 1993;703:135–46.

  50. Schmucker CM, Blümle A, Schell LK, Schwarzer G, Oeller P, Cabrera L, et al. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12(4):e0176210.

  51. Chapter 13: Assessing risk of bias due to missing results in a synthesis [Internet]. [cited 2023 Jul 16]. Available from: https://training.cochrane.org/handbook/current/chapter-13.

  52. Wu X, Sun H, Zhou X, Wang J, Li J. Quality assessment of systematic reviews on total hip or knee arthroplasty using mod-AMSTAR. BMC Med Res Methodol. 2018;18(1):30.

  53. Chung VCH, Wu XY, Feng Y, Ho RST, Wong SYS, Threapleton D. Methodological quality of systematic reviews on treatments for depression: a cross-sectional study. Epidemiol Psychiatr Sci. 2018;27(6):619–27.

  54. Papageorgiou SN, Papadopoulos MA, Athanasiou AE. Reporting characteristics of meta-analyses in orthodontics: methodological assessment and statistical recommendations. Eur J Orthod. 2014;36(1):74–85.

  55. Faggion CM Jr, Monje A, Wasiak J. Appraisal of systematic reviews on the management of peri-implant diseases with two methodological tools. J Clin Periodontol. 2018;45(6):754–66.

  56. Faggion CM Jr, Cullinan MP, Atieh M. An overview of systematic reviews on the effectiveness of periodontal treatment to improve glycaemic control. J Periodontal Res. 2016;51(6):716–25.

  57. Wasiak J, Shen AY, Tan HB, Mahar R, Kan G, Khoo WR, et al. Methodological quality assessment of paper-based systematic reviews published in oral health. Clin Oral Investig. 2016;20(3):399–431.

  58. Viswanathan M, McPheeters ML, Murad MH, Butler ME, Devine EEB, Dyson MP, et al. AHRQ series on complex intervention systematic reviews-paper 4: selecting analytic approaches. J Clin Epidemiol. 2017;90:28–36.

  59. Rosselli D. The language of biomedical sciences. Lancet. 2016;387(10029):1720–1.

  60. Dobrescu A, Nussbaumer-Streit B, Klerings I, Wagner G, Persad E, Sommer I, et al. Restricting evidence syntheses of interventions to English-language publications is a viable methodological shortcut for most medical topics: a systematic review. J Clin Epidemiol. 2021;137:209–17.

Acknowledgements

Not applicable.

Funding

Open Access funding enabled and organized by Projekt DEAL. The authors declare that they have received no funding in context of this study.

Author information

Contributions

Dr. Max Clemens Menne contributed to data acquisition, analysis and interpretation, drafted the manuscript, and critically revised it. Dr. Naichuan Su contributed to the analysis and interpretation of data and critically revised the manuscript. Prof. Dr. Clovis Mariano Faggion Jr. contributed to the conception and design of the study, data acquisition, analysis and interpretation of data, and substantially and critically revised the manuscript. All authors approved the submitted version and agreed both to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even parts in which they were not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.

Corresponding author

Correspondence to Clovis M. Faggion Jr.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Literature Search Strategy.

Additional file 2.

Adapted AMSTAR-2 Checklist.

Additional file 3.

Excluded articles.

Additional file 4.

Included articles.

Additional file 5.

Systematic review characteristics.

Additional file 6.

Comparison to Faggion et al. 2012.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Menne, M.C., Su, N. & Faggion, C.M. Methodological quality of systematic reviews in dentistry including animal studies: a cross-sectional study. Ir Vet J 76, 33 (2023). https://doi.org/10.1186/s13620-023-00261-w
