
Validation of imaging reporting and data system of coronavirus disease 2019 lexicons CO-RADS and COVID-RADS with radiologists’ preference: a multicentric study

Abstract

Background

The purpose of this study was to assess the applicability and diagnostic efficacy of two standardized COVID-19 assessment schemes, the CO-RADS and COVID-RADS lexicons. In this retrospective multicentric study, 1439 chest CT studies with suspected coronavirus disease 2019 (COVID-19) affection were gathered. Three radiologists, blinded to each other's results, interpreted all studies using both lexicons, documenting the applicability of each score and the preferred score for every case.

Results

This study included 991 CT studies with available RT-PCR results. Almost perfect agreement was found for COVID-RADS among the three observers (Fleiss Kappa = 0.82), compared with substantial agreement for CO-RADS (Κ = 0.78). The preference records favored COVID-RADS over CO-RADS in 78.5%/21.5%, 75.5%/24.5%, and 73.4%/26.6% of the three radiologists' records, respectively. The ability to distinguish between positive and negative RT-PCR cases was 0.92 for COVID-RADS and 0.85 for CO-RADS. Regarding clinical diagnosis and clinical suspicion index, performance was 0.93 for COVID-RADS and 0.94 for CO-RADS. Very high to excellent agreement between the three observers for COVID-RADS/CO-RADS preference was found (Fleiss Kappa = 0.80 to 0.94). These results were statistically significant (p < 0.001).

Conclusion

Both lexicon scores (CO-RADS and COVID-RADS) were found applicable in the COVID-19 structured report, with COVID-RADS preferred in more than 50% of cases. The diagnostic accuracy of COVID-RADS against RT-PCR was higher than that of CO-RADS.

Background

Since the global coronavirus disease (COVID-19) outbreak emerged in late December 2019 and the World Health Organization (WHO) declared a pandemic in March 2020, questions regarding its cause, origin, mode of spread, and vaccination efficacy have grown, with more than 88 million confirmed cases and 1.9 million deaths by the first week of January 2021 [1,2,3]. There are suggestions that the COVID-19 situation will not resolve soon, given the progress of a second wave worldwide and postulations of emerging variants and mutations [4, 5].

The standard reference for COVID-19 diagnosis is reverse transcriptase-polymerase chain reaction (RT-PCR) due to its high sensitivity, but it has a lengthy turnaround time that may reach up to 72 h, and multiple negative RT-PCR test results may be needed to exclude the disease in the setting of high clinical suspicion [6].

Imaging has a leading role in solving the disease’s ambiguity through its application in diagnosis, management, and follow-up of cases. Some countries have even established imaging as an essential first-line diagnostic test [7].

To strengthen radiology's role in the pandemic emergency, the radiological report should be tailored to provide clear communication with the referring clinicians [8], preferably free of ambiguity, to facilitate comparison of results and guide appropriate patient care. This can be achieved by employing a structured report form linked to a well-defined scoring system, such as a reporting and data system (RADS) lexicon [9].

A few standardized assessment schemes have been developed for COVID-19 pulmonary involvement, among them the CO-RADS lexicon [10], developed by the Dutch radiological society, and the COVID-RADS lexicon, postulated by Salehi et al. [11]. The authors recommended that empirical studies validate their proposals. To the best of our knowledge, a single study, by Inui et al. [12], has compared the diagnostic performance of the different grading systems for COVID-19 chest CT findings.

The current study aims to assess both lexicons' (CO-RADS and COVID-RADS) applicability in structured reporting of COVID-19 affection by assessing both scores' diagnostic performance and estimating the interobserver reliability, performance, and preference agreement for both lexicon scores.

Methods

Study design and patient population

This is a retrospective multicentric study; ethical approval was issued by the local ethical committee, waiving the need for informed consent from the patients due to the study's nature.

From February 2020 to July 2020, 1439 consecutive chest CT studies with suspected COVID-19 affection were gathered from four imaging centers, three in Upper Egypt and one in the west of the Kingdom of Saudi Arabia (KSA).

A total of 448 studies were excluded: 47 due to technical insufficiency precluding proper interpretation or score assignment, and 401 due to missing RT-PCR results (Fig. 1).

Fig. 1 CONSORT participant flow diagram

To confirm sample size sufficiency, a sample size calculation was carried out using G*Power 3 software [13]. A minimum sample of 900 studies was needed to detect an effect size of 0.1 in the interrater reliability of COVID-RADS and CO-RADS, with an error probability of 0.05 and 95% power on a two-tailed test.

Image acquisition and interpretation

CT scans covered an area from the root of the neck down to the infra-diaphragmatic region. No intravenous contrast was used.

CT imaging protocols from different vendors were employed; the acquisition parameters are summarized in Table 1.

Table 1 Technical parameters employed in chest imaging within the multiple centers

Studies were reviewed on a dedicated workstation using a window width (WW) of 1500 and a window level (WL) of −400 for the lung window, and a WW of 450 and WL of 60 for the mediastinal window.

Lexicon score implementation

Three radiologists reviewed the CT images: two senior radiologists with 20 years of experience and a younger radiologist with 14 years of experience. Radiologists were blinded to each other's interpretations and did not have access to the RT-PCR results at the time of interpretation.

CT findings were categorized as atypical, fairly typical, or typical of COVID-19 affection. Combinations of typical or atypical findings were also categorized according to the proposed coronavirus disease 2019 (COVID-19) imaging reporting and data system and the RSNA consensus statement on chest CT findings of COVID-19 [7, 11, 14,15,16,17,18,19].

After categorizing these findings, radiologists applied both lexicon scores, CO-RADS [10] and COVID-RADS scores [11], for each case and translated the lexicon scores to different equivalent levels of suspicion of COVID-19 affection (Fig. 2). Radiologists performed a case-based score, evaluated the ease of applicability on applying each score, and documented which score was preferred in assessing this particular case (Figs. 3, 4, and 5).

Fig. 2 Different levels of COVID-19 suspicion and corresponding scores in both lexicons

Fig. 3 Typical CT findings of COVID-19 affection. A 48-year-old man with multiple ground-glass lesions and a crazy-paving pattern. Observers agreed on COVID-RADS score 3 and CO-RADS score 5. The preference of COVID-RADS to CO-RADS was 2:1. RT-PCR result was positive

Fig. 4 A 23-year-old woman with CT findings of consolidation in the left lower lung lobe with a tree-in-bud appearance. Observers agreed on COVID-RADS score 1 and CO-RADS score 2. The preference of COVID-RADS to CO-RADS was 1:2. RT-PCR result was negative

Fig. 5 A 36-year-old man with CT findings of a single ground-glass opacity. Observers assigned CO-RADS scores of 3 and 4 and agreed on COVID-RADS score 2A. The preference was in favor of COVID-RADS over CO-RADS (3:0). RT-PCR result was positive

We omitted the CO-RADS 0 score from our assessment, as we had already excluded cases with technical insufficiency (n = 47). Moreover, we did not include the CO-RADS 6 score, as the RT-PCR assay results were not revealed during interpretation. This omission facilitated the comparative study of both lexicons, as CO-RADS scores 0 and 6 have no corresponding scores in the COVID-RADS lexicon (Fig. 2).

Statistical analysis

Data were verified, coded, and analyzed by the researcher using SPSS version 24; Fleiss' Kappa analysis was carried out using the Fleiss' Kappa extension from the SPSS Statistics Extension Hub. Descriptive statistics (means, standard deviations, medians, interquartile ranges (IQR), and percentages) were calculated. Fleiss' Kappa was calculated to assess the reliability of agreement between readers; for CO-RADS and COVID-RADS, each observer's ratings were compared with those of the other observers. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was calculated for each observer, separately against the reference standards defined by RT-PCR alone and by RT-PCR together with clinical diagnosis. Spearman's rank correlation was employed to assess the correlation between the symptom-to-imaging interval and the clinical and imaging assessment results. A p value < 0.05 was considered significant.
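As a minimal illustration of the agreement statistic used throughout this study (the analysis itself was run with the SPSS Fleiss' Kappa extension), Fleiss' Kappa for multiple raters can be sketched in pure Python; the rating counts below are hypothetical:

```python
# Minimal sketch of the Fleiss' Kappa statistic for interobserver
# agreement. counts[i][j] = number of raters assigning case i to
# category j; every case must be rated by the same number of raters.
def fleiss_kappa(counts):
    n = len(counts)                 # number of cases
    m = sum(counts[0])              # raters per case (here 3)
    k = len(counts[0])              # number of categories
    # Overall proportion of assignments falling into each category
    p = [sum(row[j] for row in counts) / (n * m) for j in range(k)]
    # Per-case agreement: agreeing rater pairs / all possible pairs
    P_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]
    P_bar = sum(P_i) / n            # mean observed agreement
    P_e = sum(x * x for x in p)     # expected agreement by chance
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 3 raters scoring 4 cases into 3 hypothetical categories
ratings = [[3, 0, 0], [0, 3, 0], [2, 1, 0], [0, 0, 3]]
print(round(fleiss_kappa(ratings), 3))  # ≈ 0.745
```

On the common Landis–Koch scale, values above 0.8 (as reported here for COVID-RADS) indicate almost perfect agreement.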

Results

Patients and characteristics of the studied cohort

After excluding 448 studies, 991 cases were eligible, yielding 2973 readings in total (three per case). Positive RT-PCR results were encountered in 949 patients (95.7% of total cases), and negative RT-PCR was recorded in 42 (4.2%) patients (Fig. 1).

Of the included patients, 892 (90%) were Egyptian and 99 (9.9%) were Saudi; mean age was 44.82 ± 16.1 years (mean ± SD). The cohort included 558 male (56.3%) and 433 female (43.7%) patients.

The main presenting symptoms were cough and fever, in 879 (88.7%) cases. Diarrhea was among the most common presenting symptoms, encountered in 354 (35.7%) patients, more than a third of the sample, while anosmia was among the least common, observed in 109 (11%) patients (Fig. 6). The clinical suspicion index of COVID-19 infection was high in 625 (63.1%) patients and intermediate in 254 (25.6%) patients, while 112 (11.3%) patients showed a low clinical suspicion index; RT-PCR was repeated in 177 (17.9%) cases and was conclusive on the first test in 814 (82%) patients [20].

Fig. 6 Prevalence of clinical symptoms among the study cohort

Figure 7 shows that there was no more than a minimal, statistically insignificant negative correlation between the symptom-to-imaging interval (S-I interval) and the clinical suspicion index (CSI), either lexicon score (CO-RADS or COVID-RADS), or the preferred score among COVID-19 patients (r = −0.003 to −0.036, p > 0.05).

Fig. 7 Correlation matrix for the symptom-to-imaging interval (S-I interval) versus the clinical suspicion index (CSI), both lexicon scores (CO-RADS and COVID-RADS), and the preferred score among COVID-19 patients

Lexicon score performance in relation to different levels of COVID-19 affection

To compare the two lexicons' performance, radiologists applied both to every single case, with a feasibility of 100% for each lexicon.

There was absolute agreement in 2124 readings (71% of cases) for COVID-RADS, while with CO-RADS, agreement was confined to only 27% of cases (804 readings) (Fig. 1 and Table 2). This corresponded to almost perfect agreement in COVID-RADS among the three observers (K = 0.82), against substantial agreement in the three observers' overall reliability for CO-RADS (Κ = 0.78).

Table 2 Diagnostic findings according to observers and different levels of suspicion of COVID affection

COVID-RADS grades showed absolute agreement among the three observers within grades 0, 1, and 3. A slight disagreement was encountered in grades 2A and 2B (on average 2–3 cases).

On the other hand, when handling the CO-RADS lexicon, the three observers differed in their interpretation in one case each for CO-RADS scores 3 and 4, while disagreement in the documentation of CO-RADS score 5 was observed in two cases.

These agreement patterns were similar when translating both lexicon scores into a suspicion index (Fig. 2 and Table 2).

Interobserver reliability, performance, and preference agreement

The observers recorded that the COVID-RADS lexicon was applicable in 97.7% of cases, while CO-RADS categorization was applicable in 91.8% of cases for the first observer and 92.3% of cases for the second and third observers.

Observer preference favored the COVID-RADS lexicon, with 78.5%, 75.5%, and 73.4% for the three radiologists involved in the study, versus 21.5%, 24.5%, and 26.6%, respectively, for CO-RADS.

The ability to distinguish between positive and negative RT-PCR cases was 0.92 (95% CI: 0.85–0.96) for COVID-RADS and 0.85 (95% CI: 0.87–0.99) for CO-RADS. On the other hand, regarding clinical diagnosis and clinical suspicion index, both lexicons performed comparably: 0.93 (95% CI: 0.87–0.99) for COVID-RADS and 0.94 (95% CI: 0.90–0.99) for CO-RADS.
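The distinguishability figures above are ROC AUCs. For an ordinal lexicon score against a binary reference such as RT-PCR, the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counting one half), which can be sketched directly; the data below are hypothetical:

```python
# Sketch: AUC of an ordinal lexicon score against a binary reference
# standard, computed as the Mann-Whitney "probability of a win":
# P(score_pos > score_neg), with ties counted as 1/2.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

rt_pcr = [1, 1, 1, 0, 0]        # 1 = positive, 0 = negative (toy data)
covid_rads = [3, 3, 2, 1, 2]    # illustrative ordinal scores
print(round(auc(covid_rads, rt_pcr), 3))  # ≈ 0.917
```

The pairwise-comparison form makes the reported values directly interpretable: an AUC of 0.92 means a positive case outscores a negative one 92% of the time.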

Table 3 illustrates the interobserver reliability and performance for COVID-RADS/CO-RADS. For COVID-RADS, reliability was very high for observer 1 compared with the other two (observer 1 vs. observers 2 and 3; Fleiss Kappa = 0.89), high for observer 2 compared with the other two (Fleiss Kappa = 0.78), and high for observer 3 compared with the other two (Fleiss Kappa = 0.75); the overall reliability of the three observers was very high (Fleiss Kappa = 0.82). For CO-RADS, reliability was very high for observer 1 compared with the others (observer 1 vs. observers 2 and 3; Fleiss Kappa = 0.86) and very high between observers 2 and 3 (Fleiss Kappa = 0.91); the overall reliability of the three observers was high (Fleiss Kappa = 0.78).

Table 3 Interobserver reliability and performance

The interobserver preference agreement (Table 4) showed the agreement levels between observers regarding COVID-RADS/CO-RADS preference.

Table 4 Interobserver preference agreement

Our results showed very high agreement between the three observers for COVID-RADS/CO-RADS preference (observer 1 vs. observers 2 and 3; Fleiss Kappa = 0.86 and 0.80, respectively), and excellent agreement between observers 2 and 3 (Fleiss Kappa = 0.94). These results were statistically significant (p < 0.001).

Discussion

Chest CT has a high sensitivity for diagnosing COVID-19 [21]. Scientific societies differ in their recommendations for employing imaging in COVID-19 management, with many recommending against CT for screening. To enhance the role of imaging and chest CT, structured reporting has been established to improve communication between radiologists and clinicians, with recommendations to include a RADS system. Among the systems declared since the start of the global pandemic, the most popular lexicons are CO-RADS and COVID-RADS [21].

Both lexicons were created to categorize the level of suspicion of COVID-19 affection. Typical CT findings are graded as CO-RADS score 5 and COVID-RADS score 3, implying very high and high suspicion levels according to each lexicon, respectively. On the other hand, normal chest CT findings are categorized as CO-RADS score 1 or COVID-RADS score 0, reflecting unlikely occurrence or low suspicion, though neither score excludes COVID-19 affection [10, 11].

Moreover, atypical findings correspond to COVID-RADS score 1 and CO-RADS score 2, implying a low level of suspicion in both lexicons. Furthermore, the COVID-RADS lexicon includes score 2A, for fairly typical findings, and score 2B, which combines typical/fairly typical and atypical findings, while CO-RADS score 4 covers suspicious abnormalities with a high level of suspicion [10, 11].

CO-RADS score 3 reflects an indeterminate level of suspicion. Such a score is not included in the COVID-RADS lexicon [10, 11].
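The approximate correspondence between the two lexicons described above can be sketched as a lookup table; this is an illustration only, with suspicion labels paraphrased from the text, and the grade pairings inferred from the descriptions rather than taken from the official figures:

```python
# Approximate CO-RADS -> COVID-RADS correspondence as described above.
# CO-RADS 3 maps to None because COVID-RADS has no indeterminate grade;
# CO-RADS 0 and 6 are omitted, matching the study design.
CO_RADS_TO_COVID_RADS = {
    1: "0",      # normal CT: unlikely / low suspicion
    2: "1",      # atypical findings: low suspicion
    3: None,     # indeterminate: no COVID-RADS equivalent
    4: "2A/2B",  # fairly typical or mixed findings: high suspicion
    5: "3",      # typical findings: (very) high suspicion
}
print(CO_RADS_TO_COVID_RADS[5])  # prints 3
```

The `None` entry makes the structural asymmetry explicit: any comparison between the lexicons must decide how to handle CO-RADS 3 cases, which have no counterpart grade.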

The CO-RADS lexicon contains grade 0 for technically insufficient studies and grade 6 for RT-PCR-positive cases [10]. These two grades were not included in our study, to facilitate comparison between the lexicons, as COVID-RADS does not include corresponding categories.

The clinical picture of COVID-19 disease is quite variable [22]. Among our patient cohort, cough, fever, and diarrhea were the most common presenting symptoms, to the point that some authors have considered fecal-oral transmission a potential transmission route [23]. While Menni et al. [24] reported loss of taste and smell as pathognomonic symptoms, the current study recorded them among the least common presenting symptoms, in agreement with Gautier et al. [25].

The majority of cases had a high clinical suspicion index as the study was carried out during the pandemic phase. This may also explain the rush to perform CT imaging in the disease’s early phases [18].

Sultan et al. [26] documented a significant difference in pulmonary CT findings of COVID-19 with variation in the duration from clinical presentation onset. Ding et al. [27] defined six stages of different durations in the course of the disease; accordingly, Prokop et al.'s [10] subjects fell into the second and third groups, while the current study cohort was categorized in the first two stages.

This early tendency to perform imaging could be explained by the fact that, in developing countries, CT imaging may be the only available diagnostic modality, given shortages of laboratory kits in the face of a spike in patient numbers or other logistic strains; developed countries are not exempt from these circumstances [24, 28].

Although Ding et al. [27] reported that disease findings change rapidly in the early stages, the current study indicated no more than minimal correlation between the symptom-to-imaging interval and the clinical suspicion index, either lexicon score (CO-RADS or COVID-RADS), or the preferred score. On the other hand, Pan et al. [29] concluded that the greatest affection likely occurs about 10 days after symptom onset.

Comparing the two lexicons' performance, almost perfect agreement in COVID-RADS was found among the three observers (K = 0.82), against substantial agreement in the three observers' overall reliability for CO-RADS (Κ = 0.78); similar results were reported by Prokop et al. [10]. Inui et al. [12] also reported that both CO-RADS and COVID-RADS provided reasonable agreement in reporting COVID-19 chest CT findings.

On the other hand, Prokop et al. documented that the indeterminate category CO-RADS 3 offered little diagnostic efficacy during the declared COVID-19 pandemic. This could explain why the COVID-RADS lexicon did not include a grade for indeterminate lesions.

In the current study, radiologists recorded 100% feasibility for both lexicons, with a score assignable in all cases, and both lexicons were easily applied in more than 90% of cases across the different levels of COVID-19 affection. Moreover, the observers preferred COVID-RADS in more than 50% of cases.

We attribute this preference to the fact that COVID-RADS specifies clear CT criteria for the different typical and atypical findings. Employing this lexicon amounts to reviewing the case, checking the criteria, and ticking a checklist, hence arriving at the grade and the implied suspicion of viral infection. The COVID-RADS lexicon is also well organized, as its postulation was based on an evidence-based systematic review.

In contrast, the Dutch group developed the CO-RADS score in the pandemic’s acute stage with rapidly increasing cases and resource restrictions. This was acknowledged among their limitations [10].

Among this study's strengths is multicentric enrollment from four centers with different levels of exposure to the COVID-19 pandemic, formulating a representative sample of COVID-19 affection.

The radiologists' levels of experience were close, with narrow differences, and we did not face significant differences in interpretation; this confirms the applicability of both scores and endorses employing a RADS lexicon within a structured report.

The current study has a few limitations: its retrospective nature, the non-inclusion of a severity score of lung affection, and the unavailability of the median interval between imaging and RT-PCR.

In facing the global emergency of COVID-19, we recommend employing a structured report form to facilitate interpretation and improve communication with referring clinicians, overcoming the time factor in the management of suspected COVID-19 patients, while basing the final diagnosis on clinical, laboratory, and imaging findings and, finally, a confirmed RT-PCR assay.

Conclusion

In conclusion, both lexicon scores (CO-RADS and COVID-RADS) are applicable in the COVID-19 structured report, with COVID-RADS preferred in more than 50% of cases. The diagnostic performance of COVID-RADS against RT-PCR was 0.92, while that of CO-RADS was 0.85.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AUC: Area under the (ROC) curve

CI: Confidence interval

COVID-19: Coronavirus disease 2019

CT: Computed tomography

KSA: Kingdom of Saudi Arabia

RADS: Reporting and data system

ROC: Receiver operating characteristic

RSNA: Radiological Society of North America

RT-PCR: Reverse transcription-polymerase chain reaction

WL: Window level

WW: Window width

References

  1. Ng M-Y, Wan EYF, Wong HYF, Leung ST, Lee JCY, Chin TW-Y, Lo CSY, Lui MMS, Chan EHT, Fong AHT, Fung SY, Ching OH, Chiu KWH, Chung TWH, Vardhanbhuti V, Lam HYS, To KKW, Chiu JLF, Lam TPW, Khong PL, Liu RWT, Chan JWM, Wu AKL, Lung KC, Hung IFN, Lau CS, Kuo MD, Ip MSM (2020) Development and validation of risk prediction models for COVID-19 positivity in a hospital setting. Int J Infect Dis 101:74–82. https://doi.org/10.1016/j.ijid.2020.09.022
  2. Manigandan S, Wu M-T, Ponnusamy VK, Raghavendra VB, Pugazhendhi A, Brindhadevi K (2020) A systematic review on recent trends in transmission, diagnosis, prevention and imaging features of COVID-19. Process Biochem 98:233–240. https://doi.org/10.1016/j.procbio.2020.08.016
  3. Boerma EC, Bethlehem C, Stellingwerf F, de Lange F, Streng KW, Koetsier PM, Bootsma IT (2021) Hemodynamic characteristics of mechanically ventilated COVID-19 patients: a cohort analysis. Crit Care Res Pract 2021:1–7. https://doi.org/10.1155/2021/8882753
  4. Dehelean CA, Lazureanu V, Coricovac D, Mioc M, Oancea R, Marcovici I, Pinzaru I, Soica C, Tsatsakis AM, Cretu O (2020) SARS-CoV-2: repurposed drugs and novel therapeutic approaches—insights into chemical structure—biological activity and toxicological screening. J Clin Med 9(7):2084. https://doi.org/10.3390/jcm9072084
  5. Horton R (2020) Offline: the second wave. Lancet 395(10242):1960. https://doi.org/10.1016/S0140-6736(20)31451-3
  6. Waller JV, Kaur P, Tucker A, Lin KK, Diaz MJ, Henry TS, Hope M (2020) Diagnostic tools for Coronavirus disease (COVID-19): comparing CT and RT-PCR viral nucleic acid testing. Am J Roentgenol 215(4):834–838. https://doi.org/10.2214/AJR.20.23418
  7. Soufi GJ, Hekmatnia A, Nasrollahzadeh M, Shafiei N, Sajjadi M, Iravani P, Fallah S, Iravani S, Varma RS (2020) SARS-CoV-2 (COVID-19): new discoveries and current challenges. Appl Sci 10(10):3641. https://doi.org/10.3390/app10103641
  8. Nobel JM, Kok EM, Robben SGF (2020) Redefining the structure of structured reporting in radiology. Insights Imaging 11(1):10. https://doi.org/10.1186/s13244-019-0831-6
  9. Gross A, Heine G, Schwarz M, Thiemig D, Gläser S, Albrecht T (2021) Structured reporting of chest CT provides high sensitivity and specificity for early diagnosis of COVID-19 in a clinical routine setting. Br J Radiol 94(1117):20200574. https://doi.org/10.1259/bjr.20200574
  10. Prokop M, van Everdingen W, van Rees Vellinga T, Quarles van Ufford H, Stöger L, Beenen L et al (2020) CO-RADS: a categorical CT assessment scheme for patients suspected of having COVID-19—definition and evaluation. Radiology 296(2):E97–E104. https://doi.org/10.1148/radiol.2020201473
  11. Salehi S, Abedi A, Balakrishnan S, Gholamrezanezhad A (2020) Coronavirus disease 2019 (COVID-19) imaging reporting and data system (COVID-RADS) and common lexicon: a proposal based on the imaging data of 37 studies. Eur Radiol 30(9):4930–4942. https://doi.org/10.1007/s00330-020-06863-0
  12. Inui S, Kurokawa R, Nakai Y, Watanabe Y, Kurokawa M, Sakurai K, Fujikawa A, Sugiura H, Kawahara T, Yoon SH, Uwabe Y, Uchida Y, Gonoi W, Abe O (2020) Comparison of chest CT grading systems in coronavirus disease 2019 (COVID-19) pneumonia. Radiol Cardiothorac Imaging 2(6):e200492. https://doi.org/10.1148/ryct.2020200492
  13. Faul F, Erdfelder E, Lang A-G, Buchner A (2007) G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 39(2):175–191. https://doi.org/10.3758/BF03193146
  14. Valdivia AR, Chaudhuri A (2020) A need for consensus on mortality reporting related to the coronavirus disease-2019 pandemic in ongoing and future vascular registries and trials. J Vasc Surg 72(4):1507. https://doi.org/10.1016/j.jvs.2020.06.013
  15. Fang Y, Zhang H, Xie J, Lin M, Ying L, Pang P, Ji W (2020) Sensitivity of Chest CT for COVID-19: comparison to RT-PCR. Radiology 296(2):E115–E117. https://doi.org/10.1148/radiol.2020200432
  16. Sabri YY, Nassef AA, Ibrahim IMH, Abd El Mageed MR, Khairy MA (2020) CT chest for COVID-19, a multicenter study—experience with 220 Egyptian patients. Egypt J Radiol Nucl Med 51(1):144. https://doi.org/10.1186/s43055-020-00263-6
  17. Xu G, Yang Y, Du Y, Peng F, Hu P, Wang R et al (2020) Clinical pathway for early diagnosis of COVID-19: updates from experience to evidence-based practice. Clin Rev Allergy Immunol 59(1):89–100. https://doi.org/10.1007/s12016-020-08792-8
  18. Yang W, Cao Q, Qin L, Wang X, Cheng Z, Pan A, Dai J, Sun Q, Zhao F, Qu J, Yan F (2020) Clinical characteristics and imaging manifestations of the 2019 novel coronavirus disease (COVID-19): a multi-center study in Wenzhou city, Zhejiang, China. J Infect 80(4):388–393. https://doi.org/10.1016/j.jinf.2020.02.016
  19. Rubin GD, Ryerson CJ, Haramati LB, Sverzellati N, Kanne JP, Raoof S, Schluger NW, Volpi A, Yim JJ, Martin IBK, Anderson DJ, Kong C, Altes T, Bush A, Desai SR, Goldin J, Goo JM, Humbert M, Inoue Y, Kauczor HU, Luo F, Mazzone PJ, Prokop M, Remy-Jardin M, Richeldi L, Schaefer-Prokop CM, Tomiyama N, Wells AU, Leung AN (2020) The role of chest imaging in patient management during the COVID-19 pandemic. Chest 158(1):106–116. https://doi.org/10.1016/j.chest.2020.04.003
  20. Wee LE, Fua T, Chua YY, Ho AFW, Sim XYJ, Conceicao EP et al (2020) Containing COVID-19 in the emergency department: the role of improved case detection and segregation of suspect cases. Acad Emerg Med 27(5):379–387. https://doi.org/10.1111/acem.13984
  21. Xu B, Xing Y, Peng J, Zheng Z, Tang W, Sun Y, Xu C, Peng F (2020) Chest CT for detecting COVID-19: a systematic review and meta-analysis of diagnostic accuracy. Eur Radiol 30(10):5720–5727. https://doi.org/10.1007/s00330-020-06934-2
  22. Dalglish SL (2020) COVID-19 gives the lie to global health expertise. Lancet 395(10231):1189. https://doi.org/10.1016/S0140-6736(20)30739-X
  23. Gu J, Han B, Wang J (2020) COVID-19: Gastrointestinal manifestations and potential fecal–oral transmission. Gastroenterology 158(6):1518–1519. https://doi.org/10.1053/j.gastro.2020.02.054
  24. Menni C, Valdes AM, Freidin MB, Sudre CH, Nguyen LH, Drew DA, Ganesh S, Varsavsky T, Cardoso MJ, el-Sayed Moustafa JS, Visconti A, Hysi P, Bowyer RCE, Mangino M, Falchi M, Wolf J, Ourselin S, Chan AT, Steves CJ, Spector TD (2020) Real-time tracking of self-reported symptoms to predict potential COVID-19. Nat Med 26(7):1037–1040. https://doi.org/10.1038/s41591-020-0916-2
  25. Gautier J, Ravussin Y (2020) A new symptom of COVID-19: loss of taste and smell. Obesity 28(5):848–848. https://doi.org/10.1002/oby.22809
  26. Sultan OM, Al-Tameemi H, Alghazali DM, Abed M, Ghniem MNA, Hawiji DA et al (2020) Pulmonary CT manifestations of COVID-19: changes within 2 weeks duration from presentation. Egypt J Radiol Nucl Med 51(1):105. https://doi.org/10.1186/s43055-020-00223-0
  27. Ding X, Xu J, Zhou J, Long Q (2020) Chest CT findings of COVID-19 pneumonia by duration of symptoms. Eur J Radiol 127:109009. https://doi.org/10.1016/j.ejrad.2020.109009
  28. Kanne JP, Little BP, Chung JH, Elicker BM, Ketai LH (2020) Essentials for radiologists on COVID-19: an update—radiology scientific expert panel. Radiology 296(2):E113–E114. https://doi.org/10.1148/radiol.2020200527
  29. Pan F, Ye T, Sun P, Gui S, Liang B, Li L, Zheng D, Wang J, Hesketh RL, Yang L, Zheng C (2020) Time course of lung changes at chest CT during recovery from coronavirus disease 2019 (COVID-19). Radiology 295(3):715–721. https://doi.org/10.1148/radiol.2020200370

Acknowledgements

Not applicable

Funding

No external funding was received for this study.

Author information

Affiliations

Authors

Contributions

HA is the guarantor of the integrity of the entire study. Study concepts and design were done by HA, HH, RE, AG, WA, and ME. Literature research was done by HA, RE, and ME. Clinical studies were done by HA, HH, and ME. HA, RE, WA, and AG did the experimental studies/data analysis. AG and WA did the statistical analysis. HA, HH, RE, and ME prepared the manuscript. HA and HH edited the manuscript. The authors have read and approved the final manuscript.

Corresponding author

Correspondence to Haisam Atta.

Ethics declarations

Ethics approval and consent to participate

Approval by the institutional review board of the South Egypt Cancer Institute was obtained (code SECI-IRB-IORG0006563, No. 505) on 30 June 2020; the requirement for informed consent was waived by the local ethical committee.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Atta, H., Hasan, H.A., Elmorshedy, R. et al. Validation of imaging reporting and data system of coronavirus disease 2019 lexicons CO-RADS and COVID-RADS with radiologists’ preference: a multicentric study. Egypt J Radiol Nucl Med 52, 109 (2021). https://doi.org/10.1186/s43055-021-00485-2


Keywords

  • COVID-19
  • Pandemics
  • Pneumonia, viral
  • Tomography, X-ray computed