
Post-operative breast imaging: a management dilemma. Can mammographic artificial intelligence help?

Abstract

Background

Imaging of the postoperative breast is a challenging task for the interpreting physician, with many variable findings that may require additional assessment through targeted ultrasound, additional mammography views, or other investigations. Artificial intelligence (AI) is a fast-developing field with various applications in breast imaging, including the detection and classification of lesions, the prediction of therapy response, and the prediction of breast cancer risk. This study aimed to identify whether artificial intelligence improves the mammographic detection and diagnosis of postoperative breast changes and hence improves follow-up and the diagnostic workflow, reducing the need for additional radiation or contrast material exposure (as in contrast-enhanced mammography) and for interventional procedures such as biopsy.

Methods

This cross-sectional analytic study included 66 female patients who had undergone breast-conserving surgery and presented with breast complaints or for follow-up, with mammographically detected post-operative changes.

Results

Mammography had a sensitivity of 91.7%, a specificity of 94.4%, a positive predictive value (PPV) of 78.6%, a negative predictive value (NPV) of 98.1%, and an accuracy of 93.9%, while the AI method had a sensitivity of 91.7%, a specificity of 92.6%, a PPV of 73.3%, an NPV of 98%, and an accuracy of 92.4%. The calculated cut-off point for the quantitative AI (probability of malignancy "POM" score) was 51.5%. The mean POM score was statistically significantly higher in malignant cases (76.5 ± 27.3%) than in benign cases (27.1 ± 19.7%). The combined use of mammography and AI yielded a sensitivity of 100%, a specificity of 88.9%, a PPV of 66.7%, an NPV of 100%, and an accuracy of 90.9%.

Conclusion

Applying the AI algorithm to mammograms had a positive impact on the sensitivity of postoperative breast assessment, with a marked reduction in cancers missed on mammography.

Background

Nowadays, with the increasing number of patients undergoing breast-conserving treatment (BCT), it is important to support innovations in technology, such as artificial intelligence, that improve postoperative breast imaging interpretation [1].

Radiologists find it challenging to interpret the expected mammographic findings after breast-conserving treatment, which include skin thickening, parenchymal edema, fluid collection, fat necrosis, scar distortion, dystrophic calcifications, and recurrence [2]. This is especially true in patients with dense breasts, findings with subtle patterns, or findings obscured by the post-operative parenchymal distortion. Thus, AI techniques have been developed to act as added readers in identifying suspicious lesions [3].

When artificial intelligence-based computer-assisted diagnosis (AI-CAD) is employed as an additional tool to mammography, it has demonstrated comparable diagnostic results and significantly improved radiologists' performance [4].

For each input mammogram, the Lunit INSIGHT MMG AI software produces four-view heat maps and an abnormality score, defined as the maximum of the cranio-caudal and medio-lateral oblique abnormality scores [5].
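As a rough illustration of how such a per-breast score relates to the per-view scores, consider the minimal sketch below; the function name and inputs are hypothetical and do not reflect the vendor's actual implementation or API.

```python
# Minimal sketch (hypothetical names): the per-breast abnormality score described above
# is simply the larger of the two per-view scores (CC and MLO), each on a 0-100% scale.
def breast_abnormality_score(cc_score: float, mlo_score: float) -> float:
    return max(cc_score, mlo_score)

# Example: CC view scored 42%, MLO view scored 67% -> the breast-level score is 67%.
print(breast_abnormality_score(42.0, 67.0))  # 67.0
```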

The current study aims to evaluate how incorporating artificial intelligence into mammography could enhance the detection of postoperative breast changes, improve diagnostic workflow, and reduce the need for interventional procedures such as biopsies or additional radiation or contrast material exposure. Furthermore, we compared the outcomes with those of mammography.

Methods

Study population and their inclusion and exclusion criteria

This prospective research included 66 female patients who underwent breast-conserving surgeries and came to our institution during the period from December 2022 to May 2023. Their ages ranged from 29 to 76 years (mean 53.1 years, ± 9.8 SD).

Inclusion criteria

  • Female patients who underwent breast-conserving surgery and had post-operative mammographic changes.

Exclusion criteria

  • Patients who did not undergo breast-conserving surgery or who underwent mastectomy.

  • Patients contraindicated for mammography, e.g., pregnancy.

  • Patients who missed their pathological data or follow-up examinations, or who withdrew consent at any time.

Demographic data and history-taking

All patients gave written informed consent, and approval was obtained from our institutional ethics committee. A comprehensive demographic and clinical history was obtained, including name, age, marital status, number of children, phone number, family history, illness duration, and history of prior illness.

Techniques

Every patient underwent mammography; ultrasound was performed as part of the routine work-up, but its results were not included in this study. The mammograms were then processed by the AI system. The images were analyzed collaboratively by two consultant radiologists (each with over ten years of experience in breast imaging and about three years of experience in AI-aided reading) who were blinded to the final results. Finally, their analytical results and the AI results were categorized as true or false based on their correlation with the final results, established either by follow-up (for at least 6 months or more) or by pathology (tru-cut biopsy or aspiration cytology).

Mammography was performed using full-field digital mammography equipment (Amulet Innovality, Fujifilm Global Company, Japan). For each breast, two standard mammographic views (CC and MLO) were obtained. A five-megapixel "Bellus" workstation supported the mammography equipment.

For the AI technique, the mammography images were processed using Lunit INSIGHT MMG (Korea, 2019 version). Each image was analyzed by the computer-aided detection (CAD) system, which produced a heat map for qualitative analysis and a probability of malignancy (POM) score (0–100%) for quantitative analysis.

The computer-assisted system was composed of two separate units: the display unit, a dedicated mammography auto-viewer monitor that displayed low-spatial-resolution digital images of the examination hung in the panels above, and the processing unit, which digitized and evaluated the images. Each digital image could carry zero or more marks indicating areas that required dedicated radiologist evaluation.

The AI output was evaluated quantitatively based on the POM score (ranging from 1 to 100%), with 100% denoting the highest suspicion level and 1% the lowest.

For the qualitative analysis of the AI images, the heat map of each image was evaluated based on the color assigned by the AI software system, as follows:

  • Blue (or green) color occupying nearly all of the marked area.

  • Green (or blue) color occupying nearly all of the marked area.

  • Yellow (or yellow/orange) color occupying nearly all of the marked area.

  • Red (or red/orange) color occupying nearly all the marked area.

Cases with blue or green or blue/green colors were considered benign, while those with yellow or red (± orange) colors were considered malignant.
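A minimal sketch of this color-based dichotomization follows; the color-category strings are assumptions made for illustration and are not the AI software's actual output labels.

```python
# Sketch of the qualitative (heat map) reading rule used in this study:
# blue/green dominant color -> benign; yellow/red (+/- orange) dominant color -> malignant.
# The category labels below are illustrative assumptions, not the software's own labels.
BENIGN_COLORS = {"blue", "green", "blue/green"}
MALIGNANT_COLORS = {"yellow", "yellow/orange", "red", "red/orange"}

def qualitative_ai_call(dominant_color: str) -> str:
    color = dominant_color.lower()
    if color in BENIGN_COLORS:
        return "benign"
    if color in MALIGNANT_COLORS:
        return "malignant"
    raise ValueError(f"Unexpected heat map color: {dominant_color}")

# Example: a marked area that is predominantly red is read as malignant.
print(qualitative_ai_call("red"))  # malignant
```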

The diagnostic efficacy of both AI and mammography was evaluated, and their results were correlated with pathology or follow-up studies (based on the assigned BI-RADS, Breast Imaging Reporting and Data System, category). For benign cases (BI-RADS 2 or 3), follow-up mammography at least 6 months later was the reference standard, whereas histopathological examination was performed and used as the reference for patients with BI-RADS 4 lesions. We also calculated the cut-off value for the POM score (quantitative AI).

Statistical analysis

The blinded AI and mammography results were correlated with the final results using the Statistical Package for the Social Sciences (SPSS). Quantitative data were presented as mean, standard deviation (SD), minimum, maximum, and median, while qualitative data were expressed as frequency (count) and relative frequency (%). The Mann–Whitney test was used to compare non-parametric quantitative variables, and the Chi-square (χ²) test was used to compare qualitative data; when the expected frequency was less than 5, an exact test was used instead. The best cut-off value of the AI (POM) score for detecting malignancy was determined using receiver operating characteristic (ROC) curve and area under the curve (AUC) analysis. The standard diagnostic indices (sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy) were also calculated, and logistic regression was performed to predict malignancy when mammography and AI were combined. P values below 0.05 were considered statistically significant.
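As a rough illustration of the analyses described above, the sketch below computes the standard diagnostic indices from a 2 × 2 table and derives a POM cut-off from an ROC curve. It assumes NumPy and scikit-learn; the authors used SPSS, and their exact cut-off criterion is not stated, so the Youden index is used here as a common choice.

```python
# Illustrative sketch of the diagnostic-index and ROC cut-off calculations described above.
# Assumes NumPy and scikit-learn; the Youden index is an assumed cut-off criterion.
import numpy as np
from sklearn.metrics import roc_curve, auc

def diagnostic_indices(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard diagnostic indices from a 2 x 2 table of test vs. reference results."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

def best_pom_cutoff(y_true, pom_scores):
    """Return (optimal cut-off, AUC) using the Youden index (sensitivity + specificity - 1)."""
    fpr, tpr, thresholds = roc_curve(np.asarray(y_true), np.asarray(pom_scores))
    youden = tpr - fpr
    return float(thresholds[youden.argmax()]), float(auc(fpr, tpr))
```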

Results

This study included 66 postoperative female patients whose ages ranged from 29 to 76 years (mean 53.1 ± 9.8 SD).

The final results, obtained either by follow-up (for at least 6 months or more) or by pathology (tru-cut biopsy or aspiration cytology), revealed that 54 of the 66 examined cases (81.8%) were benign, while 12 (18.2%) were malignant.

About half of the participants (34 cases, 51.5%) had right breast findings, while 32 cases (48.5%) had left breast findings.

I. Analysis of cases according to digital mammography

The findings detected by mammography among the participants were as follows: calcifications were present in about one-third of the participants (22 cases, 33.3%), of which 7 cases (31.8%) were suspicious calcifications (pleomorphic or amorphous in shape, micro-calcifications in size, grouped or segmental in distribution). All participants (100%) had distortion. Asymmetry was seen in 9.1% of the participants (6 cases), while a mass was detected in 13.6% (9 cases).

The final BI-RADS classification based on mammography showed that 38 cases (57.6%) were categorized as BI-RADS 2, 14 cases (21.2%) as BI-RADS 3, and 14 cases (21.2%) as BI-RADS 4. Considering BI-RADS 2 and 3 as benign and BI-RADS 4 as malignant, mammography classified 52 cases (78.8%) as benign and 14 cases (21.2%) as malignant.

The mammography and final results showed good agreement, being concordant in 62/66 cases (93.9%), with a Kappa of 0.809 and a P value of < 0.001. Compared to the final results, the mammography results showed 11 true-positive, 51 true-negative, 3 false-positive, and 1 false-negative cases. The mammography diagnostic indices for assessing postoperative breast changes were 91.7% sensitivity, 94.4% specificity, 78.6% PPV, 98.1% NPV, and 93.9% overall accuracy.
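As a quick consistency check, these counts reproduce the reported indices; a minimal, self-contained sketch (illustrative only):

```python
# Consistency check: the mammography 2 x 2 counts reported above give the stated indices.
tp, tn, fp, fn = 11, 51, 3, 1
print(f"sensitivity = {tp / (tp + fn):.1%}")                   # 11/12 = 91.7%
print(f"specificity = {tn / (tn + fp):.1%}")                   # 51/54 = 94.4%
print(f"PPV         = {tp / (tp + fp):.1%}")                   # 11/14 = 78.6%
print(f"NPV         = {tn / (tn + fn):.1%}")                   # 51/52 = 98.1%
print(f"accuracy    = {(tp + tn) / (tp + tn + fp + fn):.1%}")  # 62/66 = 93.9%
```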

II. Analysis of cases according to the artificial intelligence

1. The distribution of cases based on the AI's heat map (qualitative analysis)

We found that 30 of the 66 examined cases showed a blue color (45.5%), 21 cases were green (31.8%), 9 cases were yellow or yellow/orange (13.6%), and 6 cases were red or red/orange (9.1%). The blue and green colors were considered benign descriptors, while the yellow and red colors were considered malignant. As a result, qualitative AI categorized about three-quarters of the participants (51 cases, 77.3%) as having benign lesions, while 15 cases (22.7%) were classified as malignant.

2. The distribution of cases according to the probability of malignancy (POM) scoring (quantitative analysis)

The acquired POM values for all cases ranged from 0 to 99% (mean 31.6 ± 29.9). Malignant cases showed a significantly higher mean POM value (76.5 ± 27.3) compared to benign cases (mean 21.7 ± 19.7) (P value < 0.001) (Fig. 1).

Fig. 1 Relation between quantitative AI and breast cancer diagnosis

Plotting the true-positive rate (sensitivity) against the false-positive rate (1 − specificity) in an ROC curve analysis (AUC = 0.906) showed that 51.5% was the optimal cut-off point for the POM score (Fig. 2).

Fig. 2 ROC curve for analysis of POM values

Using the calculated POM cut-off value of 51.5% to differentiate benign from malignant cases, 51 of the 66 examined cases (77.3%) were identified as benign, of which 1 case (2%) proved malignant on the final results (false negative). The remaining 15 cases (22.7%) were considered malignant, of which 4 (26.7%) eventually proved benign on the final results (false positive) (Table 1).

Table 1 Relation between Quantitative AI and final results (according to our calculated cut-off value = 51.5%)

Combined results of the qualitative and quantitative AI analyses

The AI and final results showed good agreement, being concordant in 61/66 cases (92.4%), with a Kappa of 0.852 and a P value of < 0.001. Compared to the final results, the AI results showed 11 true-positive, 50 true-negative, 4 false-positive, and 1 false-negative cases. The sensitivity, specificity, PPV, NPV, and overall accuracy of the collective AI results (qualitative and quantitative) correlated with the final results were 91.7%, 92.6%, 73.3%, 98%, and 92.4%, respectively, with a P value of < 0.001 (Figs. 3, 4, 5, 6 and 7).

Fig. 3 Follow-up of a 63-year-old female patient after left breast-conserving surgery. a: Mammography (CC, MLO views) showed left post-operative changes in the form of UOQ scarring, distortion, calcifications, and skin thickening (BI-RADS 2). b: AI highlighted the area of left post-operative changes with a faint blue color (considered benign) and a 33% risk of malignancy (yellow circle), considered benign according to our calculated cut-off value of 51.5%. The appearance and BI-RADS category were unchanged on review and comparison with her previous studies from the last year

Fig. 4 A 65-year-old female patient after right breast-conserving surgery complaining of a palpable right breast lump. a: Mammography (CC, MLO views) showed right post-operative changes along with a suspicious dense mass lesion with partially ill-defined margins in the right UOQ (BI-RADS 4). b: AI highlighted the area of the right suspicious lesion in red (considered malignant) with a 99% risk of malignancy (orange circle), considered malignant according to our calculated cut-off value of 51.5%. Biopsy pathology proved invasive duct carcinoma, so both AI and mammography successfully detected the suspicious right lesion

Fig. 5 Follow-up of a 40-year-old female patient after right breast-conserving surgery. a: Mammography (CC, MLO views) showed right UOQ post-operative changes along with a suspicious ill-defined dense mass lesion with partially obscured margins at the operative bed (BI-RADS 4). b: AI highlighted the area of the right UOQ suspicious lesion in red, with extensions (orange) larger than seen on mammography (considered malignant), and a 99% risk of malignancy (yellow circle), considered malignant according to our calculated cut-off value of 51.5%. Biopsy pathology proved invasive duct carcinoma, so AI detected the suspicious lesion as mammography did, but with better localization and detection of the extent of the affected tissue

Fig. 6 Follow-up of a 43-year-old female patient after left breast-conserving surgery. a: Mammography (CC, MLO views) showed extensive operative scarring and scattered microcalcifications (no clustering) with an underlying newly seen (developing) ill-defined UOQ focal asymmetry (better seen in the MLO view, yellow arrow) (BI-RADS 4). b: AI highlighted the area of the left UOQ suspicious lesion in orange to yellow (considered malignant) with an 89% risk of malignancy (yellow circle), considered malignant according to our calculated cut-off value of 51.5%. Complementary ultrasound showed underlying abscess formation (BI-RADS 3), and aspiration cytology proved an inflammatory smear, negative for malignant cells. Thus, both AI and mammography failed to recognize the benign nature of the lesion, giving false-positive results

Fig. 7 Follow-up of a 51-year-old female patient after right breast-conserving surgery. a: Mammography (CC, MLO views) showed a partially ill-defined right UOQ mass lesion (BI-RADS 4). b: AI highlighted the right UOQ lesion in blue (considered benign) with a 35% risk of malignancy (yellow circle), considered benign according to our calculated cut-off value of 51.5%. Biopsy pathology proved sclerosing adenosis, so AI performed better than mammography: it correctly identified the benign nature of the lesion, while mammography failed to recognize it, giving a false-positive result

III. The added value of using mammography in combination with AI

Using the combined method, 48 of the 66 examined cases (72.7%) were identified as benign, none of which proved malignant on the final results (no false negatives). The remaining 18 cases (27.3%) were considered malignant by the combined method, of which 6 (33.3%) proved benign on the final results (false positives). The combined method's diagnostic indices (compared to the final results) were: sensitivity 100%, specificity 88.9%, PPV 66.7%, NPV 100%, and overall accuracy 90.9% (Table 2).
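The exact combination rule is not spelled out in the text, but a reading consistent with the reported counts (12 true positives, 6 false positives, no false negatives) is that a case is called positive when either modality is positive. The sketch below illustrates this assumed "either-positive" rule; it is an interpretation, not the authors' stated method.

```python
# Hedged sketch of an assumed "either-positive" combination rule: a case is called
# positive if mammography (BI-RADS 4) OR AI (malignant heat map color / POM >= 51.5%)
# flags it. This raises sensitivity (no cancer is missed by both readers) at the cost
# of additional false positives, matching the pattern of the reported combined indices.
def combined_call(mammo_positive: bool, ai_positive: bool) -> bool:
    return mammo_positive or ai_positive

# Example: a cancer read as BI-RADS 3 on mammography but flagged by AI is still
# called positive by the combined method.
print(combined_call(mammo_positive=False, ai_positive=True))  # True
```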

Table 2 Diagnostic indices of the combined Mammogram and AI

Lastly, a comparison of the results and the diagnostic indices for the various imaging methods conducted in this study is displayed in Tables 3 and 4.

Table 3 The results of the different modalities used in the study as correlated to the final results
Table 4 Diagnostic indices of the different modalities used in the study

Discussion

Artificial intelligence is anticipated to have potential applications in breast cancer detection, determining the extent of the cancer within the breast and interpreting pathological findings with accuracy close to that of a human reader, thus reducing false-positive results and saving radiologists’ time and effort [6]. It can also help to interpret challenging postoperative breast mammograms accurately [3].

In this study, we aimed to identify whether mammographic artificial intelligence improves the detection and diagnosis of postoperative breast changes, thereby improving the diagnostic workflow, reducing the need for additional techniques, and avoiding missed cancers.

In the current study, the calculated abnormality score (POM) cut-off value for predicting breast cancer by quantitative AI was 51.5%, whereas Badawy et al. [7] used a cut-off value of 59%.

In this study, we showed that the quantitative artificial intelligence (POM) score was significantly higher in malignant cases than in benign cases (P value < 0.001). This matched many other studies, such as Mansour et al. [8] and Berg et al. [9].

In our study, the specificity of AI alone (92.6%) for characterizing postoperative breast findings was close to, though slightly lower than, that of mammography (94.4%). Aljondi et al. [10], in contrast, found that AI had a significantly higher specificity (91.9%) than mammography (67.7%), and Roela et al. [11] likewise reported a higher AI specificity (96.6%) compared with mammography (84.89%). In the current study, AI alone and mammography alone had the same sensitivity of 91.7%. This contrasts with Badawy et al. [7], who found that AI was more sensitive than mammography at detecting cancerous breast tumors (93.64% vs. 86.36%), and with Rodríguez-Ruiz et al. [12], who found a higher sensitivity for AI (86% vs. 83% for mammography).

In this study, the combined use of mammography with AI had 100% sensitivity, improving the sensitivity of postoperative breast cancer detection from 91.7% with mammography alone to 100%. This matched the results of Pacilè et al. [13], Sasaki et al. [14], and Watanabe et al. [15].

Limitations

  • The current study’s relatively modest sample size is one of its shortcomings.

  • An additional drawback is that the AI system in use does not account for clinical variables such as family history or symptoms, which could limit a comprehensive analysis.

Conclusions

In conclusion, mammography is the main modality in breast imaging, including challenging postoperative cases. However, its sensitivity can be augmented by combining it with the AI algorithm applied to the mammogram; the combined approach had the highest sensitivity (100%), giving excellent results in ruling out and diagnosing malignancy following breast surgery and thus avoiding missed malignant cases. AI is a non-invasive additional tool whose use would enhance diagnostic confidence and decision-making in diagnostic mammography, especially for postoperative breasts.

Availability of data and materials

The data and materials are available from the corresponding author upon request.

Abbreviations

AI: Artificial intelligence
AI-CAD: Artificial intelligence-based computer-assisted diagnosis
AUC: Area under the curve
BCT: Breast-conserving therapy
BI-RADS: Breast Imaging Reporting and Data System
CC: Craniocaudal
CI: Confidence interval
MLO: Mediolateral oblique
NPV: Negative predictive value
POM: Probability of malignancy
PPV: Positive predictive value
ROC curve: Receiver operating characteristic curve
SD: Standard deviation
US: Ultrasound

References

1. Yoon JH, Kim EK, Kim GR, Han K, Moon HJ (2022) Mammographic surveillance after breast-conserving therapy: impact of digital breast tomosynthesis and artificial intelligence-based computer-aided detection. AJR Am J Roentgenol 218(1):42–51
2. Hosseini A, Khoury AL, Varghese F, Carter J, Wong JM, Mukhtar RA (2019) Changes in mammographic density following bariatric surgery. Surg Obes Relat Dis 15(6):964–968
3. Hu Q, Giger ML (2021) Clinical artificial intelligence applications: breast imaging. Radiol Clin 59(6):1027–1043
4. Lee SE, Han K, Yoon JH, Youk JH, Kim EK (2022) Depiction of breast cancers on digital mammograms by artificial intelligence-based computer-assisted diagnosis according to cancer characteristics. Eur Radiol 32(11):7400–7408
5. Kim HE, Kim HH, Han BK, Kim KH, Han K, Nam H, Lee EH, Kim EK (2020) Changes in cancer detection and false-positive recall in mammography using artificial intelligence: a retrospective, multireader study. Lancet Digit Health 2(3):e138–e148
6. Raafat M, Mansour S, Kamal R, Ali HW, Shibel PE, Marey A, Taha SN, AlKalaawy B (2022) Does artificial intelligence aid in the detection of different types of breast cancer? Egypt J Radiol Nucl Med 53:182
7. Badawy E, ElNaggar R, Soliman SAM, Elmesidy DS (2023) Performance of AI-aided mammography in breast cancer diagnosis: does breast density matter? Egypt J Radiol Nucl Med 54:178
8. Mansour S, Kamal R, Hashem L, AlKalaawy B (2021) Can artificial intelligence replace ultrasound as a complementary tool to mammogram for the diagnosis of the breast cancer? Br J Radiol 94(1128):20210820
9. Berg WA, Gur D, Bandos AI, Nair B, Gizienski TA, Tyma CS, Hakim CM (2021) Impact of original and artificially improved artificial intelligence-based computer-aided diagnosis on breast US interpretation. J Breast Imaging 3(3):301–311
10. Aljondi R, Alghamdi SS, Tajaldeen A, Alassiri S, Alkinani MH, Bertinotti T (2023) Application of artificial intelligence in the mammographic detection of breast cancer in Saudi Arabian women. Appl Sci 13(21):12087
11. Roela RA, Valenta GV, Shimizu C, Lopez RVM, Tucunduva TM, Folgueira GK (2021) Deep learning algorithm performance in mammography screening: a systematic review and meta-analysis. JCO 39:e1355
12. Rodríguez-Ruiz A, Krupinski E, Mordang JJ, Schilling K, Heywang-Köbrunner SH, Sechopoulos I, Mann RM (2019) Detection of breast cancer with mammography: effect of an artificial intelligence support system. Radiology 290(2):305–314
13. Pacilè S, Lopez J, Chone P, Bertinotti T, Grouin JM, Fillard P (2020) Improving breast cancer detection accuracy of mammography with the concurrent use of an artificial intelligence tool. Radiol Artif Intell 2(6):e190208
14. Sasaki M, Tozaki M, Rodríguez-Ruiz A, Yotsumoto D, Ichiki Y, Terawaki A, Oosako S, Sagara Y, Sagara Y (2020) Artificial intelligence for breast cancer detection in mammography: experience of use of the ScreenPoint Medical Transpara system in 310 Japanese women. Breast Cancer 27(4):642–651
15. Watanabe AT, Lim V, Vu HX, Chim R, Weise E, Liu J, Bradley WG, Comstock CE (2019) Improved cancer detection using artificial intelligence: a retrospective evaluation of missed cancers on mammography. J Digit Imaging 32(4):625–637


Acknowledgements

We would like to acknowledge Prof. Dr. Sahar Mansour, our mentor in AI, who has always supported the research work at our unit in the Radiology Department, Cairo University.

Funding

No source of funding.

Author information


Contributions

EM is the guarantor of the integrity of the entire study. SL and EM contributed to the study concepts and design. AS, EM, and SL contributed to the literature research. AS and EM contributed to the clinical studies. All authors contributed to the experimental studies/data analysis. AS, EM and OO contributed to the statistical analysis. EM contributed to the manuscript preparation. SL and OO contributed to the manuscript editing. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Menna Allah Gaber Eissa.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the ethics committee of the Radiology Department of Kasr Al-Ainy Hospital, Cairo University, an academic, government-supported, highly specialized multidisciplinary hospital. The included patients gave written informed consent.

Consent for publication

All patients included in this research were eligible and above 16 years of age. They gave written informed consent to publish the data contained within this study.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Eissa, M.A.G., Al-Tohamy, S.F., Omar, O.S. et al. Post-operative breast imaging: a management dilemma. Can mammographic artificial intelligence help?. Egypt J Radiol Nucl Med 55, 197 (2024). https://doi.org/10.1186/s43055-024-01363-3
