Accurately identifying bone fractures in X-ray images is essential for prompt and appropriate medical treatment. This research explores the impact of hyperparameters and data augmentation techniques on the performance of the You Only Look Once (YOLO) V10 architecture for bone fracture detection. Although YOLO architectures are widely used for object detection, recognizing bone fractures, which can appear as subtle and intricate patterns in X-ray images, requires rigorous model tuning. Before training, the images were augmented with unsharp masking and contrast-limited adaptive histogram equalization (CLAHE); the augmented images aid feature identification and improve the overall performance of the model. Extensive experiments were conducted to analyze the influence of hyperparameters such as the number of epochs and the learning rate, together with the effect of data augmentation on the input data. The results show that particular hyperparameter combinations, when paired with targeted augmentation strategies, improve the accuracy and precision of fracture detection. The proposed model achieved an accuracy of 0.964 when evaluated on the augmented data, with a classification precision of 0.98 on augmented images versus 0.95 on raw images. In comparison with other deep learning models, the empirical evaluation demonstrates the superior performance of YOLO V10 over conventional approaches for bone fracture detection.
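The augmentation pipeline named above (unsharp masking followed by contrast-limited adaptive histogram equalization) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names and parameter values are hypothetical, `gaussian_filter` comes from SciPy, and the simplified CLAHE equalizes each tile independently, omitting the bilinear interpolation between tile mappings that a full CLAHE (e.g. OpenCV's `cv2.createCLAHE`) performs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=1.0, amount=1.5):
    """Sharpen a grayscale uint8 image by adding back the high-frequency
    residual (original minus Gaussian-blurred copy)."""
    blurred = gaussian_filter(img.astype(np.float64), sigma)
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

def clahe_simplified(img, clip_frac=0.03, tiles=8):
    """Simplified CLAHE: per-tile histogram equalization with clipping.
    Real CLAHE additionally interpolates between neighboring tile
    mappings to avoid visible tile boundaries."""
    h, w = img.shape
    out = np.empty_like(img)
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            r0, c0 = i * th, j * tw
            r1 = h if i == tiles - 1 else r0 + th
            c1 = w if j == tiles - 1 else c0 + tw
            tile = img[r0:r1, c0:c1]
            hist = np.bincount(tile.ravel(), minlength=256).astype(np.float64)
            # Clip the histogram and redistribute the excess uniformly,
            # which limits local contrast amplification (and noise).
            limit = clip_frac * tile.size
            excess = np.clip(hist - limit, 0, None).sum()
            hist = np.minimum(hist, limit) + excess / 256.0
            cdf = hist.cumsum()
            cdf = 255.0 * cdf / cdf[-1]
            out[r0:r1, c0:c1] = cdf[tile].astype(np.uint8)
    return out

def preprocess(img):
    """Sharpen, then locally equalize contrast, as described in the text."""
    return clahe_simplified(unsharp_mask(img))
```

In this sketch, sharpening is applied before equalization so that the edge detail emphasized by the unsharp mask is also contrast-stretched; the order and the parameter defaults are illustrative choices, not values reported in the study.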