A bone fracture detection project usually starts from a real clinical or operational problem that exists today. For example, a hospital may want to reduce its rate of missed fractures on X-rays, or to improve the consistency of interpretation between readers.
Hospitals also often want to support clinical staff who face pressure to work quickly while completing diagnostic imaging, or they may turn to fracture detection technology because demand for imaging keeps growing while the number of staff available to perform and interpret these exams continues to decrease.
Current projections of the U.S. radiologist workforce and imaging utilization anticipate continued growth between 2025 and 2055, with sustained upward pressure on the demand for imaging services and the technology needed to deliver them.¹ ²
Because of this, a project for detecting bone fractures based on X-ray images must include a structured process from start to finish.
Before scheduling demos, acting on broad claims from vendor marketing about AI, or diving into the clinical evidence, hospitals should begin their projects by:
- defining the scope of the clinical use case or project;
- obtaining good-quality evidence;
- assessing the fit of the technology with the existing workflow;
- reviewing the regulatory status of the technology;
- assessing the local value of the technology for the hospital.
All of these should be addressed before evaluating multiple product offerings or an individual product such as AZtrauma.
This blog explores how to begin planning a project for detecting bone fractures from X-ray imaging: define a clear clinical use case during initial project development, then evaluate candidate solutions against it to assess their clinical relevance.
Why bone fracture detection projects are gaining hospital interest
Hospitals are implementing bone fracture detection projects to improve patient care.
They are considering various factors, including routine but critical imaging, the large volume of musculoskeletal X-rays ordered by emergency, outpatient, and hospital providers, and variation among the types of fractures seen.
Some fractures can be easily identified, while others, such as hairline fractures or occult fractures, may be difficult to detect, especially if clinicians are working quickly or are managing high-volume patient loads with varying degrees of experience.
These projects are also related to improving workflow processes for the radiology department.
In addition to evaluating whether a software program can detect fractures, hospitals will also evaluate whether the software supports radiologists when reviewing musculoskeletal X-rays.
A hospital may implement a bone fracture detection project when its radiology department wants a second review for interpretation, clearer priorities for review, or improved consistency in the way musculoskeletal X-rays are assessed.
The key question is not whether an algorithm can detect fractures, but how the software fits into radiologists' current workflow.
Step 1: Define the clinical scope of your bone fracture detection project
It’s surprising how many teams ask for bone fracture detection AI tools without defining where they will be applied, which X-rays they will use, or what problem the software is meant to solve.
Teams should begin with three key questions:
- Which exam types will be prioritized first?
- Which care settings are most important?
- What outcome does the team want to improve?
Some projects will be focused on emergency department extremity X-rays; others will focus on high-volume outpatient musculoskeletal studies.
A narrower pilot may begin with any combination of wrist, ankle, foot, or shoulder exams.
The answers to these questions depend largely on where your project is being implemented and on the local fracture burden, clinical risk, and workflow pressure.
In addition, teams should define the role of the AI tool within their bone fracture detection project.
A project may be designed to assist an interpreting physician in reviewing an exam, assist in prioritizing suspicious exams, or reduce false negatives within a study.
Even though these goals are related, they are clearly different from one another.
When the role of a tool is vague, project evaluation will also tend to be vague. When the role of a bone fracture detection project has been clearly defined, it becomes possible to create measurable goals.
These criteria may include improved sensitivity, increased reader confidence, decreased variation in reporting between shifts, or a quicker turnaround time for evaluating selected groups of exams.
The goal is not to force every institution to fit within one framework; it is to develop benchmarks that will allow measurement of success before implementation of software within the institution’s workflow.
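To make the idea of a clearly defined scope concrete, the questions and success criteria above could be captured as a simple, reviewable project definition. The sketch below is purely illustrative; every exam type, threshold, and field name is a hypothetical example, not a recommendation:

```python
# Hypothetical scope definition for a fracture detection pilot.
# All values are illustrative examples, not clinical recommendations.
project_scope = {
    "exam_types": ["wrist", "ankle"],           # narrow pilot first
    "care_settings": ["emergency_department"],  # where workflow pressure is highest
    "ai_role": "second_reader",                 # assist, prioritize, or reduce false negatives
    "success_criteria": {
        "min_sensitivity": 0.95,                # target, to be confirmed by local validation
        "max_turnaround_minutes": 30,           # for flagged exams
    },
}

def scope_is_complete(scope: dict) -> bool:
    """Check that every required scope question has a non-empty answer."""
    required = ["exam_types", "care_settings", "ai_role", "success_criteria"]
    return all(scope.get(key) for key in required)

print(scope_is_complete(project_scope))  # True: this scope answers all four questions
```

Writing the scope down in a structured form like this makes it easy to spot which of the three key questions a team has left unanswered before evaluation begins.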
Step 2: Review the evidence behind the bone fracture detection project
Reviewing the research supporting a bone fracture detection initiative is the next stage in the process.
Typically, it is at this point that most teams make errors; specifically, they either accept vendor-provided evidence too quickly, or they reject any evidence that isn’t a perfectly designed randomized study.
Both reactions are flawed.
A more appropriate question would be: What type of evidence is strong enough to be used in this clinical application?
Generally, the strongest evidence will contain peer-reviewed validation, evidence from multiple centers or geographic locations, and studies that include performance metrics that can be interpreted in a relevant context.
In the case of detecting fractures, common performance metrics include area under the curve, sensitivity, and specificity.
While these metrics certainly matter, they should only be considered if the team is also aware of the population included in the study, the case mix, and the reference standard.
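For reference, sensitivity and specificity are simple ratios over the confusion matrix. The minimal sketch below, using made-up validation counts, shows how they are derived and why the case mix (the ratio of fracture to non-fracture exams) shapes how the numbers should be read:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true fractures the tool flags (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of fracture-free exams the tool correctly clears."""
    return tn / (tn + fp)

# Hypothetical counts from a validation set: 200 fractures, 800 normals.
tp, fn = 190, 10   # fractures found / fractures missed
tn, fp = 720, 80   # normals cleared / normals falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.3f}")  # 0.950
print(f"specificity = {specificity(tn, fp):.3f}")  # 0.900
```

The same sensitivity figure means something very different in a population with mostly subtle occult fractures than in one dominated by obvious displaced fractures, which is why the study population and reference standard matter as much as the headline numbers.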
One excellent example of a multicenter, multi-geographic evidence source is a recent study, “Performance of a complete AI radiographic suite across 258,373 X-rays from 26 countries: A worldwide evaluation,” in which the authors evaluated radiographs from 26 countries across 5 continents.
The use of the AZtrauma algorithm demonstrated an area under the curve of 98.3%, sensitivity of 97.4%, and specificity of 96.4%.³
These results are relevant to a bone fracture detection project because of both the sample size and the geographic breadth of the study.
Regulatory documentation can also be used as part of the existing evidence review and can help with assessing the relevance of published study metrics.
For example, the FDA 510(k) summary for Rayvolve® states that the Rayvolve® device is intended to assist radiologists and emergency physicians in the detection of fractures while reviewing musculoskeletal radiographs in patients 2 years old or older.
Furthermore, the summary indicates that the Rayvolve® software is intended to provide an aided diagnosis, but it is not intended to replace the physician’s review of the image.⁴
For hospitals, this information helps clarify the intended use of the system and shapes what physicians should expect from it.
Even with this information in hand, a hospital should not treat published results as the end of the evaluation process.
In the case of a bone fracture detection project, the questions that should be considered include:
- Are the X-ray series used in the studies similar to those read by physicians at your facility?
- Were the results consistent across the age groups of patients examined, and were the findings consistent for the various examination types?
- Did the evaluations reflect actual reading environments?
Excellent procurement is not only about identifying valid evidence; it is also about being able to interpret that evidence appropriately.
Step 3: Check workflow fit in the bone fracture detection project
In a bone fracture detection project, success or failure often depends more on workflow than on headline metrics.
A very good algorithm can easily become a workflow burden if it adds extra clicks, inefficient interfaces, or a separate reading pathway that a clinician will not trust.
This is why workflow fit is one of the primary tests within a bone fracture detection project.
Teams need to ask questions such as: Where does the AI run, how are images routed, how do the outputs come back, and what does the radiologist actually see?
If the answers to those questions are not clear, then the evaluation will be incomplete.
Standards also play an important role in this process.
The DICOM standard provides the basic framework for communicating and managing information and data related to medical imaging.⁵
In practical terms, therefore, the project should examine how studies move through the imaging environment and whether the AI will behave like a usable part of that environment rather than a disruptive add-on.
According to the FDA summary of the Rayvolve® software, it was designed to work with the most current edition of the DICOM image standard.⁴
A bone fracture detection project still needs to evaluate how the tool interfaces with PACS, how its annotations or outputs will be presented to the user, and whether the original dataset will remain intact.
Workflow questions to ask during the bone fracture detection project
For instance, can X-ray images be routed to the AI for analysis automatically?
If so, can the results of the AI analysis be delivered to the PACS viewer where clinicians currently review images?
Also, can radiologists utilize the AI output without disrupting their typical workflow patterns?
Lastly, are there defined methods for dealing with AI service unavailability?
These questions are critical not only to the technical success of the AI implementation but also to the long-term viability of the fracture detection tool within everyday clinical environments.
A new technology can perform well in validation, but it can be rendered ineffective if it interferes with the way radiologists perform their work.
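As a purely illustrative sketch, the routing questions above can be expressed as explicit rules. There is no real DICOM integration here; the field names loosely mirror DICOM attributes (Modality, BodyPartExamined), but the rules themselves are invented for this example:

```python
# Hypothetical routing rule: decide whether an incoming study is sent to
# the fracture detection AI before reaching the PACS viewer. Field names
# loosely mirror DICOM attributes; the logic is illustrative only.
AI_BODY_PARTS = {"WRIST", "ANKLE", "FOOT", "SHOULDER"}

def route_to_ai(study: dict, ai_available: bool) -> str:
    """Return the study's path: 'ai_then_pacs' or 'pacs_only'."""
    if not ai_available:
        return "pacs_only"      # defined fallback when the AI service is down
    if study.get("Modality") != "DX":
        return "pacs_only"      # only plain radiographs are in scope
    if study.get("BodyPartExamined", "").upper() not in AI_BODY_PARTS:
        return "pacs_only"
    return "ai_then_pacs"       # AI results delivered to the same PACS viewer

study = {"Modality": "DX", "BodyPartExamined": "WRIST"}
print(route_to_ai(study, ai_available=True))   # ai_then_pacs
print(route_to_ai(study, ai_available=False))  # pacs_only
```

The point of writing the rules down is that every branch answers one of the workflow questions: what is in scope, where results land, and what happens when the service is unavailable.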
Step 4: Review regulatory status in the bone fracture detection project
Reviewing regulatory status is a necessary, but not sufficient, part of any bone fracture detection project.
Hospitals must also verify that the intended use of the tool matches the actual clinical plan and that the tool is cleared or marked for the markets in which it will be used.
According to the 510(k) record, the FDA cleared Rayvolve® under K240845 on July 17, 2024, with the classification “Radiological computer-assisted detection and diagnosis of fractures.”⁴
The FDA's summary also indicates that the software assists radiologists and emergency physicians while reviewing musculoskeletal radiographs; it is not intended for autonomous use.
This language is important because a bone fracture detection project cannot make claims beyond what the software is cleared to support.
The next step in the process is to obtain the intended use statement, the applicable regulatory documents, and the technical documentation for local review.
This information helps ensure that the clinical deployment being investigated is consistent with what the device has been cleared or marked to support.
Some teams think that once a device has regulatory clearance, the evaluation is complete.
This is incorrect; these projects need local governance and technical review, as well as an understanding of what the tool can change and what it cannot change in the existing workflow.
Step 5: Plan local validation in the bone fracture detection project
A bone fracture detection project should not jump from contract signature straight to clinical acceptance.
The more appropriate path would be for each local facility to validate the technology before it is deployed in their hospital.
Local validation is important due to variation in imaging environments.
Variation exists due to differences in scanner types, routing logic, caseloads, reporting culture, and staffing patterns.
A hospital may read the same types of X-rays reviewed in published studies, yet its real-world results may still differ.
Typically, an adequate bone fracture detection project will begin with technical scoping and retrospective validation.
This allows the hospital to verify whether the AI performs as expected with its data/infrastructure, while also providing radiologists and IT staff with the opportunity to review output quality, display issues, and edge cases before the project goes live.
A project that has not undergone local validation will be more likely to face resistance due to the belief among clinicians that the tool was introduced to their workflow without having demonstrated its value.
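A retrospective validation can be summarized with a small tally that compares AI flags against the local reference reads, broken down by exam type so that subgroup consistency is visible. The structure and data below are invented for illustration:

```python
from collections import defaultdict

def per_exam_sensitivity(cases: list[dict]) -> dict:
    """Per-exam-type sensitivity from retrospective cases.

    Each case dict has: exam_type, ai_flagged (bool), and
    fracture_present (bool, per the local reference read).
    This structure is hypothetical, for illustration only.
    """
    found = defaultdict(int)
    total = defaultdict(int)
    for c in cases:
        if c["fracture_present"]:                 # only true fractures count
            total[c["exam_type"]] += 1
            if c["ai_flagged"]:
                found[c["exam_type"]] += 1
    return {t: found[t] / total[t] for t in total}

cases = [
    {"exam_type": "wrist", "ai_flagged": True,  "fracture_present": True},
    {"exam_type": "wrist", "ai_flagged": True,  "fracture_present": True},
    {"exam_type": "ankle", "ai_flagged": False, "fracture_present": True},
    {"exam_type": "ankle", "ai_flagged": True,  "fracture_present": True},
]
print(per_exam_sensitivity(cases))  # {'wrist': 1.0, 'ankle': 0.5}
```

A breakdown like this surfaces exactly the subgroup question raised earlier: a tool that performs well overall may still underperform on a specific exam type that matters locally.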
Step 6: Set realistic go-live expectations for the bone fracture detection project
When rolling out a bone fracture detection tool, make sure your go-live expectations are realistic and grounded in operations rather than ceremony. Define which teams will see the outputs and under what circumstances, and define how feedback will be collected after go-live. Put mechanisms in place to gather feedback from clinical users so that timely improvements to the fracture detection tool can follow.
Escalation pathways should be included as part of your go-live plan. Key questions include:
- What procedures are followed if there is a routing issue?
- How are uptime metrics captured?
- Who captures user feedback?
- Which cases will be reviewed to monitor whether the tool actually provides a benefit?
Having good answers to these questions before the first live study is processed will enhance your credibility as a project team.
Avoid overstating the benefits derived from using the bone fracture detection tool in the first few weeks after going live.
Routine use may begin early and give clinicians greater confidence and consistency in daily practice, but sustainable adoption rests on physicians' trust in the tool's usability and on workflow value demonstrated through use.
That trust, usability, and demonstrated workflow value are generally built through disciplined implementation and ongoing engagement.
Step 7: Monitor the bone fracture detection project after deployment
Post-go-live monitoring is one of the most neglected components of a fracture detection project; it shouldn't be.
Going live with your software is not the end of your evaluation. It is the beginning of the next phase.
You should have a small number of relevant performance metrics to monitor after you deploy a fracture detection project.
That may include adoption behavior, uptime, routing performance, user feedback, and fracture review outcomes for the selected cases.
While the specific metrics for each hospital will be different, the general principle remains the same: measure what is clinically and operationally relevant to the reason you initiated the project in the first place.
Monitoring should also surface where the tool creates friction, uncertainty, or alert fatigue among clinicians.
Monitoring also helps teams determine whether they should expand, revise, or suspend a fracture detection project.
A fracture detection project should have the opportunity to prove itself and should never be immune from scrutiny due to financial investment or time already spent.
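The monitoring metrics named above can be computed from routine operational logs. A minimal sketch, with invented monthly figures and hypothetical metric definitions:

```python
def uptime_pct(minutes_up: int, minutes_total: int) -> float:
    """Service availability over the monitoring window, as a percentage."""
    return 100.0 * minutes_up / minutes_total

def adoption_rate(exams_with_ai_viewed: int, eligible_exams: int) -> float:
    """Share of eligible exams where readers actually opened the AI output."""
    return exams_with_ai_viewed / eligible_exams

# Hypothetical figures for one 30-day month (43,200 minutes).
print(f"uptime:   {uptime_pct(43_050, 43_200):.2f}%")
print(f"adoption: {adoption_rate(1_480, 2_000):.1%}")
```

The specific thresholds that count as acceptable are a local governance decision; the value of computing these numbers at all is that they tie the monitoring phase back to the reasons the project was started.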
How AZtrauma fits into a bone fracture detection project
As teams consider the use of AZtrauma, the same structure can be used to determine whether AZtrauma meets the needs of a bone fracture detection project.
The clinical scope, evidence, workflow fit, regulatory status, local validation, and post-go-live monitoring will still be assessed.
AZtrauma is relevant because its intended use supports the detection of fractures on musculoskeletal radiographs.⁴
Large-scale international performance data from the 2026 Radiography study may also assist in early-stage evaluations that could apply to a bone fracture detection project.³
These measures can be used as inputs for evaluation.
However, they should not be considered an alternative to local evaluation; rather, they can serve as a starting point for it.
Conclusion
It’s important for hospitals to define how they intend to use the tool on X-rays, collect evidence supporting the AI, assess how it fits within existing workflows, verify its regulatory status, validate it in the setting where it will be deployed, and monitor it continuously after deployment.
These steps should occur in that order, rather than relying on broad statements about AI in general.
When considering whether to purchase a bone fracture detection AI solution, the question is not which technical terminology describes the underlying technology, but whether the tool will provide clinically useful and operationally feasible support for reviewing X-ray images.
That is the standard that matters.
References
- Christensen EW, Drake AR, Parikh JR, Rubin EM, Rula EY. Projected US Radiologist Supply, 2025 to 2055. Journal of the American College of Radiology. 2025;22(5):680-688. Available at: https://www.jacr.org/article/S1546-1440(24)00909-8/fulltext.
- Christensen EW, Drake AR, Parikh JR, Rubin EM, Rula EY. Projected US Imaging Utilization, 2025 to 2055. Journal of the American College of Radiology. 2025;22(5):689-698. Available at: https://www.jacr.org/article/S1546-1440(24)00898-6/fulltext.
- Cohen E, Ouertani MS, Beaumel P, et al. Performance of a complete AI radiographic suite across 258,373 X-rays from 26 countries: A worldwide evaluation. Radiography. 2026. Available at: https://pubmed.ncbi.nlm.nih.gov/41762966/.
- U.S. Food and Drug Administration. Rayvolve, K240845, 510(k) Premarket Notification. Decision date: July 17, 2024. Available at: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K240845; summary PDF: https://www.accessdata.fda.gov/cdrh_docs/pdf24/K240845.pdf.
- DICOM Standards Committee. About DICOM: Overview. DICOM is the international standard for medical images and related information. Available at: https://www.dicomstandard.org/about.
Regulatory information
US - Medical device Class II according to 510(k) clearance. Rayvolve is a computer-assisted detection and diagnosis (CAD) software device intended to assist radiologists and emergency physicians in detecting fractures during the review of radiographs of the musculoskeletal system. Rayvolve is indicated for adult and pediatric populations (≥ 2 years).
EU - Rayvolve: Medical Device Class IIa in Europe (CE 2797) in compliance with the Medical Device Regulation (2017/745). Rayvolve is a computer-aided diagnosis tool, intended to help radiologists and emergency physicians to detect and localize abnormalities on standard X-rays.
Caution: The data mentioned are sourced from internal documents, internal studies, and literature reviews. This material is for distribution to healthcare professionals only and should not be relied upon by any other persons. Carefully read the instructions for use before use. Please refer to the Privacy policy on our website. For more information, please contact contact@azmed.co.
AZmed 10 rue d’Uzès, 75002 Paris - www.azmed.co - RCS Laval B 841 673 601
© 2026 AZmed – All rights reserved. MM-xx-xx



