More than 340 radiology artificial intelligence (AI) tools have received U.S. regulatory authorization, and adoption is increasing across radiology departments.¹ Fracture detection sits at the front of this adoption curve in emergency radiology because missed fractures represent one of the most common sources of diagnostic error in urgent imaging, and they are frequently cited in medicolegal claims against radiologists who interpret these studies.²
Peer-reviewed meta-analytic evidence has demonstrated that fracture detection algorithms can reach diagnostic performance non-inferior to that of clinicians, and clinical studies show that AI assistance can improve reader sensitivity during interpretation.³
What a vendor demonstrates is the algorithm; what creates clinical value in production is the pipeline built around it. An X-ray image acquired at the modality passes through multiple discrete workflow steps before its findings reach the reporting workstation.
Each of these steps has specific implications for radiology AI data privacy, PACS integration, and de-identification workflows. Radiologists, radiographers, and medical imaging professionals evaluating these tools need to understand this path so their organizations can distinguish a product worth piloting from one worth declining, and ultimately establish medical imaging AI privacy as something auditable rather than something asserted.⁶
Stage 1: The image leaves the modality
Once a radiograph is created at the modality, it is transmitted immediately to the Picture Archiving and Communication System (PACS) as a DICOM object.⁵ In a facility that utilizes a fracture detection tool, a DICOM router can send a duplicate copy of the study to an AI node via DICOM C-STORE based on routing rules established at the PACS or through the DICOM router.
After the duplicate copy is routed to the AI node, the original study still follows its standard path into the PACS archive without modification. This serves as the foundation of AI radiology PACS integration, while also allowing inference to occur without disrupting the clinical worklist.
Why is parallel architecture important?
AI radiology PACS integration that operates alongside the clinical workflow, rather than inside it, means that the modality worklist is not blocked by the inference process, the original DICOM object is not modified in any way, and failure of the AI node does not delay reporting.
The Integrating the Healthcare Enterprise (IHE) AI Results profile and RSNA guidance on standards-based AI integration both endorse this type of architecture.⁷ Routing rules can be configured by modality, body region, and imaging protocol; therefore, a radiology department can route musculoskeletal radiographs to a fracture detection tool, while routing chest radiographs to a separate thoracic pathology model.
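The routing logic described above can be sketched in a few lines. This is a minimal illustration, not a production router: the rule structure and the AE titles (FRACTURE_AI_NODE, THORACIC_AI_NODE, PACS_ARCHIVE) are hypothetical placeholders, and the header fields mirror standard DICOM attributes (Modality, BodyPartExamined).

```python
# Hypothetical routing rules: which AI nodes receive a duplicate C-STORE,
# keyed on DICOM header attributes. AE titles here are placeholders.
ROUTING_RULES = [
    {"modality": "DX", "body_parts": {"HAND", "WRIST", "ANKLE", "KNEE"},
     "destination": "FRACTURE_AI_NODE"},
    {"modality": "DX", "body_parts": {"CHEST"},
     "destination": "THORACIC_AI_NODE"},
]

def route_study(header: dict) -> list[str]:
    """Return the destinations that should receive a copy of the study.

    The original study always continues to the PACS archive; routing only
    decides which AI nodes get an additional duplicate, so an AI outage
    never blocks the clinical path.
    """
    destinations = ["PACS_ARCHIVE"]  # original path, never blocked
    for rule in ROUTING_RULES:
        if (header.get("Modality") == rule["modality"]
                and header.get("BodyPartExamined") in rule["body_parts"]):
            destinations.append(rule["destination"])
    return destinations
```

The key design point matches the parallel architecture above: the clinical destination is unconditional, and AI destinations are additive.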
When radiology departments utilize a vendor-neutral archive AI integration pattern to orchestrate DICOM routing across multiple AI tools and to centralize audit logging at the routing layer, they can avoid vendor lock-in and scale from one algorithm to many using a common architecture.
This approach to AI radiology PACS integration is becoming the operational baseline, and it serves as a foundation for much of the operational side of radiology AI cybersecurity.
Stage 2: De-identification before inference
Studies sent to an AI model must be stripped of identifiers the model does not need. This de-identification process is the central stage of the privacy engineering behind radiology AI data privacy, and it is where the trust of deploying institutions is earned or lost.
There is a considerable amount of confusion around the terms "anonymization" versus "de-identification." Anonymization permanently removes all protected health information (PHI) from a dataset and severs the link back to the patient.
De-identification in radiology AI also removes PHI, but preserves a pseudonymous key that allows the originating institution to re-link the study if clinically necessary.⁸
Radiology AI vendors performing PHI removal can choose between 2 HIPAA-recognized methods: Safe Harbor de-identification or Expert Determination.
Safe Harbor requires the removal of 18 specified identifiers before the study can be considered de-identified under HIPAA. Expert Determination requires a qualified statistician to determine whether the risk of re-identification is very small.⁹
For radiology AI HIPAA compliance, Safe Harbor provides a more rigid but readily defensible path, while Expert Determination allows richer data retention, provided the statistician's documentation is maintained and available.
De-identification radiology AI implementations are considerably more complex than they first appear. There are 3 basic locations for identifiers.
The first is the DICOM metadata header, which contains patient name, patient ID, accession number, institution name, referring physician, and device serial numbers as structured fields.¹⁰
The second is the pixel data itself, where text is sometimes burned into the image by the modality, including patient overlays, laterality markers, and technician annotations. Burned-in text redaction requires optical character recognition combined with pixel-level de-identification, and does not involve simply scrubbing metadata.¹¹
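The metadata side of this process can be sketched with standard-library tools. This is an illustrative fragment only: it covers header fields, not burned-in pixel text or facial anatomy, and a production pipeline would follow the full HIPAA Safe Harbor list and DICOM PS3.15 de-identification profiles. The field names mirror standard DICOM attributes; the keyed-hash pseudonym scheme is an assumption for illustration.

```python
import hashlib
import hmac

# A subset of identifying header fields; the full Safe Harbor list and
# DICOM PS3.15 confidentiality profiles cover many more.
IDENTIFYING_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate", "AccessionNumber",
    "InstitutionName", "ReferringPhysicianName", "DeviceSerialNumber",
}

def deidentify_header(header: dict, site_secret: bytes) -> dict:
    """Strip identifying fields but keep a pseudonymous key.

    The keyed hash lets the originating institution re-link the study if
    clinically necessary, while a recipient without the site secret cannot
    reverse it -- de-identification rather than anonymization.
    """
    clean = {k: v for k, v in header.items() if k not in IDENTIFYING_TAGS}
    pseudo = hmac.new(site_secret,
                      header.get("PatientID", "").encode(),
                      hashlib.sha256).hexdigest()[:16]
    clean["PatientID"] = pseudo  # pseudonymous key, re-linkable on site only
    return clean
```

Because the same secret always maps the same patient to the same pseudonym, longitudinal linkage is preserved inside the institution without exposing the real identifier downstream.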
Finally, there is anatomy; as demonstrated by a Mayo Clinic study, standard facial recognition software correctly identified 85% of research volunteers from reconstructed MRI scans, which resulted in the adoption of defacing algorithms for 3-dimensional head and neck imaging that is shared for research or commercial training.¹²
An effective DICOM de-identification AI pipeline will perform de-identification before any image leaves the hospital perimeter, or perform it in an encrypted, audited cloud region governed by a Business Associate Agreement (BAA) under HIPAA or a Data Processing Agreement (DPA) under Article 28 of the General Data Protection Regulation.¹³
A Business Associate Agreement that radiology AI vendors sign with their customers defines data use within the HIPAA framework, and includes permissible uses, minimum necessary standards, and breach notification timelines.
Federated learning approaches further strengthen radiology AI data privacy by permitting encrypted model updates to move between institutions while the raw imaging data remains within the originating institution.¹⁴
Stage 3: Where the inference actually happens
The different deployment methods of radiology AI systems, including on-premises inference, cloud inference, and hybrid architectures, each have unique effects on radiology AI data privacy, latency, and regulatory compliance.
With on-premises inference, the AI model runs on a server inside the hospital network, and no images leave the facility. This is preferred by academic medical centers that employ dedicated medical imaging informatics teams; it also allows those institutions to comply with data residency requirements in jurisdictions that restrict cross-border transfer of health information.
With cloud inference, a de-identified study is sent to a secure, encrypted endpoint, usually in a region-specific data center, via a connection that uses TLS 1.2 or higher encryption for data transmitted to and from the cloud environment.
All data stored in the cloud is also encrypted using AES-256 encryption, and access is restricted by role-based access controls and immutable audit logs.
Radiology AI deployed in the cloud can be designed to align with HIPAA Security Rule and GDPR accountability requirements when appropriate safeguards, contracts, access controls, audit logs, and retention policies are in place.¹⁵ ¹⁹ Modern radiology AI cybersecurity frameworks, including the NIST AI Risk Management Framework, also expect continuous monitoring, incident response procedures, and documented vulnerability management in addition to baseline encryption requirements.¹⁶
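The "immutable audit logs" mentioned above can be made concrete with hash chaining: each entry commits to the previous entry, so a silent edit or deletion breaks verification. This is a minimal stdlib sketch of the pattern, not any vendor's implementation; field names and actions are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_event(log: list, actor: str, action: str, study_uid: str) -> None:
    """Append a tamper-evident entry that commits to the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # role-based identity, not a shared account
        "action": action,        # e.g. "inference", "export", "delete"
        "study_uid": study_uid,
        "prev": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edit or reorder breaks the chain."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

A privacy officer can re-run verification at export time, which is what turns "we log everything" from an assertion into something auditable.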
Finally, hybrid architectures route sensitive or high-priority studies through on-premises systems, while routing less sensitive studies to the cloud in order to achieve scalability.
No one way of deploying radiology AI is more compliant than another; what is most important for determining radiology AI data privacy is whether the vendor has written documentation of the following:
- where data is processed;
- who will access the data;
- how long the data will be retained;
- how the data will be deleted;
- which regulatory regime governs the environment.
The U.S. Department of Health and Human Services has proposed updates to the HIPAA Security Rule for the first time in nearly 2 decades. These proposed updates raise expectations around audit logging, encryption, and AI-specific risk analysis.¹⁸
GDPR governs processing of health data across the European Economic Area, with enforceable rights of erasure and portability.¹⁹
Chile's Law 21.719, adopted in 2024 with a transition period running through 2026, imports several GDPR concepts into Latin American law. The law establishes a new agency, the Agencia de Protección de Datos Personales, with enforcement authority over personal data protection.²⁰
Brazil's Lei Geral de Proteção de Dados is already in force and contains comparable provisions.²¹
The Medicines and Healthcare products Regulatory Agency (MHRA) announced in 2025 that it intends to implement international reliance routes, including for Software as a Medical Device under defined conditions, to streamline access for devices already authorized by comparable regulators in countries such as the US, Canada, or Australia.²²
A patient data protection radiology AI posture that satisfies all of the compliance frameworks described above at the same time is rare. A European-headquartered vendor holding FDA 510(k) clearance, CE Class IIa marking under Medical Device Regulation 2017/745, and Medical Device Single Audit Program certification covering the United States, Brazil, Australia, and Canada operates under the strictest overlapping regulatory scrutiny currently available to a commercial radiology AI tool.⁴ ²³
AZmed, developer of the Rayvolve® AI Suite, holds this combined posture and operates natively under GDPR as a company headquartered in Paris.
Stage 4: How the result comes back
When an inference has been completed through an AI algorithm, the results must be returned to the radiologist who will write the report.
The way results are returned to the reporting system determines whether an AI-generated finding can be used, audited, and safely included in a signed report, and ultimately whether AI radiology PACS integration justifies itself or falls short.
The simplest means of providing the finding back to the radiologist is through a DICOM Secondary Capture. In this workflow, the AI generates a duplicate image of the original, which contains a bounding box or other annotation; this duplicate is stored as an additional series within the same DICOM study for archiving and retrieval purposes.
The advantage is that the annotation is visible in PACS alongside the original images without further integration work between the AI and the PACS. The limitation is that Secondary Capture is a static graphical representation: the bounding box cannot be scrolled, windowed, or parsed by the reporting system.²⁵
The next level of integration is through a DICOM Structured Report (SR). This is a coded, machine-readable object that includes coded entries for anatomic location, measurements, and qualitative findings.
A DICOM Structured Report AI fracture workflow allows the reporting system to pull the SR and auto-populate relevant fields within a dictation template, which reduces transcription work and provides a pathway for downstream analytics and quality assurance of the generated data.²⁶
The most advanced integration uses HL7 ORU AI radiology integration patterns to deliver AI-generated findings directly into the radiology information system worklist and the electronic health record, either as an HL7 ORU^R01 observation result message or via Fast Healthcare Interoperability Resources (FHIR). This level of integration allows the worklist to place a suspected positive study at the top of the reading queue and enables direct pre-fill into a structured reporting template.²⁷
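For readers unfamiliar with HL7 v2, the shape of such a message can be sketched as below. Segment contents here are purely illustrative placeholders (observation identifiers, application names, the pseudonymous patient ID); a real interface follows the site's HL7 interface specification and conformance profile.

```python
def build_oru_r01(accession: str, finding: str, probability: float) -> str:
    """Assemble a minimal, illustrative HL7 v2 ORU^R01 message.

    MSH = message header, PID = patient, OBR = order, OBX = observations.
    Observation codes and application names are hypothetical examples.
    """
    segments = [
        "MSH|^~\\&|AI_NODE|RADIOLOGY|RIS|HOSPITAL|20250101120000||ORU^R01|MSG0001|P|2.5.1",
        "PID|1||PSEUDO12345||ANON^PATIENT",
        f"OBR|1|{accession}||XR^Radiograph",
        f"OBX|1|TX|FINDING^AI Finding||{finding}||||||F",
        f"OBX|2|NM|PROB^AI Probability||{probability:.2f}||||||F",
    ]
    return "\r".join(segments)  # HL7 v2 segments are carriage-return delimited
```

Because the finding arrives as discrete OBX segments rather than pixels, the RIS can parse the probability, reprioritize the worklist, and pre-fill a report template.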
Rayvolve® returns the inference result to the PACS of the reporting radiologist as a DICOM Secondary Capture, appearing natively in the PACS environment, and allows for RIS-integrated structured report pre-fill into partner systems such as EDL and Evolucare.²⁸
For teleradiology platforms and third-party integrators requiring programmatic access to the underlying finding, a JSON application programming interface is available. The inference time is less than 60 seconds per study.²⁹
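A teleradiology integrator consuming such an API typically filters findings against an operating threshold before acting on them. The JSON schema below is hypothetical, invented for illustration only; actual field names and thresholds come from the vendor's API documentation.

```python
import json

# Hypothetical response payload; real schemas are vendor-defined.
SAMPLE_RESPONSE = """
{
  "study_uid": "1.2.840.10008.1.2.999",
  "findings": [
    {"type": "fracture", "region": "distal_radius", "probability": 0.91},
    {"type": "fracture", "region": "scaphoid", "probability": 0.12}
  ]
}
"""

def positive_findings(payload: str, threshold: float = 0.5) -> list:
    """Return the regions whose findings exceed the operating threshold."""
    data = json.loads(payload)
    return [f["region"] for f in data["findings"]
            if f["probability"] >= threshold]
```

Keeping the threshold in the integrator's code, rather than hard-coded upstream, lets each site tune sensitivity against its own case mix.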
Stage 5: Retention, deletion, and the audit trail
At this stage, evaluation shifts from the superficial capabilities of a radiology AI tool to its operational maturity.
The assessing radiologist or radiographer should request the vendor's written retention documentation and confirm it answers the following questions:³⁰
- After inference, how long will the studies be retained?
- Will the retention schedule allow for immediate deletion?
- Will the audit logs capture every access and every inference by all users?
- Are these audit logs exportable to the institution's privacy officer?
- Is there a documented pathway for honoring a patient's right to erasure under GDPR Article 17 or any similar legal provision?
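A written retention schedule only earns trust if it is executable. The sketch below turns a retention policy into a deletion check; the 30-day period is a hypothetical illustration, not a recommendation, and real schedules are set by contract and local law.

```python
from datetime import date, timedelta

RETENTION_DAYS = 30  # hypothetical contractual retention period

def studies_due_for_deletion(inference_dates: dict, today: date) -> list:
    """Return study UIDs whose post-inference retention window has elapsed.

    inference_dates maps study UID -> date the inference completed.
    """
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return sorted(uid for uid, d in inference_dates.items() if d <= cutoff)
```

Running such a check on a schedule, and logging each deletion to the audit trail, is what converts a retention policy from a declared commitment into a demonstrable one.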
These are not optional questions, as the HIPAA Security Rule proposed revision by the Office for Civil Rights in January 2025 raises expectations around audit logging, encryption, breach notification, and AI-specific risk analysis.
Under GDPR, there is an enforceable right of erasure for any tool that processes identifiable or pseudonymized health data. Under Chile's Law 21.719, a right to erasure exists, along with accountability obligations on covered entities to demonstrate, rather than simply declare, their compliance. Under Brazil's LGPD, comparable rights are enforced through the Autoridade Nacional de Proteção de Dados (ANPD).
Any vendor without a written retention schedule, exportable audit logs, a clear data flow diagram, and a documented deletion pathway is unlikely to survive a regulator's inspection. Any vendor that can satisfactorily answer all of the above demonstrates operational maturity that maps directly to clinical safety and genuine radiology AI HIPAA compliance, rather than merely declared compliance.
The evidence that the pipeline works at scale
There is a difference between architecture presented in a slide deck and architecture implemented in production.
Peer-reviewed research demonstrating that a radiology AI tool performs consistently across jurisdictions, scanner types, and patient populations provides the strongest signal that radiology AI data privacy, AI radiology PACS integration, and the full de-identification radiology AI pipeline around the algorithm actually function as specified.
The largest published peer-reviewed worldwide evaluation of this kind was conducted by Cohen et al. on the Rayvolve® AI Suite. Investigators analyzed 258,373 X-rays collected from 100 medical centers across 26 countries and 5 continents over a 3-year period, with every examination annotated through dual-reader consensus.³²
Conducting an evaluation of this size is only possible if the AI pipeline operates consistently across distinct privacy regimes, from HIPAA-governed United States sites to GDPR-governed European centers to LGPD-governed Brazilian institutions, while preserving clinical data integrity through the entire imaging workflow.
The second piece of evidence carries particular weight because it is an independent analysis: in October 2025, Luiken et al. at the Technical University of Munich published a prospective head-to-head comparison of 3 commercially available AI tools for fracture, dislocation, and effusion detection using real-world clinical data.³³
Their analysis was performed on 2,926 radiographs obtained from 1,037 adults across 22 anatomical regions at University Hospital Rechts der Isar, using radiologist reports and CT adjudication as the reference standard.
The results indicated that Rayvolve® achieved the highest area under the curve for all fractures at 84.88%, the highest sensitivity for all fractures at 79.48%, and the highest performance in dislocation detection.
Independent, peer-reviewed, prospective head-to-head studies comparing multiple AI tools are rare, and they carry far more weight than single-vendor validation studies when a department must choose between products.
Peer-reviewed published studies of AI tool performance are corroborated by real-world operational evidence. At SimonMed Imaging, a nationwide outpatient imaging network in the United States, a before-and-after operational review covering more than 330,000 examinations found mean turnaround time for fracture-positive cases fell from 48 hours without AI assistance to 8.3 hours when Rayvolve® was integrated into the radiologist workflow.³⁴
In the UK, the National Institute for Health and Care Excellence (NICE) completed an Early Value Assessment, now published as HTG739 after migration from HTE20, and estimated implementation costs at approximately £1 per scan. NICE also concluded that AI technologies such as AZtrauma may improve fracture detection on X-rays in urgent care when used with evidence generation.³⁵
The short version
A radiology AI tool built to meet the highest audit standards provides a greater level of auditability than does the older, paper-and-film era it replaces. There are now more logged events, encrypted data flows, and specific regulatory frameworks governing each stage of the radiology AI workflow, including DICOM routing, de-identification radiology AI processing, inference, result delivery, retention, and deletion.
Radiology AI data privacy is not a feature added on top of an algorithm. It is the architecture.
A reporting clinician evaluating a tool for use in their practice has the right to demand complete and unambiguous transparency at every stage of the tool's workflow. They should request:
- a data flow diagram illustrating how data is processed and where inference occurs;
- documentation of which regulatory regime governs each component of data processing;
- the written retention policy and deletion pathway;
- the ability to export audit logs;
- independent peer-reviewed validation, including head-to-head comparisons.
A tool that can produce all 5 is one where the pipeline matches the promise. A tool that cannot is one where the algorithm may be the smallest thing worth worrying about, and where radiology AI data privacy, AI radiology PACS integration, and de-identification radiology AI all collapse into claims rather than engineered reality.
References
1. U.S. Food and Drug Administration. Artificial Intelligence-Enabled Medical Devices. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices; Washington Post. AI is infiltrating medicine, but not enough doctors understand it. 2025. https://www.washingtonpost.com/health/2025/04/05/ai-machine-learning-radiology-software/
2. Whang JS, Baker SR, Patel R, Luk L, Castro A 3rd. The causes of medical malpractice suits against radiologists in the United States. Radiology. 2013;266(2):548–554. https://doi.org/10.1148/radiol.12110971
3. Kuo RYL, Harrison C, Curran TA, et al. Artificial intelligence in fracture detection: a systematic review and meta-analysis. Radiology. 2022;304(1):50–62. https://doi.org/10.1148/radiol.211785
4. U.S. Food and Drug Administration. 510(k) Premarket Notification K240845, Rayvolve®. Decision date: July 17, 2024. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K240845. 510(k) Summary: https://www.accessdata.fda.gov/cdrh_docs/pdf24/K240845.pdf
5. DICOM Standards Committee. DICOM PS3.1 Introduction and Overview. National Electrical Manufacturers Association. https://www.dicomstandard.org/current
6. Allen B, Dreyer K, Stibolt R Jr, et al. Evaluation and real-world performance monitoring of artificial intelligence models in clinical practice: try it, buy it, check it. J Am Coll Radiol. 2021;18(11):1489–1496. https://doi.org/10.1016/j.jacr.2021.08.022
7. Genereaux BW, O’Donnell K, Bialecki B, et al. Integrating and adopting AI in the radiology workflow: a primer for standards and Integrating the Healthcare Enterprise (IHE) profiles. Radiology. 2024;311(3):e232653. https://doi.org/10.1148/radiol.232653
8. American College of Radiology Data Sharing Workgroup. Data sharing of imaging in an evolving health care world: Report of the ACR Data Sharing Workgroup, Part 1. J Am Coll Radiol. 2021;18(12):1701–1710. https://doi.org/10.1016/j.jacr.2021.07.014
9. U.S. Department of Health and Human Services, Office for Civil Rights. Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the HIPAA Privacy Rule. https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html
10. Aryanto KYE, Oudkerk M, van Ooijen PMA. Free DICOM de-identification tools in clinical research: functioning and safety of patient privacy. Eur Radiol. 2015;25(12):3685–3695. https://doi.org/10.1007/s00330-015-3794-0
11. Rutherford M, Mun SK, Levine B, et al. A DICOM dataset for evaluation of medical image de-identification. Sci Data. 2021;8(1):183. https://doi.org/10.1038/s41597-021-00967-y
12. Schwarz CG, Kremers WK, Therneau TM, et al. Identification of anonymous MRI research participants with face-recognition software. N Engl J Med. 2019;381(17):1684–1686. https://doi.org/10.1056/NEJMc1908881
13. European Parliament and Council. Regulation (EU) 2016/679, General Data Protection Regulation, Article 28. Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2016/679/oj
14. Kaissis GA, Makowski MR, Rückert D, Braren RF. Secure, privacy-preserving and federated machine learning in medical imaging. Nat Mach Intell. 2020;2(6):305–311. https://doi.org/10.1038/s42256-020-0186-1
15. U.S. Department of Health and Human Services. HIPAA Security Rule, 45 CFR Part 164, Subpart C. https://www.hhs.gov/hipaa/for-professionals/security/index.html
16. National Institute of Standards and Technology. AI Risk Management Framework (AI RMF 1.0). 2023. https://www.nist.gov/itl/ai-risk-management-framework
17. Foley & Lardner LLP. HIPAA Compliance for AI in Digital Health: What Privacy Officers Need to Know. 2025. https://www.foley.com/insights/publications/2025/05/hipaa-compliance-ai-digital-health-privacy-officers-need-know/
18. U.S. Department of Health and Human Services, Office for Civil Rights. HIPAA Security Rule Notice of Proposed Rulemaking. Federal Register. January 6, 2025. https://www.federalregister.gov/documents/2025/01/06/2024-30983/
19. European Parliament and Council. Regulation (EU) 2016/679, General Data Protection Regulation, full text. https://eur-lex.europa.eu/eli/reg/2016/679/oj
20. Biblioteca del Congreso Nacional de Chile. Ley 21.719, Regula la protección y el tratamiento de los datos personales y crea la Agencia de Protección de Datos Personales. 2024. https://www.bcn.cl/leychile/navegar?idNorma=1208650
21. Presidência da República, Brasil. Lei Geral de Proteção de Dados Pessoais, Lei nº 13.709/2018. http://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/l13709.htm
22. Medicines and Healthcare products Regulatory Agency. International Recognition Procedure for Medical Devices. 2025. https://www.gov.uk/government/publications/international-recognition-procedure
23. International Medical Device Regulators Forum. Medical Device Single Audit Program (MDSAP). https://www.imdrf.org/consultations/medical-device-single-audit-program
24. AZmed. Rayvolve AI Suite, regulatory clearances and deployment overview. https://www.azmed.co/
25. Clunie DA. DICOM Structured Reporting and Secondary Capture. PixelMed Publishing. http://www.dclunie.com/pixelmed/DICOMSR.book.pdf
26. Kahn CE Jr, Langlotz CP, Burnside ES, et al. Toward best practices in radiology reporting. Radiology. 2009;252(3):852–856. https://doi.org/10.1148/radiol.2523081992
27. Health Level Seven International. HL7 Version 2.5.1 Observation Result Message, ORU^R01. https://www.hl7.org/implement/standards/
28. AZmed. RIS-integrated report pre-fill, Rayvolve AI Suite documentation. https://www.azmed.co/news-post/azmed-launches-ris-integrated-report-prefill-powered-by-the-rayvolve-ai-suite
29. AZmed. Rayvolve AI Suite technical specifications. https://www.azmed.co/our-products
30. European Parliament and Council. Regulation (EU) 2016/679, Article 17, right to erasure. https://gdpr-info.eu/art-17-gdpr/
31. U.S. Department of Health and Human Services, Office for Civil Rights. HIPAA Security Rule to Strengthen the Cybersecurity of Electronic Protected Health Information. Federal Register. January 6, 2025. https://www.federalregister.gov/documents/2025/01/06/2024-30983/
32. Cohen A, Ouertani MS, Beaumel P, et al. Performance of a complete AI radiographic suite across 258,373 X-rays from 26 countries: a worldwide evaluation. Radiography. 2026. https://doi.org/10.1016/j.radi.2026.103361
33. Luiken I, Lemke T, Komenda A, et al. Evaluation of commercial AI algorithms for the detection of fractures, effusions, and dislocations on real-world clinical data: a prospective registry study. Radiography. 2025;31(6):103189. https://doi.org/10.1016/j.radi.2025.103189
34. AZmed. Impact of AI on Fracture Detection in Radiology, SimonMed Imaging operational case study. https://www.azmed.co/news-post/impact-of-ai-on-fracture-detection-in-radiology
35. National Institute for Health and Care Excellence. Artificial intelligence technologies to help detect fractures on X-rays in urgent care: early value assessment, HTG739. 2025. https://www.nice.org.uk/guidance/htg739
Regulatory information
US - Medical device Class II according to 510(k) clearances. Rayvolve is a computer-assisted detection and diagnosis (CAD) software device to assist radiologists and emergency physicians in detecting fractures during the review of radiographs of the musculoskeletal system. Rayvolve is indicated for adult and pediatric populations (≥ 2 years).
Rayvolve PTX/PE is a radiological computer-assisted triage and notification software that analyzes chest X-ray images of patients 18 years of age or older for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax). Rayvolve LN is a computer-aided detection software device to assist radiologists in identifying and marking regions related to suspected pulmonary nodules from 6 to 30 mm in size in patients 18 years of age or older.
EU - Rayvolve: Medical Device Class IIa in Europe (CE 2797) in compliance with the Medical Device Regulation (2017/745). Rayvolve is a computer-aided diagnosis tool, intended to help radiologists and emergency physicians to detect and localize abnormalities on standard X-rays.
Caution: The data mentioned are sourced from internal documents, internal studies, and literature reviews. This material is for distribution to Health Care Professionals only and should not be relied upon by any other persons. Carefully read the instructions for use before use. Please refer to the Privacy Policy on our website. For more information, please contact contact@azmed.co.
AZmed 10 rue d’Uzès, 75002 Paris - www.azmed.co - RCS Laval B 841 673 601
© 2026 AZmed – All rights reserved. MM-26-20



