Reliability of Hip Fracture Classification and Treatment Planning on Smartphones Versus PACS.
Abstract
OBJECTIVES: To compare the reliability of classifying and recommending treatment for femoral neck (FN) and intertrochanteric (IT) fractures when reviewing radiographs on smartphones versus Picture Archiving and Communication Systems (PACS).

METHODS:
DESIGN: Retrospective cross-sectional radiographic review.
SETTING: Private academic medical center.
PATIENT SELECTION CRITERIA: Consecutive patients ≥18 years with isolated unilateral FN or IT fractures (AO/OTA 31A or 31B) and complete radiographs (anteroposterior pelvis, anteroposterior and lateral hip, and anteroposterior and lateral femur) who presented from January 1 to December 31, 2021 were included.
OUTCOME MEASURES AND COMPARISONS: Three orthopaedic surgeons (one orthopaedic trauma attending and two adult reconstruction attendings) independently evaluated each set of radiographs on two separate occasions, first on their smartphone and then on PACS using their personal laptop. After reviewing the images, each surgeon completed a survey assessing fracture characteristics and treatment plans. Smartphone images were accessed through an embedded URL link within the survey, providing zooming functionality comparable to that of a standard text-message image. The primary outcome was intraobserver agreement (smartphone vs PACS) for fracture classification and treatment recommendations (general and specific), assessed with Cohen's kappa (k). Additional outcomes included interobserver agreement, subjectively rated image quality, and recommendation for advanced imaging.

RESULTS: Radiographs from 51 patients (mean age 81.1 years, range 61-99; 60.8% female) were reviewed, yielding 153 PACS and smartphone assessments. Intraobserver agreement was substantial for fracture classification (k = 0.72; 19.6% discordance), general treatment (k = 0.79; 13.1% discordance), and specific treatment (k = 0.76; 19.0% discordance). Image quality was rated poor/unacceptable in 5.9% of PACS and 7.2% of smartphone reviews (p = 0.8). Advanced imaging was recommended in 11.1% of PACS and 11.8% of smartphone reviews (p = 1.0). Interobserver agreement for fracture classification was moderate for both PACS (k = 0.58) and smartphone (k = 0.52). Interobserver agreement for general treatment was substantial (k = 0.70 PACS, 0.65 smartphone), while specific treatment agreement was slight for PACS (k = 0.20) and fair for smartphone (k = 0.25).

CONCLUSIONS: Compared with PACS, smartphone review of hip fracture radiographs demonstrated substantial intraobserver agreement for fracture classification and general treatment recommendations. Agreement decreased for specific treatment decisions, with a higher frequency of indeterminate classifications and treatment recommendations during smartphone review.

LEVEL OF EVIDENCE: Level III.
AI evidence extraction
Main findings
In 51 patients (153 total assessments), intraobserver agreement between smartphone and PACS review was substantial for fracture classification (k=0.72), general treatment (k=0.79), and specific treatment (k=0.76). Image quality rated poor/unacceptable did not differ between PACS (5.9%) and smartphone (7.2%; p=0.8), and advanced imaging recommendations were similar (11.1% vs 11.8%; p=1.0). Interobserver agreement was moderate for fracture classification (k=0.58 PACS; 0.52 smartphone), substantial for general treatment (k=0.70 PACS; 0.65 smartphone), and slight (PACS, k=0.20) to fair (smartphone, k=0.25) for specific treatment.
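The descriptors used above (slight, fair, moderate, substantial) come from the conventional Landis-Koch interpretation of Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch of the unweighted kappa computation and that mapping, using hypothetical ratings rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two sequences of categorical labels."""
    n = len(rater_a)
    # Observed agreement: fraction of cases where the two ratings match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

def landis_koch(kappa):
    """Descriptive agreement band for kappa (Landis & Koch, 1977)."""
    if kappa < 0:
        return "poor"
    for cutoff, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                          (0.80, "substantial"), (1.00, "almost perfect")]:
        if kappa <= cutoff:
            return label
    return "almost perfect"

# Hypothetical AO/OTA classifications of 8 radiographs, smartphone vs PACS.
smartphone = ["31A", "31A", "31B", "31B", "31A", "31B", "31A", "31B"]
pacs       = ["31A", "31A", "31B", "31A", "31A", "31B", "31A", "31B"]
k = cohens_kappa(smartphone, pacs)
print(f"kappa = {k:.2f} ({landis_koch(k)})")  # kappa = 0.75 (substantial)
```

A kappa of 0 means agreement no better than chance and 1 means perfect agreement, which is why the specific-treatment values (0.20-0.25) read as far weaker than the raw percent agreement might suggest.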
Outcomes measured
- Intraobserver agreement (smartphone vs PACS) for fracture classification
- Intraobserver agreement (smartphone vs PACS) for general treatment recommendations
- Intraobserver agreement (smartphone vs PACS) for specific treatment recommendations
- Interobserver agreement for fracture classification (PACS and smartphone)
- Interobserver agreement for general treatment recommendations (PACS and smartphone)
- Interobserver agreement for specific treatment recommendations (PACS and smartphone)
- Subjective image quality (PACS vs smartphone)
- Recommendation for advanced imaging (PACS vs smartphone)
Limitations
- Retrospective cross-sectional radiographic review design
- Small sample size (51 patients)
- Only three orthopaedic surgeons served as reviewers
- Single-center setting (private academic medical center)
- Smartphone review occurred before PACS review (fixed order), which may introduce order/learning effects
Raw extracted JSON
{
  "study_type": "cross_sectional",
  "exposure": {
    "band": null,
    "source": null,
    "frequency_mhz": null,
    "sar_wkg": null,
    "duration": null
  },
  "population": "Adults (≥18 years) with isolated unilateral femoral neck or intertrochanteric hip fractures presenting in 2021 at a private academic medical center",
  "sample_size": 51,
  "outcomes": [
    "Intraobserver agreement (smartphone vs PACS) for fracture classification",
    "Intraobserver agreement (smartphone vs PACS) for general treatment recommendations",
    "Intraobserver agreement (smartphone vs PACS) for specific treatment recommendations",
    "Interobserver agreement for fracture classification (PACS and smartphone)",
    "Interobserver agreement for general treatment recommendations (PACS and smartphone)",
    "Interobserver agreement for specific treatment recommendations (PACS and smartphone)",
    "Subjective image quality (PACS vs smartphone)",
    "Recommendation for advanced imaging (PACS vs smartphone)"
  ],
  "main_findings": "In 51 patients (153 total assessments), intraobserver agreement between smartphone and PACS review was substantial for fracture classification (k=0.72), general treatment (k=0.79), and specific treatment (k=0.76). Image quality rated poor/unacceptable did not differ between PACS (5.9%) and smartphone (7.2%; p=0.8), and advanced imaging recommendations were similar (11.1% vs 11.8%; p=1.0). Interobserver agreement was moderate for fracture classification (k=0.58 PACS; 0.52 smartphone), substantial for general treatment (k=0.70 PACS; 0.65 smartphone), and low for specific treatment (k=0.20 PACS; 0.25 smartphone).",
  "effect_direction": "unclear",
  "limitations": [
    "Retrospective cross-sectional radiographic review design",
    "Small sample size (51 patients)",
    "Only three orthopaedic surgeons served as reviewers",
    "Single-center setting (private academic medical center)",
    "Smartphone review occurred before PACS review (fixed order), which may introduce order/learning effects"
  ],
  "evidence_strength": "low",
  "confidence": 0.74,
  "peer_reviewed_likely": "yes",
  "keywords": [
    "hip fracture",
    "femoral neck fracture",
    "intertrochanteric fracture",
    "radiographs",
    "smartphone",
    "PACS",
    "reliability",
    "intraobserver agreement",
    "interobserver agreement",
    "Cohen's kappa",
    "treatment planning"
  ],
  "suggested_hubs": []
}
AI can be wrong. Always verify against the paper.