Output list
Journal article
Published 12/01/2026
Anesthesiology and perioperative science, 4, 1, 22
Purpose: In earlier work, surgical site infections developed in 2.0% of cases without Staphylococcus aureus transmission through anesthesia work areas, in 11% of cases with transmission of S. aureus susceptible to the prophylactic antibiotic, and in 18% of cases with transmission of antibiotic-resistant isolates. A randomized trial and an effectiveness study both found that anesthesiologists who used basic preventive measures (e.g., alcohol-releasing intravenous caps) and received feedback on colony-forming units per surface area sampled (CFU) achieved reduced S. aureus transmission and fewer postoperative healthcare-associated infections. We used prospectively collected data to evaluate whether CFU would be a reliable criterion for hospitals to assess anesthesiologists' contributions to postoperative infections.
Methods: During the summer of 2025, reservoirs (e.g., the anesthetist's hands at case start and end) were sampled during 81 cesarean delivery cases performed in the same operating room. At most 15 reservoirs were sampled per case.
Results: S. aureus was detected in 52/1016 reservoir samples, more often in samples with greater CFU (P = 0.0063). None of the 159/1016 samples with < 100 CFU had S. aureus. The ratio of each case's total CFU across all reservoirs to its total S. aureus isolates averaged 2.50 × 10⁹ CFU per S. aureus isolate (standard error 0.53 × 10⁹, N = 81 cases). CFU and S. aureus transmission were uncorrelated (all 15 reservoirs' unadjusted P ≥ 0.12, Holm-Bonferroni P > 0.99).
Conclusions: Even with substantive contamination (≥ 100 CFU), so few isolates are S. aureus that surrogate measures of insufficient disinfection (e.g., ATP bioluminescence) are inaccurate markers of both S. aureus isolation and transmission. The lack of association between contamination and transmission shows that feedback on CFU provides information on the effectiveness of disinfection, not on S. aureus transmission.
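The Holm-Bonferroni adjustment cited above can be sketched in a few lines. This is a generic step-down implementation applied to hypothetical P-values (the study's smallest unadjusted P across the 15 reservoirs was 0.12), not the authors' code.

```python
def holm_bonferroni(p_values):
    """Return Holm-Bonferroni step-down adjusted P-values, capped at 1.0."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Step-down multiplier (m - rank) for the rank-th smallest P-value;
        # enforce monotonicity so adjusted values never decrease.
        running_max = max(running_max, min(1.0, (m - rank) * p_values[i]))
        adjusted[i] = running_max
    return adjusted

# With the smallest of 15 unadjusted P-values equal to 0.12 (as reported),
# the adjusted value is min(1, 15 * 0.12) = 1.0, i.e., P > 0.99.
p = [0.12] + [0.50] * 14
print(holm_bonferroni(p)[0])  # 1.0
```

This explains why an unadjusted P of 0.12 becomes P > 0.99 after adjustment across 15 reservoirs.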
Journal article
Published 06/2026
Journal of clinical anesthesia, 112, 112203
The Maximum Surgical Blood Ordering Schedule (MSBOS) is a procedure-specific lookup table for preoperative blood product ordering based on historical institutional transfusion rates.
We developed an optimal MSBOS algorithm that minimizes the probability that an adult patient will need a red blood cell transfusion that exceeds the MSBOS reservation, constrained by a prespecified overall crossmatch-to-transfusion ratio (e.g., 1.50 or 2.00). We provide Microsoft Excel 365 and Stata implementations. We tested the mathematics using 192,822 surgical cases across 2430 procedure codes over 7.8 years at a teaching hospital.
The probability distributions of units of red blood cells transfused did not follow Poisson distributions (i.e., the suitability of MSBOS reservations cannot accurately be calculated from each patient's probability of transfusion or not). However, for 100% of procedures, there were monotonic decreases in the probabilities of extra units transfused (e.g., most patients receiving 0 units, some 1-2 units, and few 3-4 units). We used this general shape when calculating the optimal MSBOS. When this priority-based model was compared with an alternative policy based on the mean units transfused per patient, MSBOS reservations were identical for the vast majority of procedures. When differences occurred, the priority-based assignment usually recommended higher reservation levels than the alternative heuristic: 288 procedures higher and 14 lower, mostly by one unit. With the overall crossmatch-to-transfusion (CT) ratio preset at ≤ 2.00, the mean (standard deviation) of the ratio across the 135 categories, each with at least one crossmatched unit, was 1.787 (0.977). In other words, priority-based assignment results in large inequality of the crossmatch-to-transfusion ratio among procedures. The priority-based assignment maintained the ≤ 2.00 crossmatch-to-transfusion ratio while achieving a 37.9% decrease in the total hospital blood bank units (surgical and non-surgical) transfused but unreserved. Therefore, inventory par levels can be lower. This approach would have the greatest benefit for hospitals with blood banks that use electronic crossmatching, are remotely located from the operating room suites, and do not have red cell dispensing kiosks.
Our methodology provides an automated, optimal MSBOS for hospital blood bank inventory management that minimizes the probability that a patient will need a red blood cell transfusion that exceeds the reserved units, subject to the desired overall crossmatch to transfusion ratio. The optimal MSBOS can be used with our earlier methodology for automated decision-making for which patients have blood type and antibody screening.
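As a rough illustration of the priority-based idea, the toy greedy sketch below adds reserved units one at a time where cases most often exceed the current reservation, subject to the overall CT-ratio cap. This is a simplified assumption-laden sketch, not the published optimal algorithm or its Excel 365/Stata implementations; the procedure names and distributions are hypothetical.

```python
def priority_msbos(dist, n_cases, target_ct):
    """dist[p][k]: historical probability that a case of procedure p
    transfuses exactly k red cell units; n_cases[p]: annual case count.
    Greedily add reserved units where cases most often exceed the current
    reservation, subject to the overall crossmatch-to-transfusion ratio."""
    # Total units transfused per year is fixed by the historical data.
    transfused = sum(n_cases[p] * sum(k * pk for k, pk in enumerate(d))
                     for p, d in dist.items())
    reservation = {p: 0 for p in dist}
    crossmatched = 0.0
    while True:
        # Expected cases per year exceeding each procedure's reservation.
        def excess(p):
            return n_cases[p] * sum(dist[p][reservation[p] + 1:])
        best = max(dist, key=excess)
        if excess(best) <= 0:
            break  # no procedure's reservation is ever exceeded
        if crossmatched + n_cases[best] > target_ct * transfused:
            break  # one more reserved unit would violate the CT-ratio cap
        reservation[best] += 1
        crossmatched += n_cases[best]
    return reservation

# Hypothetical example: one transfusion-heavy and one rarely transfused code.
print(priority_msbos({"hip": [0.2, 0.5, 0.3], "minor": [0.99, 0.01]},
                     {"hip": 100, "minor": 100}, target_ct=2.0))
```

Note how the CT-ratio constraint stops reservations for the rarely transfused procedure, mirroring the article's point that the ratio becomes unequal across procedures.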
Journal article
Published 06/2026
Perioperative care and operating room management, 43, 100654
Sequencing the cases with the smallest variability in case duration first usually means performing the shortest cases first. In the absence of downstream constraints, such as full phase I post-anesthesia care unit (PACU) beds, such sequencing reduces both patients' average tardiness from scheduled start times and their waiting times. However, PACU beds are often at capacity. We reviewed studies on case sequencing for operating rooms and non-operating-room anesthetizing locations.
Searches were performed in Scopus, using keywords and citations reflecting the structure of the operating room management field. To find articles relevant to surgical case sequencing constrained by PACU bed availability, a multi-step search methodology was employed. Shortest-cases-first or least-variable-cases-first strategies were ruled out (or in) by reading and considering each article's mathematical model(s) and solution algorithms.
Twenty-six articles studied surgical case sequencing while incorporating downstream (e.g., PACU) constraints. No article reported conditions in which sequencing the least variable or shortest cases first achieved the best organizational performance. The three articles that made specific comparisons all found that the shortest-cases-first strategy performed relatively poorly. Managerial epidemiology studies from multiple hospitals showed that, in practice, current behavior was the unsynchronized sequencing of multiple surgeons' lists of cases. This behavior resulted in a random and thus uniform rate of admission into the PACU, achieving close-to-minimum peaks in bed and nursing demand and thus minimizing costs.
The results of this narrative review show that sequencing surgical cases so that the least variable and shortest cases are performed first in operating rooms each workday is counterproductive. Unless a facility uses one of the sophisticated mathematical methods, clinical directors are advised to change nothing and benefit from the resulting random sequencing.
Journal article
Published 06/2026
Perioperative care and operating room management, 43, 100649
Anesthesiologists are employees, and gender is a protected class. Therefore, we evaluated the effect of an anesthesiologist's gender on anesthesia residents' daily evaluations of the quality of their clinical supervision. Simultaneously, we evaluated the impact of American Board of Anesthesiology (ABA) certification on the overall quality of supervision, as evaluated by our anesthesia resident physicians.
Evaluations with the de Oliveira Filho et al. supervision scale spanned October 2024 through September 2025 at one residency program. Mixed-effects logistic regression was used to adjust evaluation scores, maximum or not, for raters' leniency/severity. Weighted linear regression was then performed, with the empirical Bayes predictive posterior mean estimate of each anesthesiologist's clinical supervision performance as the dependent variable and the inverse of the squared standard errors as the weights.
From 3690 evaluations of 132 ratee anesthesiologists by 45 rating resident physicians, neither gender nor ABA board certification was significantly associated with supervision scores. The women had an estimated odds ratio of 0.78 compared with the men (P = 0.70), with a 98.3% confidence interval of 0.47 to 1.29. Anesthesiologists without ABA certification had an estimated odds ratio of 0.69 (P = 0.15), with a 98.3% confidence interval of 0.42 to 1.10. There was no interaction (P = 0.99).
Two earlier studies from different departments, using different approaches, found no effect of gender on faculty anesthesiologists’ evaluations of anesthesia residents. Our results complement these findings by similarly finding no significant effect on the evaluations of the anesthesiologists by the residents.
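The second modeling step described above, weighted linear regression with inverse-squared-standard-error weights, can be sketched with simulated data. The covariate, sample values, and effect sizes below are hypothetical; this is not the study's code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 132                                      # number of ratee anesthesiologists
gender = rng.integers(0, 2, size=n).astype(float)  # hypothetical 0/1 covariate
se = rng.uniform(0.1, 0.5, size=n)           # per-ratee standard errors
# Empirical Bayes estimates; the true covariate effect is set to zero.
score = 1.5 + 0.0 * gender + rng.normal(0.0, se)

w = 1.0 / se**2                              # weights = inverse squared SE
X = np.column_stack([np.ones(n), gender])
# Weighted least squares: solve (X' W X) beta = X' W y.
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * score))
# beta[0] estimates the intercept; beta[1] the covariate effect (near 0 here).
```

Weighting by 1/SE² makes precisely estimated anesthesiologists count more, consistent with the abstract's description.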
Journal article
First online publication 04/08/2026
Anesthesia and analgesia
no abstract | Perspective
Abstract
606. An Evaluation of the Efficacy of Regional Anesthesia in Reducing Donor Site Pain
Published 04/06/2026
Journal of burn care & research, 47, Suppl 1, S362 - S362
Introduction
Donor site pain remains problematic for burn patients. While pain treatment is multimodal, opioids remain the mainstay. The efficacy and use of regional anesthesia for donor site pain are understudied. The purpose of this study was to assess the impact of donor site blocks on reported pain and postoperative opioid use.
Methods
Patients admitted to the burn center from 2011-2025 who had thigh donor sites were included in the study. Data on demographics, burn history, hospital course, and regional anesthesia procedures were collected from the burn registry and the electronic medical record (EMR) system. Outcome variables included pain ratings and opioid use in milligram morphine equivalents (OMEs). Descriptive statistics were used to determine differences between the block and non-block groups, with additional analyses by regional anesthesia subgroup. Standardized mean differences (SMDs) larger than 0.25 were considered potentially substantive.
Results
The retrospective cohort study included 524 patients: 164 (31.3%) with donor site blocks and 360 (68.7%) without. Overall, the majority were male (394, 75.2%), with a mean TBSA of 8.5% (range 0.4-72.5%). The groups were well matched on demographics, burn injury, and hospital course. Overall preoperative (pre-op) pain was 3.8 ± 2.4, and postoperative (post-op) pain at 0-24, 0-48, and 0-72 hours was 5.2 ± 2.1, 4.8 ± 1.9, and 4.8 ± 1.9, respectively. The highest OMEs occurred during hours 0-24, doubling from pre-op (51.3 ± 57.3 pre-op vs 105.3 ± 80.7 post-op). There were no differences in pain ratings or OMEs received between the groups. Within the block subgroups, indwelling catheter patients (87, 53.0%) had higher OMEs (71.4 vs 34.9, SMD 0.68) and pain (4.2 vs 3.4, SMD 0.31) in the preoperative period than patients who received single-shot blocks (77, 47.0%). Among patients with catheters, groin catheters (30, 34.3%) remained in place for 42.2 ± 15.2 hours and epidurals (57, 65.6%) for 47.8 ± 24.2 hours, with groin catheters having higher OMEs and reported pain across multiple periods.
Conclusions
This single-center, retrospective study shows no substantive difference in subjective pain control or opioid use with donor site regional anesthesia. However, the subjective and non-specific pain ratings, the absence of uniform analgesia protocols, and the inability to control for confounding variables limit the findings. Because regional anesthesia adds to the cost of care, a prospective study is needed to determine its role in donor site analgesia.
Applicability of Research to Practice
The findings of this study also highlight the difficulties in assessing subjective values such as pain and support the need for collaboration between burn and anesthesia providers. This study has the potential to lead to standardized protocols for pain control in the burn unit, substantially improving patients' experiences.
Funding for the study
Medical School Funding.
Journal article
Published 04/01/2026
Indian journal of anaesthesia, 70, 4, 516 - 525
Background and Aims: The clinical prediction of post-operative nausea and vomiting (PONV) is still mainly based on scoring systems developed more than two decades ago. We systematically reviewed machine learning (ML) studies of PONV risk prediction.
Methods: We searched PubMed, Scopus, Web of Science, and Google Scholar for studies published through 14 September 2025. Using the area under the receiver operating characteristic curve (AUC) and its standard error, we compared predictive performance with Apfel's original 4-parameter pre-operative scoring system (AUC 0.68). We assessed the quality of reporting using the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis + Artificial Intelligence (TRIPOD+AI) framework.
Results: Of 21 eligible studies, 16 were conducted in Asian countries. Three studies of mixed surgical populations reported estimated AUCs (0.714–0.814) numerically exceeding Apfel's (0.68). These models included not only pre-operative but also intra-operative variables (e.g., anaesthetic drugs) for model development. None of the studies provided their models in a form sufficient for implementation (e.g., computer code with estimated parameters or a web page for calculations). Furthermore, none specified how the standard errors were calculated, precluding assessment of their reliability compared with Apfel's logistic regression model. Secondary analyses found that models for specific surgical populations reported larger observed AUCs than those for mixed populations.
Conclusion: Although some ML algorithms reported higher discriminatory power than Apfel's PONV risk prediction, none satisfied the TRIPOD+AI reporting criteria sufficiently for clinical replacement by departments. Future research should prioritise open science principles to ensure that scientific advances can be tested for generalisability and efficacy in reducing PONV.
This improved predictive performance may be realised for clinical decision-making shortly before the end of surgery, rather than for prophylaxis chosen pre-operatively.
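The review's comparison criterion, an AUC together with its standard error, lends itself to a simple two-sided z-test assuming independent, approximately normal estimates. The AUC and SE values below are hypothetical (the review notes that reviewed studies did not report how their standard errors were computed, which such a comparison requires).

```python
from math import erf, sqrt

def compare_auc(auc1, se1, auc2, se2):
    """Two-sided z-test P-value for the difference between two AUCs,
    assuming independent samples with approximately normal estimates."""
    z = (auc1 - auc2) / sqrt(se1**2 + se2**2)
    # Standard-normal two-sided tail probability via the error function.
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p

# Hypothetical: a reported AUC of 0.75 (SE 0.02) vs Apfel's 0.68, here
# assigned a hypothetical SE of 0.02 as well.
z, p = compare_auc(0.75, 0.02, 0.68, 0.02)
```

Without a reported standard error, the z statistic cannot be formed, which is the review's reliability criticism.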
Journal article
Published 03/2026
Journal of clinical anesthesia, 110, 112136
At surgical suites with long workdays, anesthesia clinicians typically receive lunch breaks. We estimated the percentage relative impact of decision-making processes on 30-min breaks completed during cases' surgical periods and during two-hour windows (e.g., 11:00 AM to 1:00 PM).
Discrete-event simulations of breaks were performed using a retrospective cohort of a large teaching hospital's 15 years of actual dates (N = 5481 days), operating rooms (N = 53) in three surgical suites, surgical times (N = 460,354), and scheduled procedures (N = 30,212).
Giving breaks preferentially to rooms with cases that had the least predicted time left until the end of surgery, provided the end of surgery was expected to be late enough to finish after the break, resulted in 16.5% fewer breaks versus giving preference to the longest ongoing cases (P < 0.0001). Pooled lists (i.e., using a single queue) with preference for the longest ongoing cases resulted in 7.2% more complete breaks overall, with substantive increases achieved for all three suites (all four P < 0.0001). Using a pooled list and giving preference to the longest ongoing cases achieved 28.4% more complete breaks than assigning each clinician to serve sequential near-adjacent rooms and having those clinicians prioritize the cases with the least predicted time until the end of surgery (95% confidence interval 27.7–29.1%). Sensitivity analyses showed that results were insensitive to the specific time windows for breaks. Sensitivity analyses also showed the mechanism. If every case in every room each day were the same surgical procedure, then prioritizing cases with the least predicted time left would be comparable to prioritizing cases that have been ongoing the longest. However, in the presence of high coefficients of variation in surgical times, following log-normal distributions, prioritizing cases with the least predicted time left resulted in more incomplete breaks.
For each clinician in a suite giving breaks, assign a first room to break. Then, while the first set of breaks is being completed, choose the next set of rooms for breaks with preference to the cases that have been ongoing the longest.
• Clinical directors working at surgical suites with long workdays assign lunch breaks
• Breaks were simulated using 15 years of actual dates, rooms, and surgical times
• Prioritizing breaks to cases with the least predicted time left resulted in fewer breaks
• Prioritizing breaks to cases that had been ongoing the longest resulted in more breaks
• Single, pooled queues resulted in more successfully completed breaks
• Results were insensitive to the specific times for the breaks (e.g., 11 AM–1 PM)
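The recommended selection rule can be sketched as follows. The data structure, field names, and the eligibility check (enough predicted time left for the 30-min break to finish) are illustrative assumptions, not the study's simulation code.

```python
from dataclasses import dataclass

@dataclass
class OngoingCase:
    room: str
    minutes_ongoing: float         # time since surgery start
    predicted_minutes_left: float  # predicted time until end of surgery

def next_room_for_break(cases, break_minutes=30.0):
    """From a single pooled queue, give the next break to the eligible case
    that has been ongoing the longest; None if no case is eligible."""
    eligible = [c for c in cases if c.predicted_minutes_left >= break_minutes]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c.minutes_ongoing).room

# Hypothetical snapshot of three rooms:
cases = [OngoingCase("OR 1", 95, 20),   # longest ongoing, but ends too soon
         OngoingCase("OR 2", 80, 45),
         OngoingCase("OR 3", 30, 120)]
print(next_room_for_break(cases))  # OR 2
```

OR 1 is skipped because its case is predicted to end before the break would finish; among the eligible rooms, OR 2 has been ongoing the longest.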
Journal article
Published 02/2026
Anesthesia and analgesia, 142, 2, 393 - 402
Human studies of awakening from general anesthesia inform understanding of the neural mechanisms underlying recovery of consciousness. Probability distributions of times for emergence from anesthesia provide mechanistic information on whether putative biological models are generalizable. Previously reported distributions involved nonhomogeneous groups, making them unsuitable for scientific comparisons. We used a retrospective cohort to identify surgeon-procedure combinations yielding homogeneous groups of patients and anesthetics, to assess the probability distribution of extubation times and thereby inform scientific studies of awakening from anesthesia. We hypothesized an acceptable fit to a log-normal distribution.
Extubation times were recorded by anesthesia practitioners using an event button in the electronic health record. From 2011 through 2023, there were 182,374 cases with general anesthesia, not positioned prone, tracheal intubation after operating room entrance, interval from start to end of surgery ≥1 hour, and inhalational agent mean minimum alveolar concentration (MAC) fraction measured from case start through surgery end ≥0.6. We applied joint criteria of the same primary surgeon, surgical procedure, MAC fraction of each inhalational agent in 0.1 increments, and binary categories of adult, trainee finishing the anesthetic, bispectral index (BIS) monitor, N2O, sugammadex, and neostigmine. We considered all combinations of categories with ≥40 cases. We used Gas Man simulation to infer the probability distribution of volatile agent concentrations in the vessel-rich group (ie, brain).
There were 48 cases among patients having oral surgery extractions by 1 surgeon, without anesthesia trainees, with sevoflurane anesthesia at 0.3 MAC fraction at surgery end, and without N2O, BIS monitor, or neuromuscular block reversal. Their extubation times followed a log-normal distribution (Shapiro-Wilk W = 0.98, P = .68). For the computer simulations, we assumed that patients differed solely in their binary threshold of vessel-rich group sevoflurane concentration at awakening (eg, patients with an awakening threshold of 0.26% would be unconscious for 0.1 to 14.8 minutes as sevoflurane is exhaled but the concentration remains ≥0.26%, and abruptly transition to consciousness at 15 minutes when the concentration reaches 0.25%). Expected awakening times would then appear to follow a log-normal distribution.
A homogeneous patient population had a log-normal distribution of extubation times. Generalizable models of awakening should have that distribution. Clinicians change awakening times by their choice of agent and its MAC fraction at surgery end. Simulation suggests that the normal distribution in the log time scale for awakening, among patients with similar conditions, can represent a relatively uniform distribution among patients in the vessel-rich group (brain) partial pressure when the abrupt transition to consciousness occurs.
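The distributional check described above can be reproduced in outline: simulate log-normal extubation times and apply a Shapiro-Wilk test to their logarithms. The parameters below (median 10 minutes, log-scale SD 0.3) are hypothetical, not the study's estimates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated, not the study's data: 48 extubation times (minutes) drawn
# from a log-normal distribution.
times = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=48)
w, p = stats.shapiro(np.log(times))
# A W near 1 and a non-small P-value are consistent with log-normality of
# the raw times (the article reports W = 0.98, P = .68 for its 48 cases).
```

Testing the log-transformed times, rather than the raw times, is what makes this a check of log-normality.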
Journal article
Published 02/2026
British journal of anaesthesia : BJA, 136, 2, 525 - 533
Studies in anaesthesiology frequently use generalised linear models to identify quantitatively important 'independent predictors' of log-normally distributed outcomes, such as surgical and anaesthesia times. However, the performance of common multiple-comparison procedures at preventing type I and II errors is unknown for these problems.
We conducted Monte Carlo simulations to evaluate methods for controlling the familywise error rate (FWER) and false discovery rate (FDR). Simulated datasets had log-normal outcomes and three binary predictors, with varying correlation among them (independent, strong positive, or moderate negative). We applied four FWER (Bonferroni, Šidák, Holm-Bonferroni, and Hochberg) and two FDR (Benjamini-Hochberg and Benjamini-Yekutieli) procedures to the P-values derived from the generalised linear models.
Without adjustment for multiple comparisons, the FWER was large (12.6-14.8% instead of the correct [nominal] 5.0%). Among FWER methods, the Bonferroni adjustment was the most accurate, with rates consistently close to the nominal 5.0% level across all correlation scenarios (5.2-5.3%). For FDR control, the Benjamini-Yekutieli procedure was effective for independent and negatively correlated predictors (4.5-5.1%) but failed to control the FDR under strong positive predictor correlation (6.0-9.5%).
When using generalised linear models to identify predictors of log-normal outcomes, the simplest approach, Bonferroni adjustment, provided reliable control of the FWER. The Benjamini-Yekutieli procedure is the most suitable for controlling the FDR, but our findings show it can be anti-conservative (i.e. unreliable) when potential predictors of the anaesthesia times are positively correlated (i.e. precisely the conditions that would generally hold for these problems).
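A minimal version of this kind of Monte Carlo experiment, deliberately simplified to normal outcomes, independent binary predictors, and per-predictor z-tests (the article's simulations used log-normal outcomes, generalised linear models, and several correlation structures):

```python
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(0)
alpha, m, n, reps = 0.05, 3, 200, 2000
unadj = bonf = 0
for _ in range(reps):
    X = rng.integers(0, 2, size=(n, m)).astype(float)  # 3 binary predictors
    y = rng.normal(size=n)        # outcome unrelated to X: all nulls true
    pvals = []
    for j in range(m):
        g1, g0 = y[X[:, j] == 1], y[X[:, j] == 0]
        # Large-sample two-group z-test for predictor j.
        z = (g1.mean() - g0.mean()) / sqrt(g1.var(ddof=1) / len(g1)
                                           + g0.var(ddof=1) / len(g0))
        pvals.append(2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0)))))
    unadj += any(pv < alpha for pv in pvals)      # any unadjusted rejection?
    bonf += any(pv < alpha / m for pv in pvals)   # Bonferroni-adjusted
# Unadjusted FWER near 1 - 0.95**3 ≈ 0.14; Bonferroni near the nominal 0.05.
```

Even this stripped-down setting reproduces the qualitative result: the unadjusted familywise error rate inflates toward 14%, while dividing alpha by the number of tests restores it to near 5%.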