Our paper, “Be Intentional About Fairness!: Fairness, Size, and Multiplicity in the Rashomon Set” (Dai, Ravishankar, Yuan, Black, and Neill), was awarded Best Paper Honorable Mention at EAAMO 2025!
reporting bias in 311 calls
Our paper, “Estimating Reporting Bias in 311 Complaint Data,” was featured on NPR’s The Academic Minute.
Congratulations Dr. Boxer!
Congratulations to Kate Boxer for the successful defense of her PhD thesis, “Fair and Explainable Machine Learning: Estimating Bias, Detecting Disparities, and Designing for Algorithmic Recourse”!
using health data to predict poor-quality housing
Our paper, “Housing-Sensitive Health Conditions Can Predict Poor-Quality Housing,” is now out (open access) in the February 2024 issue of Health Affairs. Thanks to Health Affairs for highlighting our work in their Feb. 6 special issue briefing and Feb. 26 Health Affairs Insider Journal Club. Thanks also to David Brand (Gothamist), Steve Scott (WCBS 880 Newsradio), and Robert Polner (NYU) for their wonderful press coverage!
2 AAAI papers accepted!
The ML4G Lab had two papers accepted to AAAI 2023:
Katie Rosman and Daniel B. Neill. Detecting anomalous networks of opioid prescribers and dispensers in prescription drug data. Proc. 37th AAAI Conf. on Artificial Intelligence, 2023, in press.
Pavan Ravishankar, Qingyu Mo, Edward McFowland III, and Daniel B. Neill. Provable detection of propagating sampling bias in prediction models. Proc. 37th AAAI Conf. on Artificial Intelligence, 2023, in press.
Congratulations all!