CS Colloquium Series | Eric Nalisnick
Towards a Statistical Foundation for Human-AI Collaboration
Artificial intelligence is being deployed in ever more consequential settings, such as healthcare and autonomous driving, so we must ensure that these systems are safe and trustworthy. One near-term solution is to involve a human in the decision-making process and enable the system to ask for help in difficult or high-risk scenarios. I will present recent advances in the “learning to defer” paradigm: decision-making responsibility is allocated to either a human or a model, depending on who is more likely to take the correct action. Specifically, I will present novel formulations that better model the human collaborator’s expertise and that can support multiple human decision-makers. I will also describe paths for future work, including improvements to data efficiency and applications to language models.
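To give a flavor of the allocation rule at the heart of learning to defer, here is a minimal sketch: defer to the human whenever the model's confidence falls below an estimate of the human's accuracy. This is an illustrative confidence-threshold heuristic, not the trained deferral functions presented in the talk; the function name and the example probabilities are hypothetical.

```python
import numpy as np

def defer_decision(model_probs, human_accuracy):
    """Route each example to the model or the human collaborator.

    Defers (returns True) when the model's confidence — its maximum
    predicted class probability — is below the estimated human
    accuracy. A simple stand-in for a learned deferral rule.
    """
    confidence = model_probs.max(axis=1)
    return confidence < human_accuracy

# Hypothetical class probabilities for four examples over three classes.
probs = np.array([
    [0.90, 0.05, 0.05],   # confident -> model decides
    [0.40, 0.35, 0.25],   # uncertain -> defer to the human
    [0.60, 0.30, 0.10],
    [0.34, 0.33, 0.33],
])
defer = defer_decision(probs, human_accuracy=0.75)
print(defer.tolist())  # [False, True, True, True]
```

A learned variant would replace the fixed threshold with a rejector trained jointly with the classifier, so that deferral accounts for the human's expertise on specific regions of the input space.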