Enhanced Computing Support for Multidisciplinary Medical Team Meetings
Research Field: Computer Science
Lead PI: Dr. Saturnino Luz
Abstract: Multiparty meeting browsing and retrieval is an active research area. One open problem is how to segment multimodal recordings automatically and offer reference points at topic changes. This study proposes a novel solution based on content-free features.

A general solution to meeting browsing is difficult because meeting structures vary and there is no uniform definition of topic. The research proposed here is therefore confined to multidisciplinary medical team meetings (MDTMs). MDTMs have become an established practice in many hospitals, and they have a relatively predictable structure: the participants are clinical specialists working together, the discussion is divided into patient cases, and for each patient the discussion is organized into a sequence of steps. All of this structure can be exploited to design an algorithm that automatically locates reference points in audiovisual recordings.

The most salient event in an MDTM is the patient case discussion (PCD). A PCD is a highly structured event, and its vocalization patterns should therefore be amenable to automatic generalization. This research focuses on automatic PCD segmentation, with the potential that solutions developed for MDTMs can be extended to more general meeting segmentation tasks.

The initial steps in this study are speaker turn detection, speaker segmentation and speaker clustering. Once vocalization turns are segmented automatically, the research turns to the main question: topic (PCD) segmentation. In further steps, the PCDs may themselves be segmented into discussion stages. This work should lead to a good understanding of meetings and, ultimately, allow multimodal meeting data to be incorporated into electronic patient records.
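The idea of segmenting meetings from content-free features can be illustrated with a toy sketch. The function below is a hypothetical illustration only, not the study's actual algorithm: it takes vocalization events as `(speaker_id, start_time, end_time)` tuples (an assumed representation of the speaker-segmentation output) and proposes candidate case boundaries wherever the silence between consecutive vocalizations exceeds a threshold.

```python
# Illustrative sketch (assumption, not the proposed method): candidate
# PCD boundaries from content-free vocalization features alone.
# A vocalization event is a (speaker_id, start_time, end_time) tuple,
# with times in seconds.

def candidate_boundaries(events, min_gap=5.0):
    """Return midpoints of silences longer than `min_gap` seconds
    between consecutive vocalizations, as a naive proxy for
    patient-case transitions."""
    # Order events by start time so gaps are measured between neighbours.
    events = sorted(events, key=lambda e: e[1])
    boundaries = []
    for (_, _, prev_end), (_, next_start, _) in zip(events, events[1:]):
        gap = next_start - prev_end
        if gap >= min_gap:
            # Place the candidate boundary in the middle of the silence.
            boundaries.append(prev_end + gap / 2.0)
    return boundaries


# Example: a short pause (1 s) is ignored; a long pause (8 s) yields
# one candidate boundary.
events = [("A", 0.0, 10.0), ("B", 11.0, 20.0), ("A", 28.0, 35.0)]
print(candidate_boundaries(events))  # → [24.0]
```

In practice such a pause heuristic would be only one feature among the vocalization statistics (turn lengths, speaker changes, overlap) that the proposed content-free approach could draw on.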