PaperTalker uses a pipeline of "builders" (Slide, Subtitle, Cursor, and Talker) to create synchronized academic presentation videos.
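The builder pipeline described above can be sketched as a sequence of stages, each enriching a shared state. The class names mirror the four builders named in the text, but every method, field, and string below is an illustrative assumption, not PaperTalker's actual API:

```python
# Hypothetical sketch of a sequential builder pipeline; the structure and
# all names here are illustrative, not PaperTalker's real implementation.
class Builder:
    def build(self, state: dict) -> dict:
        raise NotImplementedError

class SlideBuilder(Builder):
    def build(self, state):
        # Turn the paper content into presentation slides.
        state["slides"] = ["slide for: " + state["paper"]]
        return state

class SubtitleBuilder(Builder):
    def build(self, state):
        # Generate narration text aligned with each slide.
        state["subtitles"] = ["narration for slide 1"]
        return state

class CursorBuilder(Builder):
    def build(self, state):
        # Plan cursor movements synchronized with the narration.
        state["cursor"] = "cursor positions aligned to narration"
        return state

class TalkerBuilder(Builder):
    def build(self, state):
        # Render the talking-head avatar synced to the subtitles.
        state["talking_head"] = "avatar video synced to subtitles"
        return state

def run_pipeline(paper: str) -> dict:
    """Run the four builders in order over a shared state."""
    state = {"paper": paper}
    for builder in (SlideBuilder(), SubtitleBuilder(),
                    CursorBuilder(), TalkerBuilder()):
        state = builder.build(state)
    return state

result = run_pipeline("Paper2Video")
print(sorted(result.keys()))
```

The sequential hand-off is the point of the sketch: each builder consumes what earlier stages produced, which is how slides, narration, cursor, and avatar stay synchronized.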

The Paper2Video benchmark contains 101 research papers paired with author-created presentation videos and metadata.
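The paper-video pairing can be modeled as a simple record type. The field names (`paper_id`, `paper_pdf`, `video_path`, `metadata`) and file paths are illustrative assumptions, not the benchmark's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Paper2VideoSample:
    """One benchmark entry: a research paper paired with its author-created
    presentation video and accompanying metadata.
    (Field names are illustrative, not the dataset's real schema.)"""
    paper_id: str                                   # paper identifier
    paper_pdf: str                                  # path to the paper PDF
    video_path: str                                 # path to the presentation video
    metadata: dict = field(default_factory=dict)    # e.g. title, authors, duration

# Hypothetical construction of the 101 paper-video pairs.
samples = [
    Paper2VideoSample(
        paper_id=f"paper_{i:03d}",
        paper_pdf=f"papers/paper_{i:03d}.pdf",
        video_path=f"videos/paper_{i:03d}.mp4",
    )
    for i in range(101)
]
print(len(samples))
```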

Paper2Video: Automatic Video Generation from Scientific Papers

The video file is a sample from the Paper2Video dataset.

It is part of the research presented in the paper "Paper2Video: Automatic Video Generation from Scientific Papers". This work introduces PaperTalker, a multi-agent framework designed to automatically transform academic papers into presentation videos featuring slides, narration, and a talking-head avatar.

Key Paper Details


You can find the dataset, code, and project details on the Official Project Page or in the GitHub Repository.