Volumetric Imaging & Video: From Pixels to Froxels

The representation of captured images and video has remained unchanged since its infancy: images are represented as pixels per line and lines per image, and videos are simply a series of images. But we have entered an era of profound change in content capture: multi-camera arrays even in mobile devices, depth sensing via Time of Flight or Gated Imaging, volumetric capture via LiDAR, and new capture paradigms like event cameras show that computational imaging differs significantly from classical film-based capture, and hence new forms of representing images and videos are needed. The (pro-)seminar will review a palette of approaches for representing volumetric content: starting from multiview video plus depth (MVD), point clouds, and voxels, we will introduce and discuss neural representations such as neural radiance fields and neural surfaces, as well as volumetric representations that take the capture setup into account (Froxels), and compare them to the classical representations.

Requirements: It is advantageous, but not strictly required, to have passed basic courses on Image Processing and/or Computer Graphics. You should be ready to read and review three scientific papers.

Places: 12 (7 Seminar + 5 Proseminar)

Dates:

  • Kick-Off Meeting: April 23rd, 2024, 14:15 in C6.3 Room 9.05
  • Initial Presentation (Planning, global introduction of topic): June 4th, 2024, 14:15 in C6.3 Room 9.05
  • Final Presentation: July 16th, 2024, 14:15 in C6.3 Room 9.05

Tutor: Robin Kremer (kremer@nt.uni-saarland.de)