Neural Radiance Fields (NeRFs) represent the appearance and geometry of a scene as an implicit function stored in the weights of a neural network. They are trained from images with known camera intrinsics and extrinsics, without necessarily requiring any information about the scene's 3D structure (although such information can help during training). NeRFs are rapidly gaining popularity due to their ability to produce photo-realistic renderings of a scene from novel viewpoints. In particular, they can precisely model view-dependent effects such as reflections and refractions. This tutorial gives a brief introduction to Neural Radiance Fields, discussing the original formulation as well as selected state-of-the-art variants. In addition, we take a brief look at modern NeRF software packages.
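For context on the volume rendering step discussed around the 4:47 mark, the basic idea is that a network maps a 3D point and a viewing direction to a color and a volume density, and a pixel is obtained by alpha-compositing samples along the camera ray. The following is a minimal, illustrative sketch (not code from the lecture); toy_radiance_field and render_ray are hypothetical names standing in for the trained MLP and the renderer.

```python
import numpy as np

def toy_radiance_field(xyz, view_dir):
    # Stand-in for the trained MLP: maps 3D points and a viewing direction
    # to RGB colors and volume densities (sigma). Here just a toy function.
    rgb = 0.5 * (np.sin(xyz) + 1.0)                                # fake colors in [0, 1]
    sigma = np.exp(-np.linalg.norm(xyz, axis=-1, keepdims=True))   # fake densities
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    # Sample points along the ray and alpha-composite them front to back,
    # following the standard NeRF volume rendering quadrature.
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction                 # (n_samples, 3)
    rgb, sigma = toy_radiance_field(pts, direction)
    delta = np.append(np.diff(t), 1e10)                   # distances between samples
    alpha = 1.0 - np.exp(-sigma[:, 0] * delta)            # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # accumulated transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)            # final pixel color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(pixel)
```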
Lecture by Torsten Sattler from Czech Technical University in Prague.
0:00 Introduction
1:05 Various Scene Representations
2:40 Signed Distance Function
3:22 Neural Radiance Fields
4:47 Volume Rendering
10:35 Learning a Volumetric Representation
21:25 Plenoxels
29:37 Multiresolution Hash Encoding
33:05 Instant Neural Graphics Primitives (demo)
41:25 Why is extraction of geometry from NeRFs difficult?
44:43 Processing highly variable data -- photos taken "in the wild"
48:20 Large scale scenes
49:33 Extraction of 3D geometry
56:20 Depth and normal cues
1:02:03 nerfstudio (demo)
1:14:49 NeRF synthesis - generative NeRFs
1:17:36 Wrap up