Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes


https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/e816bdf0-b213-4f28-418a-513f26f04950.png


Figure 1. Given a set of posed sparse-view images for a large-scale scene, we reconstruct global illumination and SVBRDFs. The recovered properties are able to produce convincing results for several mixed-reality applications such as material editing, editable novel view synthesis, and relighting. Note that we change the roughness of all walls and the albedo of all floors. The detailed specular reflectance shows that our method successfully decomposes physically-reasonable SVBRDFs and lighting. Please refer to the supplementary videos for more animations.



https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/bc8829bf-ee4f-4b4b-8f45-3b6dcf20cb41.png


Figure 2. Overview of our inverse rendering pipeline. Given sparse calibrated HDR images of a large-scale scene, we reconstruct the geometry and HDR textures as our lighting representation. PBR material textures of the scene, including albedo and roughness, are optimized by differentiable rendering (DR). The ambiguity between materials is disentangled by the semantic prior and the room-segmentation prior. Gradient flows are shown with a green background.
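The material optimization step above can be sketched as a plain differentiable-rendering loop. The toy forward model below assumes a purely diffuse texel (rendered = albedo × irradiance), so the photometric-loss gradient is analytic; names such as `render_diffuse` and `optimize_albedo` are illustrative and not from the paper, and the real pipeline additionally optimizes roughness with a full BRDF.

```python
import numpy as np

def render_diffuse(albedo, irradiance):
    """Toy forward model: purely diffuse shading (no roughness term)."""
    return albedo * irradiance

def optimize_albedo(target, irradiance, steps=200, lr=0.1):
    """Recover per-texel albedo by gradient descent on an L2 photometric
    loss; d/d_albedo of (albedo*E - target)^2 is 2*E*(albedo*E - target)."""
    albedo = np.full_like(target, 0.5)          # neutral initialization
    for _ in range(steps):
        residual = render_diffuse(albedo, irradiance) - target
        grad = 2.0 * irradiance * residual      # analytic gradient
        albedo = np.clip(albedo - lr * grad, 0.0, 1.0)
    return albedo

# Example: ground-truth albedo 0.8 observed under irradiance 1.2.
irradiance = np.array([1.2])
target = render_diffuse(np.array([0.8]), irradiance)
recovered = optimize_albedo(target, irradiance)
```

The fixed irradiance stands in for the precomputed lighting of the paper's pipeline; in practice the gradient would come from an autodiff framework rather than a hand-derived expression.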


https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/cd3b3ef7-9e9a-480c-ff7e-9783019d7992.png

Figure 3. Visualization of TBL (left) and precomputed irradiance (right). For any surface point x, the incident radiance from direction ωi can be queried from the HDR texture at the point x′, which is the intersection between the geometry and the ray r(t) = x + tωi. The irradiance can be queried directly from the precomputed irradiance at x via NIrF or IrT.
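The TBL query described in the caption can be sketched as follows. As a stand-in for real scene geometry and HDR textures, the room below is a hypothetical axis-aligned unit box whose walls store a single constant radiance each; `query_incident_radiance` marches the ray r(t) = x + tωi to the first wall it hits and returns that wall's value, which in the paper would be an HDR texture lookup at x′.

```python
import numpy as np

# Hypothetical room: axis-aligned unit box [0,1]^3 with one constant HDR
# radiance per wall, standing in for the paper's texture-based lighting.
FACE_RADIANCE = {
    "+x": 0.2, "-x": 0.2, "+y": 0.3, "-y": 0.3,
    "+z": 5.0,   # bright ceiling acts as the light source
    "-z": 0.1,   # dark floor
}

def query_incident_radiance(x, w_i, eps=1e-6):
    """Incident radiance at x from direction w_i: intersect the ray
    r(t) = x + t*w_i with each wall plane and keep the nearest valid hit."""
    best_t, best_face = np.inf, None
    for axis, pos, neg in ((0, "+x", "-x"), (1, "+y", "-y"), (2, "+z", "-z")):
        if abs(w_i[axis]) < eps:
            continue                              # ray parallel to this pair
        for plane, face in ((1.0, pos), (0.0, neg)):
            t = (plane - x[axis]) / w_i[axis]
            if eps < t < best_t:
                p = x + t * w_i                   # candidate hit point x'
                others = [a for a in range(3) if a != axis]
                if all(-eps <= p[a] <= 1.0 + eps for a in others):
                    best_t, best_face = t, face
    return FACE_RADIANCE[best_face]

# Looking straight up from the floor centre hits the bright ceiling.
L_up = query_incident_radiance(np.array([0.5, 0.5, 0.0]),
                               np.array([0.0, 0.0, 1.0]))
```

A real implementation would intersect against the reconstructed mesh (e.g. with a BVH) and bilinearly sample the HDR texture at x′ rather than return a per-wall constant.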




https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/1a945456-21b6-4839-80b1-280ebf5fe6fa.png


Table 1. Quantitative comparison on our synthetic dataset. Our method significantly outperforms the state-of-the-art methods in roughness estimation. NeILF∗ [58] denotes our method with their implicit lighting representation.



https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/2625d3ea-293b-4461-a4e2-f799842c0497.png


Figure 4. Qualitative comparison on the synthetic dataset. Our method is able to produce realistic specular reflectance. NeILF∗ [58] denotes our method with their implicit lighting representation.



https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/6256a9dd-4d7e-4c9d-fed3-ed072e22bd18.png


Figure 5. Qualitative comparison in the 3D view on the challenging real dataset. This sample is Scene 1. Our method reconstructs globally-consistent and physically-reasonable SVBRDFs, while other approaches struggle to produce consistent results and to disentangle the ambiguity of materials. Note that the low roughness (around 0.15 in ours) leads to the strong highlights, which are similar to the GT.
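The caption's observation that low roughness produces strong highlights follows directly from the microfacet normal-distribution term. The sketch below evaluates the GGX/Trowbridge–Reitz NDF at its peak (half-vector aligned with the normal), where it reduces to 1/(π α²), so the specular lobe sharpens as roughness drops; it assumes α equals the roughness value directly, although some conventions use α = roughness².

```python
import math

def ggx_ndf(n_dot_h, alpha):
    """GGX/Trowbridge-Reitz normal distribution function D(h)."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# Peak of the specular lobe (h aligned with n) at two roughness values:
peak_smooth = ggx_ndf(1.0, 0.15)   # roughness comparable to the recovered floor
peak_rough  = ggx_ndf(1.0, 0.60)   # a much rougher surface for contrast
```

At n·h = 1 the expression collapses to 1/(π α²), so the 0.15-roughness surface concentrates roughly sixteen times more energy at the mirror direction than the 0.60-roughness one, which is why the recovered floor exhibits sharp highlights.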



https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/bf7920c4-7780-49a3-5798-bec591cac11d.png


Table 2. Quantitative comparison of re-rendered images on our real dataset.



https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/2aa9f6c7-e54f-476d-4aac-000485fd2cf7.png


Figure 6. Qualitative comparison in the image view on the challenging real dataset. From left to right: Scene 8 and Scene 9. Red denotes the ground-truth image. Our physically-reasonable materials are able to render an appearance similar to the GT. Note that InvRender [65] and NeILF [58] do not produce correct highlights, and NVDIFFREC [41] fails to resolve the ambiguity between albedo and roughness.



https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/78c504e9-c860-4e37-64e8-08bb000761e2.png


Table 3. Ablation study of roughness estimation on the synthetic dataset.


https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/7f3023bf-cb1d-4eb1-f201-147011dfd610.png


Figure 9. Ablation study of our material optimization strategy in the 3D mesh view on the challenging real dataset. This sample is Scene 11. In the baseline, we jointly optimize albedo and roughness.



https://vrlab-public.ljcdn.com/release/vrsaas/work/tag/894f7a7b-9a4a-4f78-44df-ec1e54aba6bb.png


Figure 10. Editable novel view synthesis. In Scene 8, we edit the albedo of the wall and the roughness of the floor. In Scene 9, we edit the albedo of the floor and the wall. Our method produces convincing results (note the lighting effects on the floor and wall).
