Academic Achievements and Technical Insights

Structure-aware Indoor Scene Reconstruction via Two Levels of Abstraction

Updated on May 8, 2024

Fig. 1. Goal of our approach. Our framework takes a raw mesh as input (a). The indoor scene is reconstructed as a watertight and compact structure mesh (b) and a detailed scene mesh (c), preserving two different levels of abstraction. Note that a texture map can be attached to the scene mesh for visualization using the method of Waechter et al. (2014) (d).

Fig. 2. Overview of our approach. Our algorithm starts from a dense triangular raw mesh generated from the point cloud of an indoor scene (a). The whole scene is then abstracted into 225 planar primitives that represent all parts of the input mesh (b). Among them, the 109 planar primitives that best approximate the structure objects of the indoor scene are selected (c), and 27 isolated objects are extracted from the non-structure parts (e). After that, the 109 structure planar primitives are assembled to form a structure mesh of all structure objects (d). Finally, the scene mesh is the union of the structure mesh and all 27 non-structure objects, which are repaired and simplified (f). Note that the back faces of the meshes in (a), (d) and (f) are not shown, and the ceiling planes are removed in (c) to better visualize the interior. Planar primitives in (b) and (c) are approximated by the alpha-shapes of their corresponding triangular facets; each primitive is drawn in a random color.
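Fig. 2(e) extracts isolated non-structure objects, and Fig. 3 notes this is done by detecting connected triangular facets in the original mesh. One standard way to realize such a step is a connected-component analysis over the triangle list, grouping faces that share an edge via union-find. The sketch below is illustrative only and is not the paper's implementation:

```python
def connected_face_components(faces):
    """Group triangles that share an edge into connected components.
    `faces` is a list of (i, j, k) vertex-index triples."""
    parent = list(range(len(faces)))

    def find(x):
        # Find the root of x with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Two faces are connected if they share an (undirected) edge.
    edge_to_face = {}
    for fi, (i, j, k) in enumerate(faces):
        for e in ((i, j), (j, k), (k, i)):
            key = (min(e), max(e))
            if key in edge_to_face:
                union(fi, edge_to_face[key])
            else:
                edge_to_face[key] = fi

    # Collect faces by their component root.
    comps = {}
    for fi in range(len(faces)):
        comps.setdefault(find(fi), []).append(fi)
    return list(comps.values())
```

Each returned component is a candidate isolated object; in the pipeline above, components belonging to structure planes would be filtered out first.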

Fig. 3. Scene decomposition. First, the input mesh (a) is over-segmented into a large number of planar primitives (b). Then, pairs of adjacent quasi-coplanar primitives are iteratively merged into larger ones until a meaningful plane configuration is attained (c). Next, ceiling and floor planes (d), wall planes (e), and small structure planes such as the yellow ones in (f) are detected in a hierarchical manner; together they compose the structure planes. Finally, isolated non-structure objects are extracted by detecting connected triangular facets in the original mesh.
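The iterative merging of adjacent quasi-coplanar primitives described in Fig. 3(b)-(c) can be sketched as a greedy loop: fit a least-squares plane to each primitive, and merge an adjacent pair whenever their normals are nearly parallel and each centroid lies near the other's plane. The angle and distance tolerances below are illustrative placeholders, not the paper's actual parameters:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point set: (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def quasi_coplanar(n1, c1, n2, c2, angle_deg=10.0, dist_tol=0.05):
    """Mergeable if normals are nearly parallel and each centroid is close
    to the other primitive's plane (tolerances are hypothetical)."""
    if abs(np.dot(n1, n2)) < np.cos(np.radians(angle_deg)):
        return False
    return (abs(np.dot(n1, c2 - c1)) < dist_tol and
            abs(np.dot(n2, c1 - c2)) < dist_tol)

def merge_primitives(primitives, adjacency):
    """Greedily merge adjacent quasi-coplanar primitives until stable.
    `primitives` maps id -> (N, 3) array; `adjacency` is a set of id pairs."""
    changed = True
    while changed:
        changed = False
        for a, b in sorted(adjacency):
            na, ca = fit_plane(primitives[a])
            nb, cb = fit_plane(primitives[b])
            if quasi_coplanar(na, ca, nb, cb):
                # Absorb b into a, then redirect b's adjacencies to a.
                primitives[a] = np.vstack([primitives[a], primitives[b]])
                del primitives[b]
                adjacency = {(x if x != b else a, y if y != b else a)
                             for x, y in adjacency if (x, y) != (a, b)}
                adjacency = {(min(x, y), max(x, y))
                             for x, y in adjacency if x != y}
                changed = True
                break
    return primitives
```

A production version would also re-test the merged primitive's fitting residual before committing a merge, so that long chains of slightly tilted patches do not drift away from planarity.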

Fig. 9. Qualitative comparisons with shape approximation methods on RGBD (left) and LIDAR (right) scenes. With a similar number of facets (about 1200), the simplified meshes returned by QEM, VSA and Structure preserve most of the large planar structures inside the indoor environment. However, these simplified models shrink at small structures because of the noise retained in the input raw meshes (see the cropped region). In contrast, our method produces more compact and structure-aware models in which most of these small but important structures are successfully reconstructed.

Fig. 10. Quantitative comparisons with shape approximation methods on complete (left) and partial (right) scenes. For the complete scene, Structure produces the model closest to the input raw mesh (see the colored points). In the case of large missing data, however, our method is robust enough to output a watertight mesh with the best geometric accuracy, while none of the three shape approximation methods is able to repair the holes. Moreover, our method processes a whole scene in dozens of seconds, one order of magnitude faster than Structure.
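The geometric accuracy in Figs. 10 and 13 is judged by how close the output model lies to the input points. A standard proxy for such comparisons is the chamfer distance between point sets sampled from the two surfaces; the brute-force sketch below is illustrative and not necessarily the paper's exact evaluation protocol:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric chamfer distance between point sets p (N, 3) and q (M, 3):
    mean nearest-neighbour distance in both directions (brute force, O(N*M))."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # pairwise (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For real scans with millions of points, the pairwise matrix is replaced by a k-d tree nearest-neighbour query (e.g. `scipy.spatial.cKDTree`), but the metric is the same.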

Fig. 12. Qualitative comparisons with the floorplan generation method FloorSP on LIDAR (rows 1–2) and RGBD (rows 3–5) data. On noisy and strongly non-Manhattan scenes, FloorSP generates non-manifold (row 3) and self-intersecting (rows 4 and 5) models. In addition, some walls are missed (row 1) or incorrectly aligned (rows 2, 3 and 5). In contrast, our method is more robust and recovers most of the wall structures, even for rooms with curved walls (rows 2 and 4).

Fig. 13. Quantitative comparisons against FloorSP on RGBD (top) and LIDAR (middle and bottom) data. Our method produces 3D models that are closer to the input wall points (see the colored points) than the 2.5D models FloorSP assembles from floorplan walls with a virtual thickness (10 cm). In particular, our method exhibits a lower error by recovering small structural details contained in the original mesh, such as two close walls.

Fig. 16. Ablation study. When the scene decomposition step is turned off (top row), all detected planes are treated as structure planes and used for structure-aware reconstruction; as a result, the structure mesh contains both structure and non-structure parts. When the local primitive slicing strategy is turned off (middle row), every structure primitive is sliced everywhere inside the bounding box; this increases the computational time and the number of polyhedral cells exponentially, and leads to a non-compact model with many protrusions. With both of these ingredients turned on (bottom row), a compact and structure-aware model is reconstructed within an acceptable time. In addition, our scene mesh achieves the best geometric accuracy, thanks to the separation of non-structure objects from structure parts.

Fig. 17. Performance on scalable scenes. Given the input raw mesh (top left), our pipeline generates two models with different levels of abstraction: a compact structure mesh ℳs (top right) and a detailed scene mesh ℳt (bottom left). The textured scene mesh is shown at bottom right.


Acknowledgements

This work was supported in part by NSFC (U2001206), GD Science and Technology Program (2020A0505100064, 2015A030312015), GD Talent Program (2019JC05X328), DEGP Key Project (2018KZDXM058), Shenzhen Science and Technology Program (RCJC20200714114435012) and the Beike fund. The authors would like to thank Beike for providing various types of indoor scenes, Jiacheng Chen for the code and datasets, Liangliang Nan for the comparison tools, and Jing Zhao and Mofang Cheng for technical advice.

 

Bibtex

@article{P2M21,
  title={Structure-aware Indoor Scene Reconstruction via Two Levels of Abstraction},
  author={Hao Fang and Cihui Pan and Hui Huang},
  journal={ISPRS Journal of Photogrammetry and Remote Sensing},
  volume={178},
  pages={155--170},
  year={2021},
}


More Academic Achievements and Technical Insights

Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes (April 25, 2024)
PhyIR: Physics-Based Inverse Rendering for Panoramic Indoor Images (April 25, 2024)
Floorplan Generation from 3D Point Clouds: A Space Partitioning Approach (April 25, 2024)