Academic Achievements and Capability Insights

Structure-aware Indoor Scene Reconstruction via Two Levels of Abstraction

Updated on May 8, 2024

Fig. 1. Goal of our approach. Our framework starts from a raw mesh as input data (a). The indoor scene is reconstructed as a watertight and compact structure mesh (b) and a detailed scene mesh (c), preserving different levels of abstraction. Note that a texture map can be attached to the scene mesh for visualization using the method of Waechter et al. (2014) (d).

Fig. 2. Overview of our approach. Our algorithm starts from a dense triangular raw mesh generated from the point cloud of an indoor scene (a). The whole scene is then abstracted by 225 planar primitives that represent all parts of the input mesh (b). Among them, 109 planar primitives that best approximate the structure objects of the indoor scene are selected (c), and 27 isolated objects are extracted from the non-structure parts (e). After that, the 109 structure planar primitives are assembled into a structure mesh of all structure objects (d). Finally, the scene mesh is the union of the structure mesh and all 27 non-structure objects, which are repaired and simplified (f). Note that the back faces of the meshes in (a), (d) and (f) are not shown, and ceiling planes are removed in (c) to better visualize the indoor environment. Planar primitives in (b) and (c) are approximated by the alpha-shape of their corresponding triangular facets; each primitive is shown in a random color.
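
Each planar primitive is essentially a best-fit plane for a group of triangular facets. A minimal Python sketch of such a fit, using area-weighted PCA of facet centroids (our own simplification; the paper's primitive extraction is more involved), might look like this:

import numpy as np

def fit_plane_to_facets(vertices, faces):
    """Fit a least-squares plane to a set of triangular facets.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns (normal, offset) such that normal . x + offset ~= 0 on the plane.
    Facets are weighted by their area, so large triangles dominate the fit.
    """
    tri = vertices[faces]                       # (F, 3, 3)
    centroids = tri.mean(axis=1)                # (F, 3)
    # Triangle areas from the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    w = areas / areas.sum()
    center = (w[:, None] * centroids).sum(axis=0)
    d = centroids - center
    cov = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0)
    # The plane normal is the eigenvector of the smallest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]
    offset = -normal @ center
    return normal, offset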

Fig. 3. Scene decomposition. First, the input mesh (a) is over-segmented into a large number of planar primitives (b). After that, pairs of adjacent quasi-coplanar primitives are iteratively merged into larger ones until a meaningful plane configuration is attained (c). Next, ceiling and floor planes (d), wall planes (e), as well as small structure planes such as the yellow ones in (f), are detected in a hierarchical manner and together compose the structure planes. Finally, isolated non-structure objects are extracted by detecting connected triangular facets in the original mesh.
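
A minimal sketch of the merging loop, assuming each primitive stores a fitted plane and its facet set, consistently oriented normals, and fixed angle/offset tolerances (the paper's actual stopping criterion and refitting steps are not reproduced here):

import numpy as np

def merge_quasi_coplanar(primitives, adjacency, angle_tol_deg=10.0, dist_tol=0.05):
    """Greedily merge adjacent quasi-coplanar primitives until no pair qualifies.

    primitives: list of dicts {"normal": (3,), "offset": float, "ids": set of facet ids}
    adjacency:  set of frozenset({i, j}) pairs of adjacent primitive indices
    Two adjacent primitives merge when their normals deviate by less than
    angle_tol_deg and their plane offsets differ by less than dist_tol (meters).
    """
    cos_tol = np.cos(np.radians(angle_tol_deg))
    merged = True
    while merged:
        merged = False
        for pair in list(adjacency):
            i, j = tuple(pair)
            a, b = primitives[i], primitives[j]
            if a is None or b is None:
                adjacency.discard(pair)
                continue
            if a["normal"] @ b["normal"] > cos_tol and \
               abs(a["offset"] - b["offset"]) < dist_tol:
                # Merge j into i; in practice the plane would be refit here.
                a["ids"] |= b["ids"]
                primitives[j] = None
                adjacency.discard(pair)
                # Re-route j's remaining adjacencies to the merged primitive i.
                for other in list(adjacency):
                    if j in other:
                        k = next(iter(other - {j}))
                        adjacency.discard(other)
                        if k != i:
                            adjacency.add(frozenset({i, k}))
                merged = True
    return [p for p in primitives if p is not None]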

Fig. 9. Qualitative comparisons with shape approximation methods on RGBD (left) and LIDAR (right) scenes. With a similar number of facets (about 1,200), the simplified meshes returned by QEM, VSA and Structure preserve most of the large planar structures inside the indoor environment. However, these simplified models shrink at small structures because of the noise retained in the input raw meshes (see the cropped region). In contrast, our method produces more compact and structure-aware models in which most of these small but important structures are successfully reconstructed.
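
A QEM baseline at a comparable facet budget can be reproduced with off-the-shelf tooling. The snippet below is a minimal sketch using Open3D's quadric decimation; the file names are placeholders, and this is not the comparison code used in the paper:

import open3d as o3d

# Load the raw indoor mesh (placeholder path) and decimate it to roughly
# the facet budget used in the comparison (about 1,200 triangles).
raw = o3d.io.read_triangle_mesh("scene_raw.ply")
simplified = raw.simplify_quadric_decimation(target_number_of_triangles=1200)
simplified.remove_degenerate_triangles()
simplified.remove_duplicated_vertices()
print(f"{len(raw.triangles)} -> {len(simplified.triangles)} triangles")
o3d.io.write_triangle_mesh("scene_qem_1200.ply", simplified)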

Fig. 10. Quantitative comparisons with shape approximation methods on complete (left) and partial scenes (right). For the complete scene, Structure produces the model closest to the input raw mesh (see the colored points). In the case of large missing data, our method is robust enough to output a watertight mesh with the best geometric accuracy, whereas all three shape approximation methods are unable to repair the holes. Besides, our method takes only dozens of seconds to process a whole scene, which is an order of magnitude faster than Structure.
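
The geometric accuracy reported here is a point-to-model distance. One common way to approximate it, assuming the error is measured from the input points to a dense sampling of the reconstructed mesh (the paper's exact metric may differ), is sketched below:

import numpy as np
from scipy.spatial import cKDTree

def sample_mesh(vertices, faces, n=200_000, seed=0):
    """Uniformly sample points on a triangle mesh (area-weighted)."""
    rng = np.random.default_rng(seed)
    tri = vertices[faces]
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1.0                       # reflect samples into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tri[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

def accuracy(input_points, model_vertices, model_faces):
    """Mean and RMS distance from the input points to the reconstructed model."""
    samples = sample_mesh(model_vertices, model_faces)
    dists, _ = cKDTree(samples).query(input_points, k=1)
    return dists.mean(), np.sqrt((dists ** 2).mean())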

Fig. 12. Qualitative comparisons with the floorplan generation method FloorSP on LIDAR (rows 1–2) and RGBD (rows 3–5) data. For noisy and strongly non-Manhattan scenes, FloorSP generates non-manifold (row 3) and self-intersecting (rows 4 and 5) models. Besides, some walls are missed (row 1) or incorrectly aligned (rows 2, 3 and 5). In contrast, our method is robust enough to recover most of the wall structures, even for rooms with curved walls (rows 2 and 4).

Fig. 13. Quantitative comparisons against FloorSP on RGBD (top) and LIDAR (middle and bottom) data. Our method produces 3D models that are closer to the input wall points (see the colored points) than the 2.5D models of FloorSP, which are assembled from floorplan walls with a virtual thickness (10 cm). In particular, our method exhibits a lower error by recovering small structural details contained in the original mesh, such as two nearby walls.
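
The 2.5D baseline can be pictured as extruding each floorplan wall segment into a thin box with a fixed height and a 10 cm virtual thickness. A minimal sketch of such an extrusion (the wall height is an assumed value, not taken from the paper):

import numpy as np

def wall_segment_to_box(p0, p1, height=2.6, thickness=0.10):
    """Extrude a 2D floorplan wall segment into a thin 3D box.

    p0, p1: (x, y) endpoints of the wall on the floorplan.
    height: assumed typical wall height in meters (not a value from the paper).
    Returns (vertices (8, 3), faces (12, 3)) of a box with the given virtual thickness.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    n = np.array([-d[1], d[0]])
    n = n / np.linalg.norm(n) * (thickness / 2.0)
    corners2d = [p0 - n, p1 - n, p1 + n, p0 + n]          # bottom rectangle
    bottom = [np.append(c, 0.0) for c in corners2d]
    top = [np.append(c, height) for c in corners2d]
    vertices = np.array(bottom + top)
    # Two triangles per box face (indices into `vertices`); winding kept simple.
    quads = [(0, 1, 2, 3), (7, 6, 5, 4), (0, 4, 5, 1),
             (1, 5, 6, 2), (2, 6, 7, 3), (3, 7, 4, 0)]
    faces = np.array([t for a, b, c, q in quads for t in ((a, b, c), (a, c, q))])
    return vertices, faces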

Fig. 16. Ablation study. When the scene decomposition step is turned off (top row), all detected planes are treated as structure planes and used for structure-aware reconstruction, so the structure mesh contains both structure and non-structure parts. When the local primitive slicing strategy is turned off (middle row), every structure primitive is sliced everywhere inside the bounding box; this increases the computational time and the number of polyhedral cells exponentially and leads to a non-compact model with many protrusions. In contrast, with both ingredients turned on (bottom row), a compact and structure-aware model is reconstructed within an acceptable time. In addition, our scene mesh achieves the best geometric accuracy thanks to the separation of non-structure objects from structure parts.
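
The local slicing strategy can be pictured as restricting each structure plane to the polyhedral cells its support region actually touches, instead of cutting every cell in the global bounding box. A minimal sketch of such a cell-selection test (our own simplification, not the paper's exact criterion):

import numpy as np

def cells_to_slice(primitive_aabb, cell_aabbs, margin=0.05):
    """Select only the polyhedral cells whose bounding box overlaps the
    (slightly inflated) bounding box of a primitive's support region.

    primitive_aabb: (2, 3) array [min_corner, max_corner] of the primitive.
    cell_aabbs:     (N, 2, 3) array of cell bounding boxes.
    Returns indices of cells that the primitive is allowed to slice; all
    other cells are left untouched, keeping the cell complex small.
    """
    lo = primitive_aabb[0] - margin
    hi = primitive_aabb[1] + margin
    overlap = np.all(cell_aabbs[:, 0] <= hi, axis=1) & \
              np.all(cell_aabbs[:, 1] >= lo, axis=1)
    return np.nonzero(overlap)[0]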

Fig. 17. Performance on scalable scenes. Given the input raw mesh (top left), our pipeline generates two models with different levels of abstraction: a compact structure mesh ℳs (top right) and a detailed scene mesh ℳt (bottom left). The textured scene mesh is also shown (bottom right).


Acknowledgements

This work was supported in part by NSFC (U2001206), the GD Science and Technology Program (2020A0505100064, 2015A030312015), the GD Talent Program (2019JC05X328), the DEGP Key Project (2018KZDXM058), the Shenzhen Science and Technology Program (RCJC20200714114435012) and the Beike fund. The authors would like to thank Beike for providing various types of indoor scenes, Jiacheng Chen for the code and datasets, Liangliang Nan for the comparison tools, as well as Jing Zhao and Mofang Cheng for technical advice.


BibTeX

@article{P2M21,
  title={Structure-aware Indoor Scene Reconstruction via Two Levels of Abstraction},
  author={Hao Fang and Cihui Pan and Hui Huang},
  journal={ISPRS Journal of Photogrammetry and Remote Sensing},
  volume={178},
  pages={155--170},
  year={2021},
}
