Semantic point cloud segmentation is vital for 3D scene understanding, providing fundamental information for downstream applications such as augmented reality and mobile robotics. Recent deep learning-based point cloud analysis methods have made considerable progress in 3D semantic segmentation [11, 6, 10, 24]. However, 3D semantic segmentation requires point-level labels, which are much more expensive and time-consuming to obtain than the labels for 3D classification and detection tasks.
Recently, much effort has been devoted to weakly supervised semantic segmentation (WSSS) on 2D images, with remarkable results. However, although dense labeling of point clouds is even more expensive than dense image labeling, only a few works address 3D WSSS, and a large performance gap to state-of-the-art fully supervised methods remains.
The existing 3D WSSS methods formulate the problem in different ways. GPFN [26] utilizes dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. However, each 3D sample is projected to 2D in several views, and each projected 2D image needs pixel-level labeling; thus, this method still requires a large amount of manual labeling. MPRM [30] proposes to generate pseudo point-level labels using 3D class activation maps from subcloud-level annotations, which is similar to the 2D WSSS methods that use image-level labels. Xu and Lee [33] directly train a point cloud segmentation network with 10× fewer labels, which is close to point supervision [3] and scribble supervision [17, 25] in 2D WSSS. We adopt the 3D WSSS setting of [33], where only a small fraction of the points are labeled.
In this work, we aim to better utilize the limited annotations by densely propagating the supervision signals from labeled points to unlabeled points. Concretely, we first adopt the state-of-the-art point cloud segmentation network KPConv as our basic model, which can be regarded as a normal point cloud segmentation network trained with partial labels. To address the problems stated above, we propose a two-stage training strategy built on our cross-sample feature reallocating (CSFR) module and intra-sample feature reallocating (ISFR) module, respectively.
In the first training stage, inspired by [22, 18, 9], we choose two samples with at least one common class as an input pair. A cross-sample feature reallocating module is trained to transfer similar features across the two samples. However, unlike [22, 18, 9], which directly add the warped features from the other sample to find more activated regions, we reconstruct the features from the other sample using the point correlation between the data pair instead of adding them to the current feature. Each point in the reconstructed feature can be considered a weighted sum of all points from the other sample. Then, we use a cross regularization loss to softly impose the weak supervision on the reallocated features. Therefore, we can re-route the gradients to densely propagate supervision from the labeled points in one sample to the unlabeled points in the other sample.

In the second training stage, we propagate the supervision from labeled points to unlabeled points within each sample with our intra-sample feature reallocating module. Similarly, we reallocate the features based on point correlation, so supervision can be densely propagated from labeled points to unlabeled points within the sample. Since there may exist uncommon classes between input pairs, which could introduce noise during supervision propagation, we first train the network with the CSFR module to learn general representations, and then train the network with the ISFR module as fine-tuning in the second stage. Since both modules are based on point correlations, supervision signals can be propagated to the unlabeled points whose features are similar to those of the labeled points. Note that these two modules are only used to help the training of the basic network: they implicitly guide the training of the basic segmentation network and are discarded at test time, so we do not need them to generate the final segmentation predictions during inference.
In summary, our contributions are:
We propose a cross-sample feature reallocating module to reconstruct features and re-route gradients across the input pair based on point correlation. Hence, the supervision signals from labeled points can be propagated to unlabeled points across samples.
We propose an intra-sample feature reallocating module to reallocate features within each sample based on point correlation. Then, the supervision signals can be propagated from labeled points to unlabeled points within each point cloud sample.
We propose a two-stage training strategy so that the cross-sample feature reallocating module and the intra-sample feature reallocating module can both contribute to the performance without interfering with each other.
Our weakly supervised method, with only 10% and 1% of the points labeled, produces results comparable to its fully supervised counterpart on the S3DIS and ScanNet datasets.
2 Related Works
2.1 Semantic Segmentation on 3D Point clouds
There are mainly three categories of 3D semantic segmentation methods: projection-based, voxel-based, and point-based methods. Multi-view projection-based methods [15, 34, 8] project the 3D data into 2D from multiple viewpoints, so the projected data can easily be processed by 2D convolutional networks. However, these methods suffer from occlusion, viewpoint selection, misalignment, and other defects that may limit performance. Voxel-based methods like Submanifold Sparse Convolution [10], MinkowskiNet [6], and OccuSeg [11] first quantize the point cloud data into voxels and perform sparse 3D convolution on the voxels. These methods often achieve good segmentation performance but suffer severely from heavy memory and time consumption. Point-based methods directly take raw point cloud data as network input. PointNet [19] and PointNet++ [20] process point cloud data with stacked MLPs. DGCNN [28] proposes EdgeConv to capture local geometry by dynamically generating graphs over points and their neighbors. PointCNN [16] and PointConv [32] formulate convolution operations in 3D using the k nearest neighbors of each point. KPConv [24] uses filters with kernel points in Euclidean space, formulating the convolution with the distances of the points within the filter.
2.2 Weakly supervised 2D semantic segmentation
Most 2D WSSS methods use image-level labels. Based on the class activation map (CAM), many methods [29, 31, 13, 21, 1, 14, 22, 18, 9] refine the CAM generated from a classification network to produce pseudo pixel-level labels, and segmentation networks are then trained on these pseudo labels. Besides image-level labels, there are other kinds of weak labels, such as point supervision [3] and scribble supervision [17, 25], which are similar to the weak setting in this work. Point- and scribble-supervision methods usually constrain the unlabeled points by enforcing label consistency with local smoothness.
2.3 Weakly supervised 3D semantic segmentation
Existing 3D WSSS methods utilize different kinds of weak supervision. GPFN [26] utilizes dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. MPRM [30] proposes to generate pseudo point-level labels using 3D class activation maps from subcloud-level annotations. Xu and Lee [33] directly train a point cloud segmentation network with 10× fewer labels.
3 Method

As illustrated in Fig. 2, we propose a two-stage training strategy. In the first training stage, we train the basic segmentation network with the cross-sample feature reallocating module; in the second stage, we train the basic segmentation network with the intra-sample feature reallocating module. We explain each module in detail below.
3.1 Basic segmentation network
The basic component of our method is a point cloud segmentation network. We take the state-of-the-art deep learning architecture Kernel Point Convolution (KPConv) segmentation network as our backbone. We denote each input point cloud sample as $X \in \mathbb{R}^{N \times D_{in}}$, where $N$ is the number of input points and $D_{in}$ is the input feature dimension. After feeding $X$ to the encoder, we get the encoded feature map $F \in \mathbb{R}^{N' \times D}$, where $N'$ is the number of down-sampled points for the sample and $D$ is the feature dimension. The encoded feature is then fed into the decoder to get the final prediction $P \in \mathbb{R}^{N \times C}$, where $C$ is the number of classes. The one-hot label of the sample is denoted as $Y \in \{0, 1\}^{N \times C}$. However, as we only have labels for a small fraction of the points, we define a binary mask $M \in \{0, 1\}^{N}$ to indicate whether a point is labeled: the mask is 1 for labeled points and 0 for unlabeled points. Then, a weakly supervised softmax cross-entropy loss is calculated on the labeled points as:

$$\mathcal{L}_{seg} = -\frac{1}{N_l} \sum_{i=1}^{N} M_i \sum_{c=1}^{C} Y_{ic} \log P_{ic},$$

where $N_l = \sum_{i} M_i$ is a normalization factor.
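For clarity, this partial-label loss can be sketched in a few lines of numpy; the function name and implementation below are illustrative, not the paper's code:

```python
import numpy as np

def weak_seg_loss(logits, labels, mask):
    """Softmax cross-entropy averaged over the labeled points only.

    logits: (N, C) raw per-point network outputs.
    labels: (N,)   integer class ids (values at unlabeled points are ignored).
    mask:   (N,)   1 for labeled points, 0 for unlabeled points.
    """
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the annotated class at every point.
    nll = -log_probs[np.arange(len(labels)), labels]
    # Zero out unlabeled points and normalize by the number of labeled ones.
    return (mask * nll).sum() / max(mask.sum(), 1)
```

Because unlabeled points are multiplied by zero, their (arbitrary) label entries contribute nothing to the loss or its gradient.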
3.2 Cross-sample feature reallocating
We take two point cloud samples $X^1$ and $X^2$ with at least one common class as an input pair. We feed the input pair into the KPConv encoder of the basic network and obtain the feature maps $F^1 \in \mathbb{R}^{N_1 \times D}$ and $F^2 \in \mathbb{R}^{N_2 \times D}$.
The feature reallocating is achieved by warping point features using the point correlation between the input samples. We first calculate a cross affinity matrix between the two feature maps. With a learnable matrix $W \in \mathbb{R}^{D \times D}$, the affinity matrix is derived as:

$$A = F^1 W (F^2)^{\top} \in \mathbb{R}^{N_1 \times N_2}.$$

The cross affinity matrix describes the point-wise similarity between all pairs of points in the two samples. We then normalize the cross affinity matrix row-wise and column-wise with softmax to guide the feature reallocating for each point:

$$\hat{A}^{1} = \mathrm{softmax}_{\mathrm{row}}(A), \qquad \hat{A}^{2} = \mathrm{softmax}_{\mathrm{col}}(A).$$

Then, we use the normalized affinities to guide the feature reallocating:

$$\tilde{F}^1 = \hat{A}^{1} F^2, \qquad \tilde{F}^2 = (\hat{A}^{2})^{\top} F^1.$$
The warped feature maps $\tilde{F}^1$ and $\tilde{F}^2$ have the same size and spatial shape as the original feature maps $F^1$ and $F^2$. However, all the features in $\tilde{F}^1$ are collected from sample $X^2$: each point feature in $\tilde{F}^1$ can be seen as a weighted sum of all point features from $F^2$, which means we construct a new feature map in the shape of $F^1$ using the features from $X^2$, and vice versa.
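The reallocation step above can be sketched in numpy, assuming a bilinear affinity with one learnable matrix and softmax normalization (function names and the helper are illustrative):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_reallocate(F1, F2, W):
    """Reallocate point features across a sample pair via a bilinear affinity.

    F1: (N1, D) and F2: (N2, D) encoded point features; W: (D, D) learnable.
    Returns F1_hat (N1, D), built entirely from F2, and F2_hat (N2, D) from F1.
    """
    A = F1 @ W @ F2.T            # (N1, N2) cross affinity matrix
    A_row = softmax(A, axis=1)   # row-normalized: weights over points of sample 2
    A_col = softmax(A, axis=0)   # column-normalized: weights over points of sample 1
    F1_hat = A_row @ F2          # each point of sample 1 = weighted sum of F2 points
    F2_hat = A_col.T @ F1        # each point of sample 2 = weighted sum of F1 points
    return F1_hat, F2_hat
```

Since each output row is a convex combination of the other sample's features, the warped maps stay inside the range of the source features, which is what lets the decoder treat them like ordinary encoded features.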
Unlike previous methods [22, 18, 9], which exploit common semantics from other samples and directly discover more object regions by generating pseudo labels, we feed the reallocated and original feature maps into decoders with shared weights. For the sample-$X^1$ side, we feed the original feature map $F^1$ and the reallocated feature map $\tilde{F}^1$ built from sample $X^2$ into the decoder network and obtain the outputs $P^1$ and $\tilde{P}^1$.
Here we have the weak labels for sample $X^1$, and each point in $\tilde{P}^1$ can be seen as a weighted sum of all the points from sample $X^2$. It is therefore possible to densely propagate the supervision from the labeled points in sample $X^1$ to the unlabeled points in sample $X^2$, as the gradients are re-routed through the feature reallocating module. However, there may exist uncommon classes between the input samples, so directly applying a weak segmentation loss to the reallocated-branch output may introduce noise into the training process. Thus, we do not calculate a segmentation loss for the reallocated branch. Instead, inspired by [23, 4, 5], we first calculate the weak segmentation loss of Equation 1 on the output of the original branch and propose a cross regularization (CR) loss that maximizes the agreement between $P^1$ and $\tilde{P}^1$. Therefore, we can softly propagate the supervision signal from the labeled points in sample $X^1$ to the unlabeled points in sample $X^2$, and vice versa. In this way, each labeled point transfers its supervision signal to the points with similar features in the other sample.
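The exact form of the agreement term is given by the paper's CR equation; as an illustrative stand-in, a mean-squared difference between the two branch outputs captures the idea (the actual loss may use a different distance):

```python
import numpy as np

def cross_regularization(P, P_warp):
    """Agreement loss between the original-branch prediction P and the
    reallocated-branch prediction P_warp, both (N, C) class probabilities.
    The mean-squared form here is an assumed stand-in for the paper's CR loss.
    """
    return ((P - P_warp) ** 2).mean()
```

During training, gradients of this term flow through `P_warp`, i.e. through the affinity matrix and the other sample's encoder features, which is the re-routing mechanism described above.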
3.3 Intra-sample feature reallocating
In this section, we propose the intra-sample feature reallocating (ISFR) module to further propagate supervision from labeled points to unlabeled points within each sample, and to fine-tune the network against the possible noise introduced by the CSFR module. For each input sample $X$, we obtain the feature map $F$ from the same encoder network. Similar to the previous module, we calculate a self affinity matrix for the feature:

$$S = F W_s F^{\top},$$

where $W_s \in \mathbb{R}^{D \times D}$ is a learnable matrix. The self affinity matrix describes the point-wise correlation between all points within the input sample. We then normalize the self affinity matrix row-wise with softmax to guide the feature reallocating for each point:

$$\hat{S} = \mathrm{softmax}_{\mathrm{row}}(S).$$

Then, we use the normalized affinity to guide the feature warping:

$$\tilde{F} = \hat{S} F.$$
Unlike classical self-attention [27], we remove the residual connection to retain the same activation intensity. Here, each point in $\tilde{F}$ can be considered a weighted sum of all the points in the sample, so the features are reallocated with respect to the point correlation. We separately feed the reallocated feature $\tilde{F}$ and the original feature $F$ to the decoder; the decoder outputs of the two branches are thus $P$ and $\tilde{P}$.
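The residual-free warping described above can be sketched under the same bilinear-affinity assumption as before (names are illustrative):

```python
import numpy as np

def self_reallocate(F, Ws):
    """Intra-sample feature reallocation: a self-attention-style warping
    with no residual connection, so each output is a weighted average of
    the input features rather than input + attention.

    F: (N, D) point features; Ws: (D, D) learnable matrix.
    """
    S = F @ Ws @ F.T                            # (N, N) self affinity
    S = np.exp(S - S.max(axis=1, keepdims=True))
    S /= S.sum(axis=1, keepdims=True)           # row-wise softmax normalization
    return S @ F                                # note: no "F +" residual term
```

Dropping the residual keeps the warped activations in the same range as the originals, so both branches can share one decoder.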
Similar to the cross regularization loss, we use a self regularization (SR) loss on the two outputs to regularize the reallocated-branch output $\tilde{P}$ against the original output $P$.
Since the features are reallocated within each sample, the module does not introduce supervision from absent classes as CSFR might, so we apply another weak segmentation loss to the self-reallocating branch:

$$\mathcal{L}_{seg}^{intra} = -\frac{1}{N_l} \sum_{i=1}^{N} M_i \sum_{c=1}^{C} Y_{ic} \log \tilde{P}_{ic},$$

where $N_l = \sum_{i} M_i$ is a normalization factor. Therefore, the supervision signals on the labeled points can be transferred to the unlabeled points with similar features.
3.4 Two-stage training

As shown in Fig. 2, we use a two-stage training strategy to avoid interference between the two modules during training. In stage one, we train the basic segmentation network with the cross-sample feature reallocating module. In this stage, for each sample, the network learns from the weak labels of that sample and from the supervision propagated from the weak labels of the other sample. The overall learning objective in stage one is the sum of the weak segmentation loss and the cross regularization loss:

$$\mathcal{L}_{stage1} = \mathcal{L}_{seg} + \mathcal{L}_{CR}.$$
In the second stage, we train the basic segmentation network with the intra-sample feature reallocating module, which propagates supervision from labeled points to unlabeled points within each sample. The overall training objective in stage two is the sum of the weak segmentation losses on both branches and the self regularization loss:

$$\mathcal{L}_{stage2} = \mathcal{L}_{seg} + \mathcal{L}_{seg}^{intra} + \mathcal{L}_{SR}.$$
We argue that joint training of the two modules would hamper the performance of the basic segmentation network: the cross regularization loss in the CSFR module and the self regularization loss in the ISFR module may interfere with each other during training, since the CSFR module may bring wrong supervision from uncommon classes to the other sample. The two-stage training avoids this issue and improves performance by taking advantage of both modules. We discuss this further through experiments in Section 4.3.
The two modules implicitly guide the optimization of the basic network only at training time. At inference, we simply discard the two modules and use the basic segmentation network as a normal point cloud segmentation network to obtain the segmentation predictions. Therefore, no extra memory or computation is introduced at test time.
Table 1: One-stage vs. two-stage training with 10% and 1% labels.
Dataset: We conduct our experiments on the popular public dataset S3DIS [2]. S3DIS covers six areas of entire floors from three different buildings, with a total of 215 million points covering over 6000 m². The dataset is annotated with 13 classes. We follow the common practice of using Area 5 as the test scene to measure generalization ability. We also perform experiments on ScanNet [7], which contains 1513 scans annotated with 20 classes.
Weak labels: We follow [33] to annotate only 10% of the points. We first sample 4% of the points from the original data as the network inputs. Then, we randomly label 10% of the points in each class of the sampled input point clouds. The final predictions are back-projected to the original point clouds. Therefore, only 10% of the network-input training data, and only 0.4% of the original point cloud data, are labeled. We also perform experiments with fewer labels, where only 1% of the input points are labeled.
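The per-class random labeling scheme can be sketched as follows; the helper and its uniform-sampling behavior are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def weak_label_mask(labels, ratio=0.10, rng=None):
    """Randomly mark `ratio` of the points of every class as labeled.

    labels: (N,) integer class ids of the (sub-sampled) input cloud.
    Returns a binary float mask with ~ratio of each class set to 1.
    """
    rng = rng or np.random.default_rng(0)
    mask = np.zeros(len(labels), dtype=np.float32)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        # Keep at least one labeled point per class present in the sample.
        k = max(1, int(round(ratio * len(idx))))
        mask[rng.choice(idx, size=k, replace=False)] = 1.0
    return mask
```

Sampling per class (rather than globally) guarantees that rare classes are not left entirely unlabeled.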
4.2 Implementation details
We use the KPConv segmentation model KPFCNN as our backbone network. The network is an encoder-decoder fully convolutional network with skip connections. The encoder is composed of bottleneck ResNet blocks [12] with KP convolution layers, and the decoder is composed of nearest-upsampling layers with unary convolution layers. We place the CSFR and ISFR modules after the first upsampling layer for larger spatial resolution. Due to limited computational resources, we use ball query to sample point clouds as input samples, with the sample radius set to 2 m. We use a momentum SGD optimizer with the initial learning rate set to 0.01 and momentum set to 0.98. We train the first stage for 600 epochs and the second stage for another 600 epochs.
Table 3: Segmentation performance of the basic, cross, and intra branches under different settings.
4.3 Ablation studies
Two-stage training versus joint training: Table 1 compares one-stage and two-stage training with 10% and 1% labels. For one-stage training, we perform experiments with only the CSFR module or only the ISFR module; each module produces a performance gain over the baseline in both the 10% and 1% label cases. However, when we jointly train the CSFR and ISFR modules in one stage, we observe a performance drop, with results lower than training either module alone and even lower than the baseline in both cases. From these experiments, we argue that the training losses of the two modules interfere with each other during optimization. Since the CR loss and the SR loss both try to pull the segmentation output closer to the outputs of their respective branches, the overall training objective may deviate from our real target of better segmentation performance. With the losses from both modules optimized together, the model may fail to find the global optimum and be pushed in the wrong direction. Therefore, by separating the training into two stages, we avoid the interference between the modules and take advantage of both.
Training orders: We also compare the training orders of the two modules in Table 1. ISFR-CSFR means we train the network with the ISFR module in the first stage and the CSFR module in the second stage; CSFR-ISFR means the opposite order. We observe that CSFR-ISFR outperforms ISFR-CSFR under both 10% and 1% supervision. We argue that the CSFR module helps the network learn more generalized coarse representations across samples, while the ISFR module acts like a fine-tuning module that refines the learned representations and imposes constraints on unlabeled points within each sample. Therefore, with the opposite training order we get worse results than with either single module alone.
Effects of different losses: We evaluate the effectiveness of each loss within each module in Table 2, comparing different combinations of the losses for single-stage training with one module each. For the CSFR module, as shown in Table 2(a), we suppose $\mathcal{L}_{seg}^{cross}$ is a segmentation loss appended to the decoder of the cross-propagated branch, which we did not include in our architecture. We observe that training with $\mathcal{L}_{seg} + \mathcal{L}_{CR}$ reaches the best performance for the CSFR module under both label settings, which means the cross regularization loss helps segmentation performance. We also observe that training with $\mathcal{L}_{seg} + \mathcal{L}_{CR} + \mathcal{L}_{seg}^{cross}$ produces an even lower score than solely using $\mathcal{L}_{seg}$, which means adding another segmentation loss on the cross-propagated decoder branch harms the performance. This supports our statement in Section 3.2 that additional supervision on the propagated branch introduces noise into the training process due to the inherent differences between the two samples.
Table 2(b) shows that for the ISFR branch, training with all three losses ($\mathcal{L}_{seg} + \mathcal{L}_{SR} + \mathcal{L}_{seg}^{intra}$) produces better results than omitting the intra-branch segmentation loss and than solely using the segmentation loss on the original branch. This result supports our statement in Section 3.3 that the ISFR module imposes constraints on unlabeled points within the sample. Since the feature propagation happens inside each sample, no noise is introduced into training; thus, calculating a segmentation loss on the propagated-feature branch improves the training results.
Table 4: Comparison on S3DIS Area-5. GPFN (2D dense label supervision) reaches 50.8 mIoU; its per-class IoUs are not reported.
Performance of different branches: Table 3 compares the segmentation performance of the different decoder branches under the two-stage settings. In both settings, the cross branch produces the poorest segmentation results, since the features of this branch are propagated from the other sample. However, the cross branch can still produce reasonable scores, as the network has learned to propagate features only from the same category for each point. The intra branch produces better results than the cross branch, but still lower than the basic branch. This supports our argument that the ISFR module imposes more constraints on unlabeled points, as the features of unlabeled points can be propagated to labeled points of the same category. During inference, however, the network has already learned the representations, so propagating the features does not help the predictions. This is also a difference between our ISFR module and the self-attention module used in many vision applications.
We also compare the results between the two settings. The outputs from all three branches in CSFR-ISFR perform better than those in ISFR-CSFR, which also supports our argument in Section 3.4. From the results, if the ISFR module is trained first, the network can overfit within samples and fail to generalize across samples; thus, the cross branch performs worse in ISFR-CSFR. Then, with the CSFR module trained in the second stage, the information from other samples may act as noise. Therefore, the performance of the basic branch is even lower than when training each module individually, as shown in Table 1.
4.4 Qualitative results
Fig. 4(a) shows the affinity learned by CSFR between input pairs, and Fig. 4(b) shows the affinity learned by ISFR within each sample. We show the affinity map on the left for the selected point on the right. The point clouds are sparse since we perform CSFR and ISFR on down-sampled features. Both modules have clearly learned to propagate features from the same class.
We provide visualizations of our final segmentation predictions in Fig. 5, comparing our results with the ground truth and with the predictions of the fully supervised counterpart. Our method performs even better in label consistency and on hard examples such as the fourth row.
4.5 Segmentation results
In Table 4, we compare our proposed method with fully supervised state-of-the-art methods, other weakly supervised methods, and our baseline method.
Comparison with the baseline method: The baseline is the basic segmentation network trained only with the weak labels. Our method is the full model with the cross- and intra-sample feature reallocating modules, trained with the two-stage strategy. The experiments show that our proposed method improves over the baseline with both 10% and 1% labels.
Comparison with existing 3D WSSS methods: We compare our proposed method with existing 3D WSSS methods [26, 33]. GPFN [26] utilizes 2D dense labels on 2D projections of the 3D point clouds, and [33] uses the same weak supervision setting as ours by annotating 10% of the points. [33] can also produce results close to its fully supervised counterpart, but its overall performance still leaves a large gap to current state-of-the-art methods. Our proposed method surpasses both of them by a large margin.
Comparison with fully supervised methods: We compare our weakly supervised method with fully supervised state-of-the-art methods [19, 20, 28, 16, 6, 24] on S3DIS Area-5. We also compare our method under weak supervision against its fully supervised version. With 10% labels, our method even produces slightly higher results (by 0.1%) than training under full supervision. We believe that supervision propagation is less useful under full supervision, since each point can directly obtain supervision from its own label, but the network can still benefit from the self-attention-like architecture. Therefore, our 10%-label results are similar to those of the fully supervised counterpart.
Results on ScanNet: Table 5 shows the segmentation results in mIoU (%) on the ScanNet validation set. Our method outperforms the previous method MPRM [30] by a large margin. However, with 10% of the points labeled, the baseline results are already very close to the fully supervised results; we argue that ScanNet is a very large-scale dataset, so 10% of the labels already provide strong supervision. Our method is effective with only 1% labeled points, where we outperform the baseline by 1.9% mIoU. In practice, it is not easy to collect a large-scale dataset like ScanNet, so we believe our method is meaningful for real-world applications.
Table 5: Segmentation results (mIoU, %) on the ScanNet validation set.

| method | supervision | mIoU (%) |
| --- | --- | --- |
| KPConv-basic | 1% label | 68.1 |
| DSP (Ours) | 1% label | 70.0 |
| KPConv-basic | 10% label | 71.1 |
| DSP (Ours) | 10% label | 71.6 |
In this paper, we proposed a weakly supervised point cloud segmentation method with only 10% or 1% of the points labeled. We developed cross- and intra-sample feature reallocating modules to densely propagate supervision from labeled points to unlabeled points. With 10% and 1% of the points labeled, our method produces results comparable to its fully supervised counterpart on the S3DIS and ScanNet datasets.
References
-  Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In , pages 4981–4990, 2018.
-  Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
-  Amy Bearman, Olga Russakovsky, Vittorio Ferrari, and Li Fei-Fei. What’s the point: Semantic segmentation with point supervision. In European conference on computer vision, pages 549–565. Springer, 2016.
-  Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
-  Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. Big self-supervised models are strong semi-supervised learners. arXiv preprint arXiv:2006.10029, 2020.
-  Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3075–3084, 2019.
-  Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5828–5839, 2017.
-  Angela Dai and Matthias Nießner. 3dmv: Joint 3d-multi-view prediction for 3d semantic scene segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 452–468, 2018.
-  Junsong Fan, Zhaoxiang Zhang, Tieniu Tan, Chunfeng Song, and Jun Xiao. Cian: Cross-image affinity net for weakly supervised semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 10762–10769, 2020.
-  Benjamin Graham, Martin Engelcke, and Laurens Van Der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9224–9232, 2018.
-  Lei Han, Tian Zheng, Lan Xu, and Lu Fang. Occuseg: Occupancy-aware 3d instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2940–2949, 2020.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
-  Zilong Huang, Xinggang Wang, Jiasi Wang, Wenyu Liu, and Jingdong Wang. Weakly-supervised semantic segmentation network with deep seeded region growing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7014–7023, 2018.
-  Alexander Kolesnikov and Christoph H Lampert. Seed, expand and constrain: Three principles for weakly-supervised image segmentation. In European conference on computer vision, pages 695–711. Springer, 2016.
-  Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, and Caroline Pantofaru. Virtual multi-view fusion for 3d semantic segmentation. arXiv preprint arXiv:2007.13138, 2020.
-  Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. In Advances in neural information processing systems, pages 820–830, 2018.
-  Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, and Jian Sun. Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3159–3167, 2016.
-  Weide Liu, Chi Zhang, Guosheng Lin, Tzu-Yi HUNG, and Chunyan Miao. Weakly supervised segmentation with maximum bipartite graph matching. In Proceedings of the 28th ACM International Conference on Multimedia, pages 2085–2094, 2020.
-  Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652–660, 2017.
-  Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems, pages 5099–5108, 2017.
-  Tong Shen, Guosheng Lin, Chunhua Shen, and Ian Reid. Bootstrapping the performance of webly supervised semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1363–1371, 2018.
-  Guolei Sun, Wenguan Wang, Jifeng Dai, and Luc Van Gool. Mining cross-image semantics for weakly supervised semantic segmentation. In ECCV, 2020.
-  Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. arXiv preprint arXiv:1703.01780, 2017.
-  Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J Guibas. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6411–6420, 2019.
-  Bin Wang, Guojun Qi, Sheng Tang, Tianzhu Zhang, Yunchao Wei, Linghui Li, and Yongdong Zhang. Boundary perception guidance: A scribble-supervised semantic segmentation approach. In IJCAI, pages 3663–3669, 2019.
-  Haiyan Wang, Xuejian Rong, Liang Yang, Jinglun Feng, Jizhong Xiao, and Yingli Tian. Weakly supervised semantic segmentation in 3d graph-structured point clouds of wild scenes. arXiv preprint arXiv:2004.12498, 2020.
-  Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7794–7803, 2018.
-  Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5):1–12, 2019.
-  Yude Wang, Jie Zhang, Meina Kan, Shiguang Shan, and Xilin Chen. Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12275–12284, 2020.
-  Jiacheng Wei, Guosheng Lin, Kim-Hui Yap, Tzu-Yi Hung, and Lihua Xie. Multi-path region mining for weakly supervised 3d semantic segmentation on point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4384–4393, 2020.
-  Yunchao Wei, Huaxin Xiao, Honghui Shi, Zequn Jie, Jiashi Feng, and Thomas S Huang. Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7268–7277, 2018.
-  Wenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9621–9630, 2019.
-  Xun Xu and Gim Hee Lee. Weakly supervised semantic point cloud segmentation: Towards 10x fewer labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13706–13715, 2020.
-  Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, and Kai Xu. Fusion-aware point convolution for online semantic 3d scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4534–4543, 2020.
-  Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2921–2929, 2016.