Please use this identifier to cite or link to this item: http://repositorio.unb.br/handle/10482/46526
Files in This Item:
There are no files associated with this item.
Title: Rethinking panoptic segmentation in remote sensing: a hybrid approach using semantic segmentation and non-learning methods
Authors: Carvalho, Osmar Luiz Ferreira de
Carvalho Júnior, Osmar Abílio de
Albuquerque, Anesmar Olino de
Santana, Níckolas Castro
Borges, Díbio Leandro
ORCID: https://orcid.org/0000-0002-5619-8525
https://orcid.org/0000-0002-0346-1684
https://orcid.org/0000-0003-1561-7583
https://orcid.org/0000-0001-6133-6753
https://orcid.org/0000-0002-4868-0629
Affiliation: University of Brasilia, Department of Computer Science
University of Brasilia, Department of Geography
University of Brasilia, Department of Geography
University of Brasilia, Department of Geography
University of Brasilia, Department of Computer Science
Subject: Remote sensing
Semantic segmentation
Deep learning
Image segmentation
Panoptic segmentation
Issue Date: 3-May-2022
Publisher: IEEE
Citation: CARVALHO, Osmar L. F. de et al. Rethinking panoptic segmentation in remote sensing: a hybrid approach using semantic segmentation and non-learning methods. IEEE Geoscience and Remote Sensing Letters, [S.l.], v. 19, art. n. 3512105, p. 1-5, 2022. DOI: 10.1109/LGRS.2022.3172207. Available at: https://ieeexplore.ieee.org/document/9766343.
Abstract: This letter proposes a novel method to obtain panoptic predictions by extending the semantic segmentation task with a few non-learning image processing steps, presenting the following benefits: 1) annotations do not require a specific format [e.g., common objects in context (COCO)]; 2) fewer parameters (e.g., a single loss function and no need for object detection parameters); and 3) a more straightforward sliding-windows implementation for large image classification (still unexplored for panoptic segmentation). Semantic segmentation models do not individualize touching objects, as their predictions can merge; i.e., a single polygon represents many targets. Our method overcomes this problem by isolating the objects using borders on the polygons that may merge. The data preparation requires generating a one-pixel border, and for unique object identification, we create a list with the isolated polygons, attribute a different value to each one, and use the expanding border (EB) algorithm for those with borders. Although any semantic segmentation model applies, we used the U-Net with three backbones (EfficientNet-B5, EfficientNet-B3, and EfficientNet-B0). The results show that the following hold: 1) the EfficientNet-B5 had the best results with 70% mean intersection over union (mIoU); 2) the EB algorithm presented better results for better models; 3) the panoptic metrics show a high capability of identifying things and stuff with 65 panoptic quality (PQ); and 4) the sliding windows on a 2560×2560-pixel area showed promising results, in which the ratio of merged objects to correct predictions was lower than 1% for all classes.
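
Note on the post-processing idea (illustrative only): the abstract does not specify the exact expanding border (EB) implementation, so the Python sketch below merely approximates the described pipeline — label the isolated polygons left by the one-pixel borders, then grow each label back over the adjacent border pixels using a nearest-seed expansion. The function and parameter names (panoptic_from_semantic, thing_class_id, border_value) and the choice of border value are assumptions for illustration, not the authors' code.

    import numpy as np
    from scipy import ndimage

    def panoptic_from_semantic(pred_mask, thing_class_id, border_value=0):
        """Illustrative sketch: separate touching instances of one 'thing' class.

        Assumes pred_mask is a 2-D array of class IDs in which a one-pixel
        border (border_value, an assumed convention) separates touching objects.
        """
        # 1. Keep only pixels of the target class; border pixels are excluded,
        #    so touching objects appear as isolated polygons.
        thing_pixels = pred_mask == thing_class_id

        # 2. Give each isolated polygon a unique integer ID (connected components).
        instance_ids, n_instances = ndimage.label(thing_pixels)

        # 3. Approximate the "expanding border" step with a nearest-seed grow:
        #    every unlabeled pixel takes the ID of the closest labeled polygon.
        _, nearest = ndimage.distance_transform_edt(instance_ids == 0,
                                                    return_indices=True)
        expanded = instance_ids[nearest[0], nearest[1]]

        # Only fill inside the object-plus-border area; everything else stays 0.
        keep = thing_pixels | (pred_mask == border_value)
        expanded[~keep] = 0
        return expanded, n_instances

    # Minimal usage example on a toy 5x5 prediction (1 = object, 0 = border/background).
    toy = np.array([[1, 1, 0, 1, 1],
                    [1, 1, 0, 1, 1],
                    [0, 0, 0, 0, 0],
                    [1, 1, 0, 1, 1],
                    [1, 1, 0, 1, 1]])
    labels, count = panoptic_from_semantic(toy, thing_class_id=1)
    # count == 4: four isolated polygons, each expanded over the shared borders.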
Unit: Instituto de Ciências Exatas (IE)
Departamento de Ciência da Computação (IE CIC)
Instituto de Ciências Humanas (ICH)
Departamento de Geografia (ICH GEA)
Publisher version: https://ieeexplore.ieee.org/document/9766343
Appears in Collections: Artigos publicados em periódicos e afins




Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.