Volumetric Weak Supervision for Semantic Segmentation
Semantic segmentation is a popular task in computer vision. Fully supervised methods are data hungry: they require pixel-precise annotations for thousands of images. To reduce user annotation effort, weak supervision for semantic segmentation is becoming an area of increasing interest. Weak supervision can take many forms: bounding boxes, scribbles, or image-level labels. Image-level supervision is the least annotation demanding, as the user is asked only to name the object classes present in the image. In this thesis, we propose a new type of weak supervision that generalizes image-level supervision: volumetric supervision. In addition to naming the object classes present in the image, the user also provides the approximate size of each object class present in the image. This type of annotation is still very undemanding on the user's time. To incorporate volumetric information into weakly supervised segmentation, we develop two volumetric loss functions that penalize deviation from the object size annotated by the user. Almost any semantic segmentation method with image-level weak supervision can be transformed into a segmentation method with volumetric supervision using these volumetric loss functions. To show the usefulness of volumetric supervision, we choose four popular methods for image-level weak supervision and transform them into volumetric supervision methods. For evaluation, we create a simulated dataset that contains size information for the object classes. We also test the sensitivity of our approach to possible mistakes in the size information. Our experimental evaluation shows that volumetric supervision gives a significant improvement over image-level supervision; however, it is sensitive to mistakes in the size information provided by the user.
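The abstract does not spell out the exact form of the two volumetric loss functions; a minimal sketch of one plausible instance is shown below. It assumes a soft per-class probability map and penalizes the squared deviation of each class's predicted area fraction from the user-annotated fraction. All names (`volumetric_loss`, `target_fractions`) are hypothetical and chosen for illustration only; the loss functions developed in the thesis may differ.

```python
import numpy as np

def volumetric_loss(probs, target_fractions):
    """Sketch of a size-based penalty for weakly supervised segmentation.

    probs            : (C, H, W) array of per-class soft probabilities,
                       e.g. the softmax output of a segmentation network.
    target_fractions : dict mapping class index -> annotated fraction of
                       the image area that class should occupy.

    Returns the sum of squared deviations between predicted and
    annotated area fractions (a hypothetical formulation, not the
    thesis's exact loss).
    """
    h, w = probs.shape[1:]
    loss = 0.0
    for c, frac in target_fractions.items():
        # Predicted area fraction: mean probability mass of class c.
        predicted_frac = probs[c].sum() / (h * w)
        loss += (predicted_frac - frac) ** 2
    return loss

# Toy example: class 0 occupies the left half of a 4x4 image.
probs = np.zeros((2, 4, 4))
probs[0, :, :2] = 1.0
probs[1] = 1.0 - probs[0]
```

When the annotated fractions match the prediction (0.5 each here), the penalty is zero; it grows quadratically as the predicted size drifts, which is one simple way to "penalize deviation from the object size annotated by the user" as the abstract describes.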
Cite this version of the work
Sharhad Bashar (2022). Volumetric Weak Supervision for Semantic Segmentation. UWSpace. http://hdl.handle.net/10012/18321