Scene Parsing With Global Context Embedding
ICCV 2017 | We present a scene-parsing method that exploits global context information based on both parametric and non-parametric models. Unlike previous methods, which exploit only local relationships between objects, we train a context network based on scene similarities to generate feature representations for global contexts. We show that the proposed method can eliminate false positives that are incompatible with the global context representations.
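The core idea of conditioning per-pixel predictions on a scene-level representation can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function name, tensor shapes, and the simple tile-and-concatenate fusion are all assumptions.

```python
import numpy as np

def fuse_global_context(local_feats, global_embed):
    """Tile a scene-level context vector over the spatial grid and
    concatenate it with per-pixel local features, so the classifier
    can suppress labels incompatible with the global scene context.

    local_feats:  (H, W, C_local) per-pixel features from a parsing net
    global_embed: (C_global,) embedding from a scene context network
    returns:      (H, W, C_local + C_global) fused features
    """
    h, w, _ = local_feats.shape
    # Broadcast the global vector to every spatial location.
    tiled = np.broadcast_to(global_embed, (h, w, global_embed.shape[0]))
    return np.concatenate([local_feats, tiled], axis=-1)

# Toy example with assumed sizes.
local = np.random.rand(4, 4, 8)   # local feature map
ctx = np.random.rand(16)          # global context embedding
fused = fuse_global_context(local, ctx)
print(fused.shape)  # (4, 4, 24)
```

Every pixel receives the same global channels, so a downstream per-pixel classifier sees both local evidence and the scene-level context when assigning labels.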
Collaborators: Wei-Chih Hung, Yi-Hsuan Tsai, Xiaohui Shen, Zhe Lin, Kalyan Sunkavalli, Xin Lu, Ming-Hsuan Yang