Press "Enter" to skip to content

Start Searching the Answers

The Internet has many places to ask questions about anything imaginable and find past answers on almost everything.

What is the latest version of CCNet for PyTorch?

2019/08: A new version of CCNet is released on the branch Pytorch-1.1, which supports PyTorch 1.0 or later as well as distributed multiprocessing training and testing. The current code is an implementation of the experiments on Cityscapes from the ICCV version of CCNet. We implement our method based on the open-source pytorch segmentation toolbox.
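
For context, distributed multiprocessing training in stock PyTorch of this era is built around torch.distributed and DistributedDataParallel. The sketch below shows the general pattern only, not the repository's actual training script; the model, dataset, batch size, and learning rate are placeholder assumptions:

    # Minimal sketch of distributed data-parallel training in PyTorch.
    # The segmentation model and dataset are placeholders, not CCNet's code.
    import torch
    import torch.distributed as dist
    import torch.nn.functional as F
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, DistributedSampler

    def train(local_rank, model, dataset):
        dist.init_process_group(backend="nccl")    # rendezvous set up by the launcher
        torch.cuda.set_device(local_rank)          # one process per GPU
        model = DDP(model.cuda(), device_ids=[local_rank])
        sampler = DistributedSampler(dataset)      # shards the data across processes
        loader = DataLoader(dataset, batch_size=2, sampler=sampler)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
        for images, labels in loader:
            logits = model(images.cuda())
            loss = F.cross_entropy(logits, labels.cuda(),
                                   ignore_index=255)  # Cityscapes void label
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()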

Which training set is CCNet trained on?

We train all the models on the fine training set and use a single scale for testing. The trained model with R=2 (79.74 mIOU) can also achieve about 79.01 mIOU on the Cityscapes test set with single-scale testing (to save time, we use the whole image as input).
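
Single-scale, whole-image testing means one forward pass on the full image, with no sliding windows or multi-scale averaging. A minimal sketch, where the model is a placeholder and is assumed to return full-resolution logits:

    # Minimal sketch of single-scale, whole-image inference (not repo code).
    import torch

    @torch.no_grad()
    def predict_single_scale(model, image):
        # image: (1, 3, H, W) tensor holding the whole image, no crops
        model.eval()
        logits = model(image)          # assumed shape (1, num_classes, H, W)
        return logits.argmax(dim=1)    # (1, H, W) per-pixel class prediction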

Is CCNet released under the MIT License?

CCNet is released under the MIT License (refer to the LICENSE file for details). If you find CCNet useful in your research, please consider citing the paper. To install PyTorch==0.4.0 or 0.4.1, please refer to https://github.com/pytorch/pytorch#installation.

Who are the authors of the CCNet project?

By Zilong Huang, Xinggang Wang, Yunchao Wei, Lichao Huang, Chang Huang, Humphrey Shi, Wenyu Liu, and Thomas S. Huang. 2021/02: A pure-Python implementation of CCNet is released on the branch pure-python.

How does criss-cross attention work in CCNet?

The proposed recurrent criss-cross attention takes feature maps H as input and outputs feature maps H'', which gather rich and dense contextual information from all pixels. The recurrent criss-cross attention module can be unrolled into R=2 loops, in which all criss-cross attention modules share parameters.
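
In code terms, the recurrence amounts to applying one attention module to its own output R times. Below is a minimal pure-PyTorch sketch of both the criss-cross attention step and the shared-weight recurrence; it follows the design described above, but the class names and layer sizes are illustrative, not the repository's actual implementation (which uses custom CUDA kernels, or the pure-python branch):

    # Minimal sketch of criss-cross attention and its R=2 recurrence.
    # Illustrative only; details differ from the repository's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrissCrossAttention(nn.Module):
        def __init__(self, in_dim):
            super().__init__()
            self.query = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
            self.key = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
            self.value = nn.Conv2d(in_dim, in_dim, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, _, h, w = x.shape
            q, k, v = self.query(x), self.key(x), self.value(x)
            # Affinity of each pixel (i, j) with its row (i, *) and column (*, j).
            e_row = torch.einsum('bcij,bcik->bijk', q, k)   # (b, h, w, w)
            e_col = torch.einsum('bcij,bckj->bijk', q, k)   # (b, h, w, h)
            # Softmax jointly over the w + h criss-cross positions.
            # (The official code additionally masks the duplicated self-position.)
            attn = F.softmax(torch.cat([e_row, e_col], dim=-1), dim=-1)
            a_row, a_col = attn[..., :w], attn[..., w:]
            out = (torch.einsum('bcik,bijk->bcij', v, a_row) +
                   torch.einsum('bckj,bijk->bcij', v, a_col))
            return self.gamma * out + x                      # H -> H'

    class RCCAModule(nn.Module):
        def __init__(self, in_dim, recurrence=2):
            super().__init__()
            self.recurrence = recurrence
            self.cca = CrissCrossAttention(in_dim)  # single shared module

        def forward(self, h):
            for _ in range(self.recurrence):        # H -> H' -> H''
                h = self.cca(h)
            return h

Because the same CrissCrossAttention instance is called in every loop, unrolling to R=2 adds no parameters over a single pass.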

How does CCNet capture long-range dependencies from all pixels?

Concretely, for each pixel, CCNet harvests the contextual information of all the pixels on its criss-cross path (its row and column) through a novel criss-cross attention module. By applying a further recurrent operation, each pixel can finally capture long-range dependencies from all pixels.
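
To see why two loops suffice: information at pixel (i', j') can reach (i, j) through the intermediate pixel (i, j') or (i', j), and each hop stays on a criss-cross path. The toy example below (not CCNet code) replaces learned attention with uniform row/column averaging to make this coverage argument visible:

    # Toy demonstration: one criss-cross pass spreads a one-hot signal along
    # its row and column only; a second pass reaches every pixel.
    import torch

    def uniform_criss_cross(x):
        # Average each pixel with its row and column (a stand-in for attention).
        h, w = x.shape
        return (x.mean(dim=1, keepdim=True).expand(h, w) +
                x.mean(dim=0, keepdim=True).expand(h, w)) / 2

    x = torch.zeros(5, 5)
    x[0, 0] = 1.0                            # signal at a single pixel
    once = uniform_criss_cross(x)
    twice = uniform_criss_cross(once)
    print((once > 0).sum().item())           # 9: row 0 and column 0 only
    print((twice > 0).sum().item())          # 25: all pixels reached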