See Scene Text Detection for leaderboards in this task.
PaddlePaddle/PaddleOCR • • 21 Jul 2015
In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition.
clovaai/deep-text-recognition-benchmark • • ICCV 2019
Many new proposals for scene text recognition (STR) models have been introduced in recent years.
jiangxiluning/FOTS.PyTorch • • CVPR 2018
Incidental scene text spotting is considered one of the most difficult and valuable challenges in the document analysis community.
PaddlePaddle/PaddleOCR • • 2 Nov 2018
Recognizing irregular text in natural scene images is challenging due to the large variance in text appearance, such as curvature, orientation and distortion.
wenwenyu/MASTER-pytorch • • 7 Oct 2019
Attention-based scene text recognizers have gained huge success; they leverage a compact intermediate representation to learn 1D or 2D attention with an RNN-based encoder-decoder architecture.
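To make that encoder-decoder recipe concrete, here is a minimal PyTorch sketch of a 1D attention decoder over a CNN feature sequence. It is an illustration of the general idea, not the MASTER implementation; the layer sizes, class count, and greedy decoding loop are all assumptions.

```python
# Minimal sketch of a 1D attention decoder for text recognition, assuming a CNN
# backbone has already produced a feature sequence of shape (B, T, C).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnDecoder1D(nn.Module):
    def __init__(self, feat_dim=256, hidden=256, num_classes=37):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim + num_classes, hidden)
        self.score = nn.Linear(hidden + feat_dim, 1)   # additive-style attention scoring
        self.fc = nn.Linear(hidden, num_classes)
        self.num_classes = num_classes

    def forward(self, feats, max_len=25):
        # feats: (B, T, C) encoder outputs; decode greedily for max_len steps.
        B, T, C = feats.shape
        h = feats.new_zeros(B, self.rnn.hidden_size)
        y = feats.new_zeros(B, self.num_classes)       # previous prediction (one-hot)
        outputs = []
        for _ in range(max_len):
            # score every encoder position against the current hidden state
            e = self.score(torch.cat([h.unsqueeze(1).expand(-1, T, -1), feats], dim=-1))
            alpha = F.softmax(e, dim=1)                # (B, T, 1) attention weights
            glimpse = (alpha * feats).sum(dim=1)       # (B, C) attended context vector
            h = self.rnn(torch.cat([glimpse, y], dim=-1), h)
            logits = self.fc(h)
            y = F.one_hot(logits.argmax(-1), self.num_classes).float()
            outputs.append(logits)
        return torch.stack(outputs, dim=1)             # (B, max_len, num_classes)
```

Feeding the previous prediction back into the GRU cell is what makes this an attention-based sequence recognizer rather than per-frame classification with CTC; a real training setup would typically use teacher forcing instead of the greedy feedback shown here.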
Canjie-Luo/MORAN_v2 • • 10 Jan 2019
The rectification network decreases the difficulty of recognition and enables the attention-based sequence recognition network to read irregular text more easily.
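A minimal sketch of that rectification idea, assuming a PyTorch setup in which a small network predicts sampling-grid offsets and the image is resampled before recognition; this is an illustration of the concept, not the MORAN_v2 rectification module, and every layer and constant below is an assumption.

```python
# Sketch: predict coarse offsets, add them to an identity sampling grid, and
# resample the distorted text image so the recognizer sees a more regular layout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RectifierSketch(nn.Module):
    def __init__(self, out_h=32, out_w=100):
        super().__init__()
        self.out_h, self.out_w = out_h, out_w
        self.offset_net = nn.Sequential(              # coarse offset predictor (assumed)
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((out_h // 4, out_w // 4)),
            nn.Conv2d(32, 2, 1),                      # 2 channels: (dx, dy) offsets
        )

    def forward(self, img):
        # img: (B, 3, H, W) distorted text image
        B = img.size(0)
        offsets = torch.tanh(self.offset_net(img)) * 0.25        # keep offsets small
        offsets = F.interpolate(offsets, size=(self.out_h, self.out_w),
                                mode="bilinear", align_corners=True)
        # identity sampling grid in [-1, 1]
        ys = torch.linspace(-1, 1, self.out_h, device=img.device)
        xs = torch.linspace(-1, 1, self.out_w, device=img.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack([gx, gy], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
        grid = grid + offsets.permute(0, 2, 3, 1)                 # shift sampling locations
        return F.grid_sample(img, grid, align_corners=True)       # rectified image
```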
microsoft/unilm • • 21 Sep 2021
Text recognition is a long-standing research problem for document digitization.
PaddlePaddle/PaddleOCR • • CVPR 2016
We show that the model is able to recognize several types of irregular text, including perspective text and curved text.
Canjie-Luo/Scene-Text-Image-Transformer • • 21 Dec 2019
To remedy this issue, we propose a decoupled attention network (DAN), which decouples the alignment operation from using historical decoding results.
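The decoupling can be sketched roughly as follows: attention maps come from a convolutional alignment module driven only by visual features, so alignment no longer depends on previous decoding results. This is a simplified, assumption-laden illustration rather than the DAN reference implementation.

```python
# Sketch of decoupled attention: per-step attention maps are predicted from the
# visual feature map alone, then a GRU classifies each attended glimpse.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledAttentionSketch(nn.Module):
    def __init__(self, feat_dim=256, hidden=256, num_classes=37, max_len=25):
        super().__init__()
        self.align = nn.Conv2d(feat_dim, max_len, kernel_size=3, padding=1)  # one map per step
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, feats):
        # feats: (B, C, H, W) visual features from a CNN encoder
        B, C, H, W = feats.shape
        maps = self.align(feats)                                       # (B, max_len, H, W)
        maps = F.softmax(maps.flatten(2), dim=-1)                      # normalise over positions
        glimpses = torch.bmm(maps, feats.flatten(2).transpose(1, 2))   # (B, max_len, C)
        out, _ = self.rnn(glimpses)                   # decode without feeding back predictions
        return self.fc(out)                           # (B, max_len, num_classes)
```

Because the alignment maps are fixed before decoding starts, a misrecognized character cannot corrupt the alignment of later characters, which is the error-accumulation issue the decoupling targets.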
open-mmlab/mmocr • • ECCV 2020
Theoretically, our proposed method, dubbed RobustScanner, decodes individual characters with a dynamic ratio between context and positional clues, relying more on positional clues when the decoded sequence offers scarce context, and is thus robust and practical.
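One way to picture the dynamic ratio between context and positional clues is a learned gate that mixes a context-driven feature with a position-driven one. The sketch below is an illustration under assumed shapes and names, not the mmocr implementation.

```python
# Sketch: a sigmoid gate blends context-branch and position-branch features
# per element, so characters with little useful context lean on position.
import torch
import torch.nn as nn

class DynamicFusionSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, context_feat, position_feat):
        # context_feat, position_feat: (B, L, dim) per-character features from a
        # context branch and a position-aware branch, respectively.
        g = self.gate(torch.cat([context_feat, position_feat], dim=-1))
        return g * context_feat + (1.0 - g) * position_feat   # dynamic per-element ratio
```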