DocVQA Challenge 2020

Deadline: Finished challenge
Prologue

The first edition of the DocVQA challenge was organized in the context of the CVPR 2020 Workshop on Text and Documents in the Deep Learning Era.
The challenge is hosted on the Robust Reading Competition (RRC) platform and comprises two tasks.

Task 1 - VQA on Document Images


A typical VQA task, where natural language questions are defined over single document images, and an answer needs to be generated by interpreting the image.

Evaluation Metric

We will be using Average Normalized Levenshtein Similarity (ANLS) as the evaluation metric. For more details on the metric, please see the metric used for Task 3 of the Scene Text VQA challenge.


  • Answers are not case sensitive
  • Answers are space sensitive
  • Answers, and the tokens comprising them, are not limited to a fixed-size dictionary; an answer can be any word/token present in the document.
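The ANLS scoring described above can be sketched as follows. This is an illustrative implementation based on the published metric (per-question score is 1 minus the normalized edit distance to the closest ground-truth answer, zeroed when the distance exceeds a 0.5 threshold, then averaged over questions); it is not the official evaluation code, and the function names are our own.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via two-row dynamic programming."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


def anls(predictions, ground_truths, threshold=0.5):
    """predictions: one answer string per question.
    ground_truths: one list of accepted answer strings per question.
    Comparison is case-insensitive but space-sensitive, as stated above."""
    total = 0.0
    for pred, gts in zip(predictions, ground_truths):
        best = 0.0
        for gt in gts:
            p, g = pred.lower(), gt.lower()  # answers are not case sensitive
            longest = max(len(p), len(g))
            nl = levenshtein(p, g) / longest if longest else 0.0
            score = 1.0 - nl if nl < threshold else 0.0
            best = max(best, score)
        total += best
    return total / len(predictions)
```

An exact match scores 1.0; a prediction within the threshold gets partial credit, e.g. "Pariss" against "Paris" scores 1 - 1/6.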
Task 2 - VQA on Document Images Collection


A retrieval-style task where given a question, the aim is to identify and retrieve all the documents in a large document collection that are relevant to answering this question.

Evaluation Metric

Methods are ranked by the correctness of the evidence they provide, evaluated using Mean Average Precision (MAP). If a submission also contains answers to the questions, those are evaluated as well, and precision and recall metrics are reported. However, these metrics are not used to rank the methods in the competition.
More details on the tasks can be found under the Tasks tab of the competition page on the RRC portal.
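The MAP ranking metric can be sketched as follows: for each question, average precision is the mean of the precision values at each rank where a relevant document is retrieved, and MAP is the mean of these values over all questions. This is an illustrative sketch, not the official evaluation code.

```python
def average_precision(ranked, relevant):
    """ranked: document ids returned for one question, in rank order.
    relevant: set of document ids judged relevant for that question."""
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked, 1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this cut-off
    return precision_sum / len(relevant) if relevant else 0.0


def mean_average_precision(all_ranked, all_relevant):
    """Mean of per-question average precision over all questions."""
    aps = [average_precision(r, rel)
           for r, rel in zip(all_ranked, all_relevant)]
    return sum(aps) / len(aps)
```

For example, retrieving ["d1", "d2", "d3"] when {"d1", "d3"} are relevant gives (1/1 + 2/3) / 2 = 5/6.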

Note: Although the challenge was organized as part of the workshop at CVPR 2020, it remains open to submissions after the challenge period. In the leaderboard on the RRC platform, entries submitted during the challenge period are highlighted in a different color.

Winners of the 2020 Challenge

Below are the winners of the 2020 edition of the DocVQA challenge. The first prize winners under each task were awarded a cash prize of USD 1000, sponsored by Amazon AWS.

Task 1 Winners


  • Winner - PingAn-OneConnect-Gammalab-DQA team of OneConnect GammaLab
    • Team - Han Qiu, Guoqiang Xu, Chenjie Cao, Chao Gao, Dexun Wang, Fengxin Yang, Xiao Xie, Yu Qiu and Yu Qiu
  • Runner up - Structural LM team from DAMO NLP
    • Team - Chenliang Li, Bin Bi, Ming Yan, Wei Wang and Songfang Huang
Task 2 Winners


  • Winner - PingAn-OneConnect-Gammalab-DQA team of OneConnect GammaLab
    • Team - Han Qiu, Guoqiang Xu, Chenjie Cao, Chao Gao, Dexun Wang, Fengxin Yang, Xiao Xie, Yu Qiu and Yu Qiu
  • Runner up - iFLYTEK-DOCR of iFlytek
    • Team - Chenyu Liu, Fengren Wang, Jiajia Wu, Jinshui Hu, Bing Yin, Cong Liu

DocVQA Challenge 2021

The 2021 challenge will take place in the context of ICDAR 2021. This edition of the challenge features two tasks: Task 2 from the 2020 edition continues, and a new Task 3 is introduced. Task 3 is a challenge on VQA on infographics.

Infographic VQA

The objective of the Infographic VQA task is to answer questions asked about an infographic image. An infographic (or information graphic) is a visual representation of information or data in the form of charts, diagrams, etc., designed to be easy for humans to understand. Visual information is therefore much more relevant here than in the previous tasks.

Unlike DocVQA Task 1, which is a pure "extractive QA" task, Infographic VQA allows answers that are not explicitly extracted from the given image. The answer to a question in this task can be any of the following types:

  • Answer is a contiguous piece of text from the image (a single span of text)
  • Answer is a list of "items", where each item is a piece of text from the image (multiple spans)
  • Answer is a contiguous piece of text from the question itself (a span from the question)
  • Answer is a number (for example "2", "2.5", "2%", "2/3", etc.). For example, some questions ask for a count of something, or the answer is the sum of two values given in the image.


Challenge Dates


  • [Dec 2020] Released train split for the new infographic VQA
  • [Jan 10 2021] Released validation and test splits for Infographic VQA
  • [March 31 2021] Submission deadline
  • [5-10 Sept, 2021] Results Presentation at ICDAR 2021