DocVQA
Document Visual Question Answering
Overview
Document Visual Question Answering (DocVQA) seeks to inspire a "purpose-driven" point of view in Document Analysis and Recognition research, where document content is extracted and used to respond to high-level tasks defined by the human consumers of this information. To this end, we organize a series of challenges and release datasets that enable machines to "understand" document images and thereby answer questions asked about them.
News
[Aug 2021] Invited speakers for the DocVQA workshop are finalized. Check the workshop webpage
[April 2021] InfographicVQA arXiv preprint available now
[April 2021] 2021 Edition of the DocVQA challenge concludes and leaderboards are public now
[November 2020] 2021 Edition of the DocVQA challenge begins
[June 2020] Competition summary and overview of the DocVQA 2020 challenge presented, and prizes announced, at the CVPR 2020 workshop
[May 2020] DocVQA 2020 Challenge ends and results are published
Acknowledgement
DocVQA is supported by MeitY, Government of India, the CERCA programme, and an AWS Machine Learning Research Award (2019) from Amazon.
We would like to thank the Kerala Women in Nano Startups (KWINS) team of the Kerala Startup Mission for connecting us with an amazing group of women freelancers who helped annotate the DocVQA and InfographicVQA datasets.
People
Minesh Mathew
IIIT Hyderabad
Rubén Pérez Tito
CVC, University of Barcelona
Dimosthenis Karatzas
CVC, University of Barcelona
R. Manmatha
Amazon
C. V. Jawahar
IIIT Hyderabad
Contact
Please feel free to contact us with any queries, suggestions, or feedback.
Email ID: docvqa@cvc.uab.es