diff --git a/README.md b/README.md
index b29b598..97c70dd 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
 ![CI](https://github.com/fcakyon/midv500/workflows/CI/badge.svg)
 
 ## Download and convert MIDV-500 datasets into COCO instance segmentation format
-Automatically download/unzip [MIDV-500](https://arxiv.org/abs/1807.05786) and [MIDV-2019](https://arxiv.org/abs/1910.04009) datasets and convert the annotations into COCO instance segmentation format.
+Automatically download/unzip [MIDV-500](https://doi.org/10.18287/2412-6179-2019-43-5-818-824) and [MIDV-2019](https://doi.org/10.1117/12.2558438) datasets and convert the annotations into COCO instance segmentation format.
 
 Then, dataset can be directly used in the training of Yolact, Detectron type of models.
 
@@ -14,9 +14,9 @@
 MIDV-500 consists of 500 video clips for 50 different identity document types in
 
 You can find more detail on papers:
 
-[MIDV-500: A Dataset for Identity Documents Analysis and Recognition on Mobile Devices in Video Stream](https://arxiv.org/abs/1807.05786)
+[MIDV-500: A Dataset for Identity Documents Analysis and Recognition on Mobile Devices in Video Stream](https://doi.org/10.18287/2412-6179-2019-43-5-818-824)
 
-[MIDV-2019: Challenges of the modern mobile-based document OCR](https://arxiv.org/abs/1910.04009)
+[MIDV-2019: Challenges of the modern mobile-based document OCR](https://doi.org/10.1117/12.2558438)
 
 ## Getting started
diff --git a/midv500/__init__.py b/midv500/__init__.py
index 48badcb..74fb61a 100644
--- a/midv500/__init__.py
+++ b/midv500/__init__.py
@@ -1,6 +1,6 @@
 from __future__ import absolute_import
 
-__version__ = "0.2.0"
+__version__ = "0.2.1"
 
 from midv500.convert_dataset import convert as convert_to_coco
 
diff --git a/midv500/download_dataset.py b/midv500/download_dataset.py
index 779570b..08432f0 100644
--- a/midv500/download_dataset.py
+++ b/midv500/download_dataset.py
@@ -114,8 +114,8 @@
 def download_dataset(download_dir: str, dataset_name: str = "midv500"):
     """
     This script downloads the MIDV-500 dataset with extra files and unzips the folders.
     dataset_name: str
-        "midv500": https://arxiv.org/abs/1807.05786
-        "midv2019": https://arxiv.org/abs/1910.04009
+        "midv500": https://doi.org/10.18287/2412-6179-2019-43-5-818-824
+        "midv2019": https://doi.org/10.1117/12.2558438
         "all": midv500 + midv2019
     """
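For context, the docstring touched by the last hunk documents three accepted values for `dataset_name`. A minimal sketch of how that option could be resolved into a list of datasets to fetch — `resolve_datasets` is a hypothetical helper for illustration, not a function in the package, and the real `download_dataset.py` may implement this differently:

```python
def resolve_datasets(dataset_name: str) -> list:
    """Map the documented dataset_name options to the datasets to download.

    Accepted values, per the download_dataset docstring:
      "midv500", "midv2019", or "all" (midv500 + midv2019).
    """
    valid = {
        "midv500": ["midv500"],
        "midv2019": ["midv2019"],
        "all": ["midv500", "midv2019"],  # "all": midv500 + midv2019
    }
    if dataset_name not in valid:
        raise ValueError(f"dataset_name must be one of {sorted(valid)}")
    return valid[dataset_name]
```

Validating the option up front like this gives a clear error before any network traffic starts.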