
Image List Page

Runtian edited this page Jan 6, 2021 · 2 revisions

Quick Annotator (QA) does not currently support reading whole slide images (WSIs) directly, but it provides a script that divides WSIs into smaller image tiles:

Open the cli folder and use extract_tiles_from_wsi_openslide.py to divide a WSI into smaller tiles. Here is its basic usage:

E:\Study\Research\QA\GithubQA\QuickAnnotator\cli>python extract_tiles_from_wsi_openslide.py --help
usage: extract_tiles_from_wsi_openslide.py [-h] [-p PATCHSIZE]
                                           [-l OPENSLIDELEVEL] [-o OUTDIR]
                                           [-b]
                                           [input_pattern [input_pattern ...]]

Convert image and mask into non-overlapping patches

positional arguments:
  input_pattern         Input filename pattern (try: *.png), or txt file
                        containing list of files

optional arguments:
  -h, --help            show this help message and exit
  -p PATCHSIZE, --patchsize PATCHSIZE
                        Patchsize, default 256
  -l OPENSLIDELEVEL, --openslidelevel OPENSLIDELEVEL
                        openslide level to use
  -o OUTDIR, --outdir OUTDIR
                        Target output directory
  -b, --bgremoved       Don't save patches which are considered background,
                        useful for TMAs

After dividing WSIs into smaller image tiles, the user uploads them to QA via 1. Drop files here or click to upload. On the Image List Page, the user can view thumbnails of the images in the project together with their meta information.

Notification Window:

The red rectangle in the image below highlights the Notification Window, which displays notification messages and status updates. It appears on every page in QA except the Project Page.

[Screenshot: Image List Page UI]

The user works through the buttons on the Image List Page in numerical order.

Make Patches

Image tiles are subdivided into 256 x 256 patches, a size amenable to deep learning. The user can choose whether to remove white background when dividing these tiles. This option is useful when images contain large areas of white background (e.g., around TMA spots).
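The patching step above can be sketched as follows. This is an illustrative NumPy sketch, not QA's actual code: the function name `make_patches` and the mean-intensity background heuristic are assumptions standing in for the -b/--bgremoved behavior.

```python
import numpy as np

def make_patches(tile, patch_size=256, remove_background=False, bg_threshold=240):
    """Split an H x W x C image tile into non-overlapping patches.

    Edge regions that do not fill a whole patch are dropped. If
    remove_background is set, patches whose mean intensity exceeds
    bg_threshold (i.e., mostly white) are discarded.
    """
    h, w = tile.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = tile[y:y + patch_size, x:x + patch_size]
            if remove_background and patch.mean() > bg_threshold:
                continue  # skip mostly-white patches (e.g. empty TMA area)
            patches.append(patch)
    return patches

# A 512 x 512 tile yields four 256 x 256 patches.
tile = np.random.randint(0, 255, size=(512, 512, 3), dtype=np.uint8)
print(len(make_patches(tile)))  # -> 4
```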

(Re)Train Model 0

Model 0 is also known as the autoencoder. A U-Net with a block depth of 5 and 113,306 parameters is trained on these patches in an auto-encoding fashion: model weights are optimized to reproduce the input as the output with high fidelity. This yields an initialized base model that is subsequently used in downstream supervised training.
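QA's Model 0 is a U-Net; as a minimal stand-in, the toy single-hidden-layer autoencoder below demonstrates the same idea in plain NumPy: optimize the weights so the output reproduces the input. The dimensions, learning rate, and synthetic data are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((64, 32))            # 64 toy "patches", each flattened to 32 values

d, k = X.shape[1], 8                # input dim, bottleneck dim
W_enc = rng.normal(0, 0.1, (d, k))  # encoder weights
W_dec = rng.normal(0, 0.1, (k, d))  # decoder weights
lr = 0.05

def loss(X):
    # mean squared reconstruction error
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

loss_before = loss(X)
for _ in range(500):                # plain gradient descent on the MSE
    Z = X @ W_enc                   # encode
    R = Z @ W_dec                   # decode (reconstruction)
    G = 2 * (R - X) / len(X)        # gradient of the loss w.r.t. R
    grad_dec = Z.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
print(loss_before, loss(X))         # reconstruction error drops
```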

Embed Patches

Embed Patches generates a UMAP embedding plot using the current latest model. The user can open it via View Embedding on the embedding page, which helps select patches for annotation that are well dispersed in the model's feature space. The embedding page shows a 2D representation of all patches in the system, where patches the model perceives as similar are plotted close together.
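QA uses UMAP for this plot; the sketch below substitutes PCA (via NumPy's SVD) to illustrate the same principle with no extra dependencies: each patch's feature vector is projected to 2D so that similar patches land near each other. The two synthetic feature groups are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic groups of "patch features" (e.g. latent codes from Model 0).
tissue = rng.normal(0.0, 0.05, size=(50, 16))
background = rng.normal(1.0, 0.05, size=(50, 16))
features = np.vstack([tissue, background])

def embed_2d(X):
    """Project the rows of X onto their top two principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T            # (n_samples, 2) coordinates for plotting

coords = embed_2d(features)
# Patches from the same group cluster together in the 2D plot.
tissue_xy, bg_xy = coords[:50], coords[50:]
print(np.linalg.norm(tissue_xy.mean(0) - bg_xy.mean(0)))  # wide gap between groups
```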

Quick Annotator Wiki

QA's wiki is the complete documentation, explaining how to use the tool and the reasoning behind it. Here is the table of contents for QA's wiki:

Home:

  1. Quick Annotator Pages
  2. User Guide
  3. Frequently Asked Questions