Releases: open-mmlab/mmdetection
MMDetection v3.3.0 releases
MM Grounding DINO
An Open and Comprehensive Pipeline for Unified Object Grounding and Detection
Grounding-DINO is a state-of-the-art open-set detection model that tackles multiple vision tasks, including Open-Vocabulary Detection (OVD), Phrase Grounding (PG), and Referring Expression Comprehension (REC). Its effectiveness has led to its widespread adoption as a mainstream architecture for various downstream applications. However, despite its significance, the original Grounding-DINO model lacks comprehensive public technical details because its training code has not been released. To bridge this gap, we present MM-Grounding-DINO, an open-source, comprehensive, and user-friendly baseline built with the MMDetection toolbox. It adopts abundant vision datasets for pre-training and various detection and grounding datasets for fine-tuning. We give a comprehensive analysis of each reported result and detailed settings for reproduction. Extensive experiments on the benchmarks mentioned above demonstrate that our MM-Grounding-DINO-Tiny outperforms the Grounding-DINO-Tiny baseline. We release all our models to the research community.
Details: https://github.com/open-mmlab/mmdetection/tree/main/configs/mm_grounding_dino
MMDetection v3.2.0 Release
Highlight
v3.2.0 was released on October 12, 2023:
1. Detection Transformer SOTA Model Collection
(1) Supported four updated and stronger SOTA Transformer models: DDQ, CO-DETR, AlignDETR, and H-DINO.
(2) Based on CO-DETR, MMDet released a model reaching 64.1 mAP on COCO.
(3) Algorithms such as DINO support AMP, gradient checkpointing, and FrozenBN, which can effectively reduce memory usage; see the config sketch below.
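As a rough illustration of how these switches are usually expressed (not a fragment copied from the release), the sketch below overrides a DINO base config; `with_cp`, `norm_eval`, and `frozen_stages` are the standard MMDetection backbone options, but verify them against the config you actually inherit.

```python
# Illustrative config fragment: enable AMP, gradient checkpointing, and
# FrozenBN-style freezing on top of an existing DINO config. The base config
# path and hyper-parameters are assumptions, not shipped values.
_base_ = ['./dino-4scale_r50_8xb2-12e_coco.py']

# AMP: swap the optimizer wrapper for MMEngine's AmpOptimWrapper.
optim_wrapper = dict(
    type='AmpOptimWrapper',
    loss_scale='dynamic',
    optimizer=dict(type='AdamW', lr=1e-4, weight_decay=1e-4))

model = dict(
    backbone=dict(
        with_cp=True,       # gradient checkpointing: less memory, more compute
        norm_eval=True,     # keep BN layers in eval mode (frozen statistics)
        frozen_stages=1))   # freeze the stem and first stage
```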
2. Comprehensive Performance Comparison between CNN and Transformer
RF100 is a collection of 100 real-world datasets spanning 7 domains. It can be used to assess the performance differences between Transformer models such as DINO and CNN-based algorithms across different scenarios and data volumes. Users can utilize this benchmark to quickly evaluate the robustness of their own algorithms in various scenarios.
3. Support for GLIP and Grounding DINO fine-tuning; MMDet is the only algorithm library that supports Grounding DINO fine-tuning
The fine-tuned Grounding DINO is one point higher than the official version, and GLIP likewise outperforms the official results.
We also provide a detailed walkthrough for training and evaluating Grounding DINO on custom datasets (see the config sketch after the table below). Everyone is welcome to give it a try.
Model | Backbone | Style | COCO mAP | Official COCO mAP
---|---|---|---|---
Grounding DINO-T | Swin-T | Zero-shot | 48.5 | 48.4
Grounding DINO-T | Swin-T | Fine-tune | 58.1 (+0.9) | 57.2
Grounding DINO-B | Swin-B | Zero-shot | 56.9 | 56.7
Grounding DINO-B | Swin-B | Fine-tune | 59.7 |
Grounding DINO-R50 | R50 | Scratch | 48.9 (+0.8) | 48.1
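As a starting point for the custom-dataset walkthrough mentioned above, here is a hedged sketch of a fine-tuning config. The base config name follows the repository's naming convention; `data_root`, the category tuple, and the annotation paths are placeholders to replace with your own.

```python
# Hypothetical Grounding DINO fine-tuning config for a custom dataset in COCO
# format. Everything below the _base_ line is a placeholder, not a shipped value.
_base_ = ['./grounding_dino_swin-t_finetune_16xb2_1x_coco.py']

data_root = 'data/my_dataset/'        # placeholder dataset root
class_name = ('my_object',)           # placeholder category names
metainfo = dict(classes=class_name)

train_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/train.json',
        data_prefix=dict(img='images/')))

val_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/val.json',
        data_prefix=dict(img='images/')))

val_evaluator = dict(ann_file=data_root + 'annotations/val.json')
test_dataloader = val_dataloader
test_evaluator = val_evaluator
```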
4. Support for the open-vocabulary detection algorithm Detic and multi-dataset joint training.
5. Training detection models with FSDP and DeepSpeed (benchmarked below; a strategy sketch follows the table).
ID | AMP | GC of Backbone | GC of Encoder | FSDP | Peak Mem (GB) | Iter Time (s)
---|---|---|---|---|---|---
1 | | | | | 49 (A100) | 0.9
2 | √ | | | | 39 (A100) | 1.2
3 | | √ | | | 33 (A100) | 1.1
4 | √ | √ | | | 25 (A100) | 1.3
5 | | √ | √ | | 18 | 2.2
6 | √ | √ | √ | | 13 | 1.6
7 | | √ | √ | √ | 14 | 2.9
8 | √ | √ | √ | √ | 8.5 | 2.4
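The numbers above come from the release's own training entry points. As a rough sketch of the underlying mechanism (an assumption about usage, not the benchmark script), MMEngine's experimental FlexibleRunner exposes FSDP and DeepSpeed as pluggable strategies:

```python
# Hedged sketch (assumes MMEngine >= 0.8): FSDP/DeepSpeed enter through the
# `strategy` argument of the experimental FlexibleRunner. Items marked
# "placeholder" stand in for complete detector/dataset configs and will not
# train as-is.
from mmengine.runner import FlexibleRunner

strategy = dict(type='FSDPStrategy')      # or dict(type='DeepSpeedStrategy')

runner = FlexibleRunner(
    model=dict(type='DINO'),              # placeholder: a complete model config
    work_dir='./work_dirs/fsdp_demo',
    strategy=strategy,
    train_dataloader=dict(                # placeholder: a complete loader config
        batch_size=2,
        num_workers=2,
        sampler=dict(type='DefaultSampler', shuffle=True),
        dataset=dict(type='CocoDataset', data_root='data/coco/')),
    optim_wrapper=dict(optimizer=dict(type='AdamW', lr=1e-4)),
    train_cfg=dict(by_epoch=True, max_epochs=12))
runner.train()
```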
6. Support for the V3Det dataset, a large-scale detection dataset with over 13,000 categories.
MMDetection v3.1.0 Release
Highlights
- Supports tracking algorithms, including the multi-object tracking (MOT) algorithms SORT, DeepSORT, StrongSORT, OCSORT, ByteTrack, and QDTrack, and the video instance segmentation (VIS) algorithms MaskTrackRCNN and Mask2Former-VIS.
- Supports ViTDet
- Supports inference and evaluation of multimodal algorithms GLIP and XDecoder, and also supports datasets such as COCO semantic segmentation, COCO Caption, ADE20k general segmentation, and RefCOCO. GLIP fine-tuning will be supported in the future.
- Provides a Gradio demo for the image-based tasks of MMDetection, making it easy for users to try them out.
Exciting Features
GLIP inference and evaluation
As multimodal vision algorithms continue to evolve, MMDetection has added support for such algorithms. This section demonstrates how to use the demo and evaluation scripts of multimodal algorithms, using the GLIP algorithm and model as the example. Moreover, MMDetection integrates a gradio_demo project, which allows developers to quickly try all image-input tasks of MMDetection on their local devices. Check the document for more details.
Preparation
Please first make sure that you have the correct dependencies installed:
```shell
# if installed from source
pip install -r requirements/multimodal.txt

# if installed as a wheel
mim install mmdet[multimodal]
```
MMDetection has already implemented the GLIP algorithm and provides the pre-trained weights, which you can download directly:
```shell
cd mmdetection
wget https://download.openmmlab.com/mmdetection/v3.0/glip/glip_tiny_a_mmdet-b3654169.pth
```
Inference
Once the model is successfully downloaded, you can use the `demo/image_demo.py` script to run inference:
```shell
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts bench
```
The demo result will be similar to this:
If you would like to detect multiple targets, declare them in the format `xx . xx .` after `--texts`:
```shell
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts 'bench . car .'
```
The result will look like this:
You can also use a sentence as the input prompt for the `--texts` field, for example:
```shell
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts 'There are a lot of cars here.'
```
The result will be similar to this:
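Since `demo/image_demo.py` is a thin wrapper around `DetInferencer`, the same GLIP inference can also be scripted in Python. A minimal sketch, assuming the downloaded checkpoint carries its config in its meta (which is what the CLI calls above rely on):

```python
# Python equivalent of the CLI demo (a sketch, not from the release notes):
# demo/image_demo.py builds a DetInferencer internally.
from mmdet.apis import DetInferencer

# Passing only the weights mirrors the CLI usage above; the config is read
# from the checkpoint meta. A config path could be given via `model=` instead.
inferencer = DetInferencer(weights='glip_tiny_a_mmdet-b3654169.pth')
inferencer('demo/demo.jpg', texts='bench . car .', out_dir='outputs/')
```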
Evaluation
The GLIP implementation in MMDetection shows no performance degradation; our benchmark is as follows:
Model | Official mAP | MMDet mAP
---|---|---
glip_A_Swin_T_O365.yaml | 42.9 | 43.0
glip_Swin_T_O365.yaml | 44.9 | 44.9
glip_Swin_L.yaml | 51.4 | 51.3
Users can also run evaluation with the provided test script. Here is a basic example:
```shell
# 1 GPU
python tools/test.py configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py glip_tiny_a_mmdet-b3654169.pth

# 8 GPUs
./tools/dist_test.sh configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py glip_tiny_a_mmdet-b3654169.pth 8
```
The result will be similar to this:
```
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.428
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.594
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.466
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.300
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.477
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.534
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.634
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.634
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.634
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.473
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.690
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.789
```
XDecoder
Installation
```shell
# if installed from source
pip install -r requirements/multimodal.txt

# if installed as a wheel
mim install mmdet[multimodal]
```
How to use it?
For convenience, you can download the weights to the `mmdetection` root directory:
```shell
wget https://download.openmmlab.com/mmdetection/v3.0/xdecoder/xdecoder_focalt_last_novg.pt
wget https://download.openmmlab.com/mmdetection/v3.0/xdecoder/xdecoder_focalt_best_openseg.pt
```
The above two weights are copied directly from the official repository without any modification; the source is https://github.com/microsoft/X-Decoder
For convenience of demonstration, please download the `images` folder and place it in the root directory of mmdetection.
(1) Open Vocabulary Semantic Segmentation
```shell
cd projects/XDecoder
python demo.py ../../images/animals.png configs/xdecoder-tiny_zeroshot_open-vocab-semseg_coco.py --weights ../../xdecoder_focalt_last_novg.pt --texts zebra.giraffe
```
(2) Open Vocabulary Instance Segmentation
```shell
cd projects/XDecoder
python demo.py ../../images/owls.jpeg configs/xdecoder-tiny_zeroshot_open-vocab-instance_coco.py --weights ../../xdecoder_focalt_last_novg.pt --texts owl
```
(3) Open Vocabulary Panoptic Segmentation
```shell
cd projects/XDecoder
python demo.py ../../images/street.jpg configs/xdecoder-tiny_zeroshot_open-vocab-panoptic_coco.py --weights ../../xdecoder_focalt_last_novg.pt --text car.person --stuff-text tree.sky
```
(4) Referring Expression Segmentation
```shell
cd projects/XDecoder
python demo.py ../../images/fruit.jpg configs/xdecoder-tiny_zeroshot_open-vocab-ref-seg_refcocog.py --weights ../../xdecoder_focalt_last_novg.pt --text "The larger watermelon. The front white flower. White tea pot."
```
(5) Image Caption
```shell
cd projects/XDecoder
python demo.py ../../images/penguin.jpeg configs/xdecoder-tiny_zeroshot_caption_coco2014.py --weights ../../xdecoder_focalt_last_novg.pt
```
(6) Referring Expression Image Caption
```shell
cd projects/XDecoder
python demo.py ../../images/fruit.jpg configs/xdecoder-tiny_zeroshot_ref-caption.py --weights ../../xdecoder_focalt_last_novg.pt --text 'White tea pot'
```
(7) Text Image Region Retrieval
```shell
cd projects/XDecoder
python demo.py ../../images/coco configs/xdecoder-tiny_zeroshot_text-image-retrieval.py --weights ../../xdecoder_focalt_last_novg.pt --text 'pizza on the plate'
```
```
The image that best matches the given text is ../../images/coco/000.jpg and probability is 0.998
```
We have also prepared a Gradio program in the `projects/gradio_demo` directory, with which you can interactively run all of the inference tasks supported by MMDetection in your browser.
Models and results
Semantic segmentation on ADE20K
Prepare your dataset according to the docs.
Test Command
Since semantic segmentation is a pixel-level task, we don't need a threshold to filter out low-confidence predictions, so we set `model.test_cfg.use_thr_for_mc=False` in the test command.
```shell
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-semseg_ade20k.py xdecoder_focalt_best_openseg.pt 8 --cfg-options model.test_cfg.use_thr_for_mc=False
```
Model | mIoU | mIoU (official) | Config
---|---|---|---
xdecoder_focalt_best_openseg.pt | ... | ... | ...
MMDetection v3.0.0 Release
v3.0.0 (6/4/2023)
We have released the official version of MMDetection v3.0.0.
Highlights
- Support semi-automatic annotation based on Label-Studio (#10039)
- Support EfficientDet in projects (#9810)
Bug Fixes
- Fix benchmark script (#9865)
- Fix the crop method of PolygonMasks (#9858)
- Fix Albu augmentation with the mask shape (#9918)
- Fix `RTMDetIns` prior generator device error (#9964)
- Fix `img_shape` in data pipeline (#9966)
- Fix cityscapes import error (#9984)
- Fix `solov2_r50_fpn_ms-3x_coco.py` config error (#10030)
- Fix Conditional DETR AP and log (#9889)
- Fix accepting an unexpected argument local-rank in PyTorch 2.0 (#10050)
- Fix `common/ms_3x_coco-instance.py` config error (#10056)
- Fix compute flops error (#10051)
- Delete `data_root` in `CocoOccludedSeparatedMetric` to fix bug (#9969)
- Unify metafile.yml (#9849)
Improvements
- Added BoxInst r101 config (#9967)
- Added config migration guide (#9960)
- Added more social networking links (#10021)
- Added RTMDet config introduce (#10042)
- Added visualization docs (#9938, #10058)
- Refined data_prepare docs (#9935)
- Added support for setting the cache_size_limit parameter of dynamo in PyTorch 2.0 (#10054)
- Updated coco_metric.py (#10033)
- Updated type hints (#10040)
Contributors
A total of 19 developers contributed to this release.
Thanks @IRONICBo, @vansin, @RangeKing, @ghlerrix, @okotaku, @JosonChan1998, @zgzhengSEU, @bobo0810, @yechenzhi, @Zheng-LinXiao, @LYMDLUT, @yarkable, @xiejiajiannb, @chhluo, @BIGWangYuDong, @RangiLyu, @zwhus, @hhaAndroid, @ZwwWayne
MMDetection V2.28.2 Release
Bug Fixes
- Fix `WIDERFace SSD` loss NaN problem (#9734)
- Fix missing API documentation on Readthedocs (#9729)
- Fix the configuration file and log path of CenterNet (#9791)
Contributors
A total of 4 developers contributed to this release.
Thanks @co63oc, @Ginray, @vansin, @RangiLyu
Full Changelog: v2.28.1...v2.28.2
MMDetection V3.0.0rc6 Release
Highlights
- Support BoxInst, Objects365 Dataset, and Separated and Occluded COCO metric
- Support ConvNeXt-V2, DiffusionDet, and inference of EfficientDet and Detic in `Projects`
- Refactor DETR series and support Conditional-DETR, DAB-DETR, and DINO
- Support `DetInferencer` for inference, Test Time Augmentation, and automatically importing modules from registry
- Support RTMDet-Ins ONNXRuntime and TensorRT deployment
- Support calculating FLOPs of detectors
New Features
- Support BoxInst (#9525)
- Support Objects365 Dataset (#9600)
- Support ConvNeXt-V2 in `Projects` (#9619)
- Support DiffusionDet in `Projects` (#9639, #9768)
- Support Detic inference in `Projects` (#9645)
- Support EfficientDet inference in `Projects` (#9645)
- Support Separated and Occluded COCO metric (#9710)
- Support auto import modules from registry (#9143)
- Refactor DETR series and support Conditional-DETR, DAB-DETR and DINO (#9646)
- Support `DetInferencer` for inference (#9561)
- Support Test Time Augmentation (#9452)
- Support calculating FLOPs of detectors (#9777); see the sketch after this list
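For the FLOPs item above, a minimal sketch of the underlying utility: MMDetection's `tools/analysis_tools/get_flops.py` builds on MMEngine's analysis module. The torchvision model here is only a stand-in so the snippet runs without a detector config.

```python
# Hedged sketch: count FLOPs/params with MMEngine's analysis utility.
# A torchvision ResNet-50 stands in for a detector purely for illustration.
from mmengine.analysis import get_model_complexity_info
from torchvision.models import resnet50

model = resnet50()
info = get_model_complexity_info(model, input_shape=(3, 224, 224))
print(info['flops_str'], info['params_str'])  # human-readable FLOPs and params
```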
Bug Fixes
- Fix deprecating old type alias due to new version of numpy (#9625, #9537)
- Fix VOC metrics (#9784)
- Fix the wrong link of RTMDet-x log (#9549)
- Fix RTMDet link in README (#9575)
- Fix MMDet get flops error (#9589)
- Fix `use_depthwise` in RTMDet (#9624)
- Fix `albumentations` augmentation post process with masks (#9551)
- Fix DETR series unit test (#9647)
- Fix `LoadPanopticAnnotations` bug (#9703)
- Fix `isort` CI (#9680)
- Fix amp pooling overflow (#9670)
- Fix docstring about noise in DINO (#9747)
- Fix potential bug in `MultiImageMixDataset` (#9764)
Improvements
- Replace NumPy transpose with PyTorch permute to speed up (#9762)
- Deprecate `sklearn` (#9725)
- Add RTMDet-Ins deployment guide (#9823)
- Update RTMDet config and README (#9603)
- Replace the models used in the tutorial document with RTMDet (#9843)
- Adjust the minimum supported Python version to 3.7 (#9602)
- Support modifying palette through configuration (#9445)
- Update README document in `Project` (#9599)
- Replace `github` with `gitee` in `.pre-commit-config-zh-cn.yaml` file (#9586)
- Use official `isort` in `.pre-commit-config.yaml` file (#9701)
- Change MMCV minimum version to `2.0.0rc4` for `dev-3.x` (#9695)
- Add Chinese version of single_stage_as_rpn.md and test_results_submission.md (#9434)
- Add OpenDataLab download link (#9605, #9738)
- Add type hints of several layers (#9346)
- Add type hint for `DarknetBottleneck` (#9591)
- Add dockerfile that is easier to use in China (#9659)
- Add twitter, discord, medium, and youtube links (#9775)
- Prepare for merging refactor-detr (#9656)
- Add metafile to ConditionalDETR, DABDETR and DINO (#9715)
- Support modifying `non_blocking` parameters (#9723)
- Comment repeater visualizer register (#9740)
- Update user guides: `finetune.md` and `inference.md` (#9578)
New Contributors
- @NoFish-528 made their first contribution in #9346
- @137208 made their first contribution in #9434
- @lyviva made their first contribution in #9625
- @zwhus made their first contribution in #9589
- @zylo117 made their first contribution in #9670
- @chg0901 made their first contribution in #9740
- @DanShouzhu made their first contribution in #9578
Contributors
A total of 27 developers contributed to this release.
Thanks @JosonChan1998, @RangeKing, @NoFish-528, @likyoo, @Xiangxu-0103, @137208, @PeterH0323, @tianleiSHI, @wufan-tb, @lyviva, @zwhus, @jshilong, @Li-Qingyun, @sanbuphy, @zylo117, @triple-Mu, @KeiChiTse, @LYMDLUT, @nijkah, @chg0901, @DanShouzhu, @zytx121, @vansin, @BIGWangYuDong, @hhaAndroid, @RangiLyu, @ZwwWayne
Full Changelog: v3.0.0rc5...v3.0.0rc6
MMDetection V2.28.1 Release
Bug Fixes
- Enable setting a float `mlp_ratio` in SwinTransformer (#8670)
- Fix import error that causes training failure (#9694)
- Fix isort version in lint (#9685)
- Fix init_cfg of YOLOF (#8243)
Contributors
A total of 4 developers contributed to this release.
Thanks @triple-Mu, @i-aki-y, @twmht, @RangiLyu
Full Changelog: v2.28.0...v2.28.1
MMDetection V2.28.0 Release
Highlights
- Support Objects365 Dataset and Separated and Occluded COCO metric
- Support acceleration of RetinaNet and SSD on Ascend
- Deprecate support for Python 3.6
New Features and Improvements
- Support Objects365 Dataset (#7525)
- Support Separated and Occluded COCO metric (#9574)
- Support acceleration of RetinaNet and SSD on Ascend with documentation (#9648, #9614)
- Added missing `-` to `--format-only` in documentation
Deprecations
- Upgrade the minimum Python version to 3.7; support for Python 3.6 is no longer guaranteed (#9604)
Bug Fixes
- Fix validation loss logging (#9663)
- Fix inconsistent float precision between mmdet and mmcv (#9570)
- Fix argument name for fp32 in `DeformableDETRHead` (#9607)
- Fix typos in config file paths in Metafile.yml (#9627)
Contributors
A total of 11 developers contributed to this release.
Thanks @eantono, @akstt, @lpizzinidev, @RangiLyu, @kbumsik, @tianleiSHI, @nijkah, @BIGWangYuDong, @wangjiangben-hw, @jamiechoi1995, @ZwwWayne
New Contributors
- @kbumsik made their first contribution in #9627
- @akstt made their first contribution in #9614
- @lpizzinidev made their first contribution in #9649
- @eantono made their first contribution in #9663
Full Changelog: v2.27.0...v2.28.0
MMDetection V2.27.0 Release
Highlights
- Support receptive field search of CNN models (TPAMI 2022: RF-Next) (#8191)
Bug Fixes
- Fix deadlock issue related with MMDetWandbHook (#9476)
Improvements
- Add minimum GitHub token permissions for workflows (#8928)
- Delete compatible code for parrots in roi extractor (#9503)
- Deprecate np.bool Type Alias (#9498)
- Replace numpy transpose with torch permute to speed-up data pre-processing (#9533)
Documents
- Fix typo in docs/zh_cn/tutorials/config.md (#9416)
- Fix Faster RCNN FP16 config link in README (#9366)
Contributors
A total of 12 developers contributed to this release.
Thanks @Min-Sheng, @gasvn, @lzyhha, @jbwang1997, @zachcoleman, @chenyuwang814, @MilkClouds, @Fizzez, @boahc077, @apatsekin, @zytx121, @DonggeunYu
New Contributors
- @DonggeunYu made their first contribution in #9416
- @apatsekin made their first contribution in #9366
- @boahc077 made their first contribution in #8928
- @MilkClouds made their first contribution in #9476
- @chenyuwang814 made their first contribution in #9503
- @zachcoleman made their first contribution in #9498
- @Min-Sheng made their first contribution in #9533
Full Changelog: v2.26.0...v2.27.0
MMDetection V3.0.0rc5 Release
Highlights
- Support RTMDet instance segmentation models. The technical report of RTMDet is available on arXiv.
- Support SSHContextModule in paper SSH: Single Stage Headless Face Detector.
New Features
- Support RTMDet instance segmentation models and improve RTMDet test config (#9494)
- Support SSHContextModule in paper SSH: Single Stage Headless Face Detector (#8953)
- Release CondInst pre-trained model (#9406)
Bug Fixes
- Fix CondInst predict error when `batch_size` is greater than 1 in inference (#9400)
- Fix the bug of visualization when the dtype of the pipeline output image is not uint8 in browse dataset (#9401)
- Fix `analyze_logs.py` to plot mAP and calculate train time correctly (#9409)
- Fix backward inplace error with `PAFPN` (#9450)
- Fix config import links in model converters (#9441)
- Fix `DeformableDETRHead` object has no attribute `loss_single` (#9477)
- Fix the logic of pseudo bboxes predicted by teacher model in SemiBaseDetector (#9414)
- Fix demo API in instance segmentation tutorial (#9226)
- Fix `analyze_results` (#9380)
- Fix the error that Readthedocs API cannot be displayed (#9510)
Improvements
- Remove legacy `builder.py` (#9479)
- Make sure the pipeline argument shape is in `(width, height)` order (#9324)
- Add `.pre-commit-config-zh-cn.yaml` file (#9388)
- Refactor dataset metainfo to lowercase (#9469)
- Add PyTorch 1.13 checking in CI (#9478)
- Adjust `FocalLoss` and `QualityFocalLoss` to allow different kinds of targets (#9481)
- Refactor `setup.cfg` (#9370)
- Clip saturation value to valid range `[0, 1]` (#9391)
- Only keep meta and state_dict when publishing model (#9356)
- Add segm evaluator in ms-poly_3x_coco_instance config (#9524)
- Update deployment guide (#9527)
- Update zh_cn `faq.md` (#9396)
- Update `get_started` (#9480)
- Update the zh_cn user_guides of `useful_tools.md` and `useful_hooks.md` (#9453)
- Add type hints for `bfp` and `channel_mapper` (#9410)
- Add type hints of several losses (#9397)
- Add type hints and update docstring for task modules (#9468)
Contributors
A total of 20 developers contributed to this release.
Thanks @liuyanyi, @RangeKing, @lihua199710, @MambaWong, @sanbuphy, @Xiangxu-0103, @twmht, @JunyaoHu, @Chan-Sun, @tianleiSHI, @zytx121, @kitecats, @QJC123654, @JosonChan1998, @lvhan028, @Czm369, @BIGWangYuDong, @RangiLyu, @hhaAndroid, @ZwwWayne
New Contributors
- @lihua199710 made their first contribution in #9388
- @twmht made their first contribution in #9450
- @tianleiSHI made their first contribution in #9453
- @kitecats made their first contribution in #9481
- @QJC123654 made their first contribution in #9468
Full Changelog: v3.0.0rc4...v3.0.0rc5