This repository is a paper digest of recent advances in collaborative / cooperative / multi-agent perception for V2I / V2V / V2X autonomous driving scenario.
-
Updated
Jun 2, 2024
This repository is a paper digest of recent advances in collaborative / cooperative / multi-agent perception for V2I / V2V / V2X autonomous driving scenario.
[ICLR 2022] Linking Emergent and Natural Languages via Corpus Transfer
A curated list of recent monocular depth estimation papers
[MedIA] Accompanying paper list and source code for survey "A comprehensive survey on deep active learning in medical image analysis"
Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024)
[ICLR 2024] Official implementation of Spiking Graph Contrastive Learning (0️⃣1️⃣ SpikeGCL)
[ICLR 2024] Beyond Accuracy: Evaluating Self-Consistency of Code Large Language Models with IdentityChain
⏰ Collaboratively track deadlines of conferences recommended by CCF (Website, Python Cli, Wechat Applet) / If you find it useful, please star this project, thanks~
[ICLR 2023 Oral] Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model
[ICLR 2024] FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices Using a Computing Power-Aware Scheduler
Smart Meter Data Analytics Tutorial @ 11th International Conference on Learning Representations (ICLR 2023)
[ICLR 2024] Continuous-Multiple Image Outpainting in One-Step via Positional Query and A Diffusion-based Approach Link: https://arxiv.org/abs/2401.15652
深度学习近年来关于神经网络模型解释性的相关高引用/顶会论文(附带代码)
[ICLR 2024] Dynamic Neural Response Tuning
This repository contains all the papers accepted in top conference of computer vision, with convenience to search related papers.
It is a comprehensive resource hub compiling all graph papers accepted at the International Conference on Learning Representations (ICLR) in 2024.
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
For deep RL and the future of AI.
①[ICLR2024 Spotlight] (GPT-4V/Gemini-Pro/Qwen-VL-Plus+16 OS MLLMs) A benchmark for multi-modality LLMs (MLLMs) on low-level vision and visual quality assessment.
Add a description, image, and links to the iclr topic page so that developers can more easily learn about it.
To associate your repository with the iclr topic, visit your repo's landing page and select "manage topics."