Dataset columns (type and observed value range):
  paper_url             string         length 35 to 81
  arxiv_id              string         length 6 to 35
  nips_id               float64
  openreview_id         string         length 9 to 93
  title                 string         length 1 to 1.02k
  abstract              string         length 0 to 56.5k
  short_abstract        string         length 0 to 1.95k
  url_abs               string         length 16 to 996
  url_pdf               string         length 16 to 996
  proceeding            string         length 7 to 1.03k
  authors               list           length 0 to 3.31k
  tasks                 list           length 0 to 147
  date                  timestamp[ns]  1951-09-01 00:00:00 to 2222-12-22 00:00:00
  conference_url_abs    string         length 16 to 199
  conference_url_pdf    string         length 21 to 200
  conference            string         length 2 to 47
  reproduces_paper      string         22 classes
  methods               list           length 0 to 7.5k
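The records below follow this schema. As a minimal sketch of how such an export might be inspected (assuming the records are saved as a JSON Lines file named papers.jsonl; the filename and export format are illustrative assumptions, not part of the dataset itself):

```python
import pandas as pd

# Load the exported records; each line holds one paper with the columns listed above.
df = pd.read_json("papers.jsonl", lines=True)  # assumed export path

# Normalize the timestamp column and keep papers from a single day.
df["date"] = pd.to_datetime(df["date"])
day = df[df["date"] == "2025-06-26"]

# Filter by task label, e.g. rows whose `tasks` list contains "Semantic Segmentation".
seg = day[day["tasks"].apply(lambda tasks: "Semantic Segmentation" in (tasks or []))]

# Show a few identifying fields.
print(seg[["arxiv_id", "title", "url_pdf"]].to_string(index=False))
```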
https://paperswithcode.com/paper/omnieval-a-benchmark-for-evaluating-omni
2506.20960
null
null
OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs
In this paper, we introduce OmniEval, a benchmark for evaluating omni-modality models like MiniCPM-O 2.6, which encompasses visual, auditory, and textual inputs. Compared with existing benchmarks, our OmniEval has several distinctive features: (i) Full-modal collaboration: We design evaluation tasks that highlight the strong coupling between audio and video, requiring models to effectively leverage the collaborative perception of all modalities; (ii) Diversity of videos: OmniEval includes 810 audio-visual synchronized videos, comprising 285 Chinese videos and 525 English videos; (iii) Diversity and granularity of tasks: OmniEval contains 2617 question-answer pairs, comprising 1412 open-ended questions and 1205 multiple-choice questions. These questions are divided into 3 major task types and 12 sub-task types to achieve comprehensive evaluation. Among them, we introduce a more granular video localization task named Grounding. Then we conduct experiments on OmniEval with several omni-modality models. We hope that our OmniEval can provide a platform for evaluating the ability to construct and understand coherence from the context of all modalities. Code and data can be found at https://omnieval.github.io/.
null
https://arxiv.org/abs/2506.20960v1
https://arxiv.org/pdf/2506.20960v1.pdf
null
[ "Yiman Zhang", "Ziheng Luo", "Qiangyu Yan", "wei he", "Borui Jiang", "Xinghao Chen", "Kai Han" ]
[ "Diversity", "Multiple-choice" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/evidence-based-diagnostic-reasoning-with
2506.20964
null
null
Evidence-based diagnostic reasoning with multi-agent copilot for human pathology
Pathology is experiencing rapid digital transformation driven by whole-slide imaging and artificial intelligence (AI). While deep learning-based computational pathology has achieved notable success, traditional models primarily focus on image analysis without integrating natural language instruction or rich, text-based context. Current multimodal large language models (MLLMs) in computational pathology face limitations, including insufficient training data, inadequate support and evaluation for multi-image understanding, and a lack of autonomous, diagnostic reasoning capabilities. To address these limitations, we introduce PathChat+, a new MLLM specifically designed for human pathology, trained on over 1 million diverse, pathology-specific instruction samples and nearly 5.5 million question answer turns. Extensive evaluations across diverse pathology benchmarks demonstrated that PathChat+ substantially outperforms the prior PathChat copilot, as well as both state-of-the-art (SOTA) general-purpose and other pathology-specific models. Furthermore, we present SlideSeek, a reasoning-enabled multi-agent AI system leveraging PathChat+ to autonomously evaluate gigapixel whole-slide images (WSIs) through iterative, hierarchical diagnostic reasoning, reaching high accuracy on DDxBench, a challenging open-ended differential diagnosis benchmark, while also capable of generating visually grounded, humanly-interpretable summary reports.
null
https://arxiv.org/abs/2506.20964v1
https://arxiv.org/pdf/2506.20964v1.pdf
null
[ "Chengkuan Chen", "Luca L. Weishaupt", "Drew F. K. Williamson", "Richard J. Chen", "Tong Ding", "Bowen Chen", "Anurag Vaidya", "Long Phi Le", "Guillaume Jaume", "Ming Y. Lu", "Faisal Mahmood" ]
[ "Diagnostic", "whole slide images" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/dfvedit-conditional-delta-flow-vector-for
2506.20967
null
null
DFVEdit: Conditional Delta Flow Vector for Zero-shot Video Editing
The advent of Video Diffusion Transformers (Video DiTs) marks a milestone in video generation. However, directly applying existing video editing methods to Video DiTs often incurs substantial computational overhead, due to resource-intensive attention modification or finetuning. To alleviate this problem, we present DFVEdit, an efficient zero-shot video editing method tailored for Video DiTs. DFVEdit eliminates the need for both attention modification and fine-tuning by directly operating on clean latents via flow transformation. To be more specific, we observe that editing and sampling can be unified under the continuous flow perspective. Building upon this foundation, we propose the Conditional Delta Flow Vector (CDFV) -- a theoretically unbiased estimation of DFV -- and integrate Implicit Cross Attention (ICA) guidance as well as Embedding Reinforcement (ER) to further enhance editing quality. DFVEdit excels in practical efficiency, offering at least 20x inference speed-up and 85\% memory reduction on Video DiTs compared to attention-engineering-based editing methods. Extensive quantitative and qualitative experiments demonstrate that DFVEdit can be seamlessly applied to popular Video DiTs (e.g., CogVideoX and Wan2.1), attaining state-of-the-art performance on structural fidelity, spatial-temporal consistency, and editing quality.
null
https://arxiv.org/abs/2506.20967v1
https://arxiv.org/pdf/2506.20967v1.pdf
null
[ "Lingling Cai", "Kang Zhao", "Hangjie Yuan", "Xiang Wang", "Yingya Zhang", "Kejie Huang" ]
[ "Video Editing", "Video Generation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/from-cradle-to-cane-a-two-pass-framework-for
2506.20977
null
null
From Cradle to Cane: A Two-Pass Framework for High-Fidelity Lifespan Face Aging
Face aging has become a crucial task in computer vision, with applications ranging from entertainment to healthcare. However, existing methods struggle with achieving a realistic and seamless transformation across the entire lifespan, especially when handling large age gaps or extreme head poses. The core challenge lies in balancing age accuracy and identity preservation--what we refer to as the Age-ID trade-off. Most prior methods either prioritize age transformation at the expense of identity consistency or vice versa. In this work, we address this issue by proposing a two-pass face aging framework, named Cradle2Cane, based on few-step text-to-image (T2I) diffusion models. The first pass focuses on solving age accuracy by introducing an adaptive noise injection (AdaNI) mechanism. This mechanism is guided by including prompt descriptions of age and gender for the given person as the textual condition. Also, by adjusting the noise level, we can control the strength of aging while allowing more flexibility in transforming the face. However, identity preservation is weakly ensured here to facilitate stronger age transformations. In the second pass, we enhance identity preservation while maintaining age-specific features by conditioning the model on two identity-aware embeddings (IDEmb): SVR-ArcFace and Rotate-CLIP. This pass allows for denoising the transformed image from the first pass, ensuring stronger identity preservation without compromising the aging accuracy. Both passes are jointly trained in an end-to-end way. Extensive experiments on the CelebA-HQ test dataset, evaluated through Face++ and Qwen-VL protocols, show that our Cradle2Cane outperforms existing face aging methods in age accuracy and identity consistency.
null
https://arxiv.org/abs/2506.20977v1
https://arxiv.org/pdf/2506.20977v1.pdf
null
[ "Tao Liu", "Dafeng Zhang", "Gengchen Li", "Shizhuo Liu", "Yongqi Song", "Senmao Li", "Shiqi Yang", "Boqian Li", "Kai Wang", "Yaxing Wang" ]
[ "Denoising" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/3d-scene-camera-representation-with-joint
2506.20979
null
null
3D Scene-Camera Representation with Joint Camera Photometric Optimization
Representing scenes from multi-view images is a crucial task in computer vision with extensive applications. However, inherent photometric distortions in the camera imaging can significantly degrade image quality. Without accounting for these distortions, the 3D scene representation may inadvertently incorporate erroneous information unrelated to the scene, diminishing the quality of the representation. In this paper, we propose a novel 3D scene-camera representation with joint camera photometric optimization. By introducing internal and external photometric model, we propose a full photometric model and corresponding camera representation. Based on simultaneously optimizing the parameters of the camera representation, the proposed method effectively separates scene-unrelated information from the 3D scene representation. Additionally, during the optimization of the photometric parameters, we introduce a depth regularization to prevent the 3D scene representation from fitting scene-unrelated information. By incorporating the camera model as part of the mapping process, the proposed method constructs a complete map that includes both the scene radiance field and the camera photometric model. Experimental results demonstrate that the proposed method can achieve high-quality 3D scene representations, even under conditions of imaging degradation, such as vignetting and dirt.
null
https://arxiv.org/abs/2506.20979v1
https://arxiv.org/pdf/2506.20979v1.pdf
null
[ "Weichen Dai", "Kangcheng Ma", "Jiaxin Wang", "Kecen Pan", "Yuhang Ming", "Hua Zhang", "Wanzeng Kong" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/rethink-sparse-signals-for-pose-guided-text
2506.20983
null
null
Rethink Sparse Signals for Pose-guided Text-to-image Generation
Recent works favored dense signals (e.g., depth, DensePose), as an alternative to sparse signals (e.g., OpenPose), to provide detailed spatial guidance for pose-guided text-to-image generation. However, dense representations raised new challenges, including editing difficulties and potential inconsistencies with textual prompts. This fact motivates us to revisit sparse signals for pose guidance, owing to their simplicity and shape-agnostic nature, which remains underexplored. This paper proposes a novel Spatial-Pose ControlNet (SP-Ctrl), equipping sparse signals with robust controllability for pose-guided image generation. Specifically, we extend OpenPose to a learnable spatial representation, making keypoint embeddings discriminative and expressive. Additionally, we introduce keypoint concept learning, which encourages keypoint tokens to attend to the spatial positions of each keypoint, thus improving pose alignment. Experiments on animal- and human-centric image generation tasks demonstrate that our method outperforms recent spatially controllable T2I generation approaches under sparse-pose guidance and even matches the performance of dense signal-based methods. Moreover, SP-Ctrl shows promising capabilities in diverse and cross-species generation through sparse signals. Codes will be available at https://github.com/DREAMXFAR/SP-Ctrl.
This fact motivates us to revisit sparse signals for pose guidance, owing to their simplicity and shape-agnostic nature, which remains underexplored.
https://arxiv.org/abs/2506.20983v1
https://arxiv.org/pdf/2506.20983v1.pdf
null
[ "Wenjie Xuan", "Jing Zhang", "Juhua Liu", "Bo Du", "DaCheng Tao" ]
[ "Image Generation", "Pose-Guided Image Generation", "Text to Image Generation", "Text-to-Image Generation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "OpenPose", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Pose Estimation Models", "parent": null }, "name": "OpenPose", "source_title": "OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields", "source_url": "https://arxiv.org/abs/1812.08008v2" } ]
https://paperswithcode.com/paper/eva-mixture-of-experts-semantic-variant
2506.20986
null
null
EVA: Mixture-of-Experts Semantic Variant Alignment for Compositional Zero-Shot Learning
Compositional Zero-Shot Learning (CZSL) investigates compositional generalization capacity to recognize unknown state-object pairs based on learned primitive concepts. Existing CZSL methods typically derive primitive features through a simple composition-prototype mapping, which is suboptimal for a set of individuals that can be divided into distinct semantic subsets. Moreover, the all-to-one cross-modal primitives matching neglects compositional divergence within identical states or objects, limiting fine-grained image-composition alignment. In this study, we propose EVA, a Mixture-of-Experts Semantic Variant Alignment framework for CZSL. Specifically, we introduce domain-expert adaption, leveraging multiple experts to achieve token-aware learning and model high-quality primitive representations. To enable accurate compositional generalization, we further present semantic variant alignment to select semantically relevant representation for image-primitives matching. Our method significantly outperforms other state-of-the-art CZSL methods on three popular benchmarks in both closed- and open-world settings, demonstrating the efficacy of the proposed insight.
null
https://arxiv.org/abs/2506.20986v1
https://arxiv.org/pdf/2506.20986v1.pdf
null
[ "Xiao Zhang", "Yongqiang Ma", "Haodong Jing", "Nanning Zheng" ]
[ "Compositional Zero-Shot Learning", "Mixture-of-Experts", "Zero-Shot Learning" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/segment-anything-in-pathology-images-with
2506.20988
null
null
Segment Anything in Pathology Images with Natural Language
Pathology image segmentation is crucial in computational pathology for analyzing histological features relevant to cancer diagnosis and prognosis. However, current methods face major challenges in clinical applications due to limited annotated data and restricted category definitions. To address these limitations, we propose PathSegmentor, the first text-prompted segmentation foundation model designed specifically for pathology images. We also introduce PathSeg, the largest and most comprehensive dataset for pathology segmentation, built from 17 public sources and containing 275k image-mask-label triples across 160 diverse categories. With PathSegmentor, users can perform semantic segmentation using natural language prompts, eliminating the need for laborious spatial inputs such as points or boxes. Extensive experiments demonstrate that PathSegmentor outperforms specialized models with higher accuracy and broader applicability, while maintaining a compact architecture. It significantly surpasses existing spatial- and text-prompted models by 0.145 and 0.429 in overall Dice scores, respectively, showing strong robustness in segmenting complex structures and generalizing to external datasets. Moreover, PathSegmentor's outputs enhance the interpretability of diagnostic models through feature importance estimation and imaging biomarker discovery, offering pathologists evidence-based support for clinical decision-making. This work advances the development of explainable AI in precision oncology.
null
https://arxiv.org/abs/2506.20988v1
https://arxiv.org/pdf/2506.20988v1.pdf
null
[ "Zhixuan Chen", "Junlin Hou", "Liqi Lin", "Yihui Wang", "Yequan Bie", "Xi Wang", "Yanning Zhou", "Ronald Cheong Kin Chan", "Hao Chen" ]
[ "Diagnostic", "Feature Importance", "Image Segmentation", "Prognosis", "Segmentation", "Semantic Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/tsdaseg-a-two-stage-model-with-direct
2506.20991
null
null
TSDASeg: A Two-Stage Model with Direct Alignment for Interactive Point Cloud Segmentation
The rapid advancement of 3D vision-language models (VLMs) has spurred significant interest in interactive point cloud processing tasks, particularly for real-world applications. However, existing methods often underperform in point-level tasks, such as segmentation, due to missing direct 3D-text alignment, limiting their ability to link local 3D features with textual context. To solve this problem, we propose TSDASeg, a Two-Stage model coupled with a Direct cross-modal Alignment module and memory module for interactive point cloud Segmentation. We introduce the direct cross-modal alignment module to establish explicit alignment between 3D point clouds and textual/2D image data. Within the memory module, we employ multiple dedicated memory banks to separately store text features, visual features, and their cross-modal correspondence mappings. These memory banks are dynamically leveraged through self-attention and cross-attention mechanisms to update scene-specific features based on prior stored data, effectively addressing inconsistencies in interactive segmentation results across diverse scenarios. Experiments conducted on multiple 3D instruction, reference, and semantic segmentation datasets demonstrate that the proposed method achieves state-of-the-art performance.
null
https://arxiv.org/abs/2506.20991v1
https://arxiv.org/pdf/2506.20991v1.pdf
null
[ "Chade Li", "Pengju Zhang", "Yihong Wu" ]
[ "cross-modal alignment", "Interactive Segmentation", "Point Cloud Segmentation", "Segmentation", "Semantic Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dbmovi-gs-dynamic-view-synthesis-from-blurry
2506.20998
null
null
DBMovi-GS: Dynamic View Synthesis from Blurry Monocular Video via Sparse-Controlled Gaussian Splatting
Novel view synthesis is a task of generating scenes from unseen perspectives; however, synthesizing dynamic scenes from blurry monocular videos remains an unresolved challenge that has yet to be effectively addressed. Existing novel view synthesis methods are often constrained by their reliance on high-resolution images or strong assumptions about static geometry and rigid scene priors. Consequently, their approaches lack robustness in real-world environments with dynamic object and camera motion, leading to instability and degraded visual fidelity. To address this, we propose Motion-aware Dynamic View Synthesis from Blurry Monocular Video via Sparse-Controlled Gaussian Splatting (DBMovi-GS), a method designed for dynamic view synthesis from blurry monocular videos. Our model generates dense 3D Gaussians, restoring sharpness from blurry videos and reconstructing detailed 3D geometry of the scene affected by dynamic motion variations. Our model achieves robust performance in novel view synthesis under dynamic blurry scenes and sets a new benchmark in realistic novel view synthesis for blurry monocular video inputs.
null
https://arxiv.org/abs/2506.20998v1
https://arxiv.org/pdf/2506.20998v1.pdf
null
[ "Yeon-Ji Song", "Jaein Kim", "Byung-Ju Kim", "Byoung-Tak Zhang" ]
[ "3D geometry", "Novel View Synthesis" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/style-aligned-image-composition-for-robust
2506.21001
null
null
Style-Aligned Image Composition for Robust Detection of Abnormal Cells in Cytopathology
Challenges such as the lack of high-quality annotations, long-tailed data distributions, and inconsistent staining styles pose significant obstacles to training neural networks to detect abnormal cells in cytopathology robustly. This paper proposes a style-aligned image composition (SAIC) method that composes high-fidelity and style-preserved pathological images to enhance the effectiveness and robustness of detection models. Without additional training, SAIC first selects an appropriate candidate from the abnormal cell bank based on attribute guidance. Then, it employs a high-frequency feature reconstruction to achieve a style-aligned and high-fidelity composition of abnormal cells and pathological backgrounds. Finally, it introduces a large vision-language model to filter high-quality synthesis images. Experimental results demonstrate that incorporating SAIC-synthesized images effectively enhances the performance and robustness of abnormal cell detection for tail categories and styles, thereby improving overall detection performance. The comprehensive quality evaluation further confirms the generalizability and practicality of SAIC in clinical application scenarios. Our code will be released at https://github.com/Joey-Qi/SAIC.
This paper proposes a style-aligned image composition (SAIC) method that composes high-fidelity and style-preserved pathological images to enhance the effectiveness and robustness of detection models.
https://arxiv.org/abs/2506.21001v1
https://arxiv.org/pdf/2506.21001v1.pdf
null
[ "Qiuyi Qi", "Xin Li", "Ming Kong", "Zikang Xu", "Bingdi Chen", "Qiang Zhu", "S Kevin Zhou" ]
[ "Attribute", "Cell Detection" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/inverse-scene-text-removal
2506.21002
null
null
Inverse Scene Text Removal
Scene text removal (STR) aims to erase textual elements from images. It was originally intended for removing privacy-sensitive or undesired texts from natural scene images, but is now also applied to typographic images. STR typically detects text regions and then inpaints them. Although STR has advanced through neural networks and synthetic data, misuse risks have increased. This paper investigates Inverse STR (ISTR), which analyzes STR-processed images and focuses on binary classification (detecting whether an image has undergone STR) and localizing removed text regions. We demonstrate in experiments that these tasks are achievable with high accuracies, enabling detection of potential misuse and improving STR. We also attempt to recover the removed text content by training a text recognizer to understand its difficulty.
Scene text removal (STR) aims to erase textual elements from images.
https://arxiv.org/abs/2506.21002v1
https://arxiv.org/pdf/2506.21002v1.pdf
null
[ "Takumi Yoshimatsu", "Shumpei Takezaki", "Seiichi Uchida" ]
[ "Binary Classification" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/visionguard-synergistic-framework-for-helmet
2506.21005
null
null
VisionGuard: Synergistic Framework for Helmet Violation Detection
Enforcing helmet regulations among motorcyclists is essential for enhancing road safety and ensuring the effectiveness of traffic management systems. However, automatic detection of helmet violations faces significant challenges due to environmental variability, camera angles, and inconsistencies in the data. These factors hinder reliable detection of motorcycles and riders and disrupt consistent object classification. To address these challenges, we propose VisionGuard, a synergistic multi-stage framework designed to overcome the limitations of frame-wise detectors, especially in scenarios with class imbalance and inconsistent annotations. VisionGuard integrates two key components: Adaptive Labeling and Contextual Expander modules. The Adaptive Labeling module is a tracking-based refinement technique that enhances classification consistency by leveraging a tracking algorithm to assign persistent labels across frames and correct misclassifications. The Contextual Expander module improves recall for underrepresented classes by generating virtual bounding boxes with appropriate confidence scores, effectively addressing the impact of data imbalance. Experimental results show that VisionGuard improves overall mAP by 3.1% compared to baseline detectors, demonstrating its effectiveness and potential for real-world deployment in traffic surveillance systems, ultimately promoting safety and regulatory compliance.
null
https://arxiv.org/abs/2506.21005v1
https://arxiv.org/pdf/2506.21005v1.pdf
null
[ "Lam-Huy Nguyen", "Thinh-Phuc Nguyen", "Thanh-Hai Nguyen", "Gia-Huy Dinh", "Minh-Triet Tran", "Trung-Nghia Le" ]
[ "Classification Consistency" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/detection-of-breast-cancer-lumpectomy-margin
2506.21006
null
null
Detection of Breast Cancer Lumpectomy Margin with SAM-incorporated Forward-Forward Contrastive Learning
Complete removal of cancer tumors with a negative specimen margin during lumpectomy is essential in reducing breast cancer recurrence. However, 2D specimen radiography (SR), the current method used to assess intraoperative specimen margin status, has limited accuracy, resulting in nearly a quarter of patients requiring additional surgery. To address this, we propose a novel deep learning framework combining the Segment Anything Model (SAM) with Forward-Forward Contrastive Learning (FFCL), a pre-training strategy leveraging both local and global contrastive learning for patch-level classification of SR images. After annotating SR images with regions of known malignancy, non-malignant tissue, and pathology-confirmed margins, we pre-train a ResNet-18 backbone with FFCL to classify margin status, then reconstruct coarse binary masks to prompt SAM for refined tumor margin segmentation. Our approach achieved an AUC of 0.8455 for margin classification and segmented margins with a 27.4% improvement in Dice similarity over baseline models, while reducing inference time to 47 milliseconds per image. These results demonstrate that FFCL-SAM significantly enhances both the speed and accuracy of intraoperative margin assessment, with strong potential to reduce re-excision rates and improve surgical outcomes in breast cancer treatment. Our code is available at https://github.com/tbwa233/FFCL-SAM/.
Complete removal of cancer tumors with a negative specimen margin during lumpectomy is essential in reducing breast cancer recurrence.
https://arxiv.org/abs/2506.21006v1
https://arxiv.org/pdf/2506.21006v1.pdf
null
[ "Tyler Ward", "Xiaoqin Wang", "Braxton McFarland", "Md Atik Ahamed", "Sahar Nozad", "Talal Arshad", "Hafsa Nebbache", "Jin Chen", "Abdullah Imran" ]
[ "Contrastive Learning" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": null, "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Contrastive Learning", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Segment Anything Model", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Segmentation Models", "parent": null }, "name": "SAM", "source_title": "Segment Anything", "source_url": "https://arxiv.org/abs/2304.02643v1" }, { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/the-aging-multiverse-generating-condition
2506.21008
null
null
The Aging Multiverse: Generating Condition-Aware Facial Aging Tree via Training-Free Diffusion
We introduce the Aging Multiverse, a framework for generating multiple plausible facial aging trajectories from a single image, each conditioned on external factors such as environment, health, and lifestyle. Unlike prior methods that model aging as a single deterministic path, our approach creates an aging tree that visualizes diverse futures. To enable this, we propose a training-free diffusion-based method that balances identity preservation, age accuracy, and condition control. Our key contributions include attention mixing to modulate editing strength and a Simulated Aging Regularization strategy to stabilize edits. Extensive experiments and user studies demonstrate state-of-the-art performance across identity preservation, aging realism, and conditional alignment, outperforming existing editing and age-progression models, which often fail to account for one or more of the editing criteria. By transforming aging into a multi-dimensional, controllable, and interpretable process, our approach opens up new creative and practical avenues in digital storytelling, health education, and personalized visualization.
null
https://arxiv.org/abs/2506.21008v1
https://arxiv.org/pdf/2506.21008v1.pdf
null
[ "Bang Gong", "Luchao Qi", "Jiaye Wu", "Zhicheng Fu", "Chunbo Song", "David W. Jacobs", "John Nicholson", "Roni Sengupta" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/user-in-the-loop-view-sampling-with-error
2506.21009
null
null
User-in-the-Loop View Sampling with Error Peaking Visualization
Augmented reality (AR) provides ways to visualize missing view samples for novel view synthesis. Existing approaches present 3D annotations for new view samples and task users with taking images by aligning the AR display. This data collection task is known to be mentally demanding and limits capture areas to pre-defined small areas due to the ideal but restrictive underlying sampling theory. To free users from 3D annotations and limited scene exploration, we propose using locally reconstructed light fields and visualizing errors to be removed by inserting new views. Our results show that the error-peaking visualization is less invasive, reduces disappointment in final results, and is satisfactory with fewer view samples in our mobile view synthesis system. We also show that our approach can contribute to recent radiance field reconstruction for larger scenes, such as 3D Gaussian splatting.
null
https://arxiv.org/abs/2506.21009v1
https://arxiv.org/pdf/2506.21009v1.pdf
null
[ "Ayaka Yasunaga", "Hideo Saito", "Shohei Mori" ]
[ "Novel View Synthesis" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/bridging-video-quality-scoring-and
2506.21011
null
null
Bridging Video Quality Scoring and Justification via Large Multimodal Models
Classical video quality assessment (VQA) methods generate a numerical score to judge a video's perceived visual fidelity and clarity. Yet, a score fails to describe the video's complex quality dimensions, restricting its applicability. Benefiting from the linguistic output, adapting video large multimodal models (LMMs) to VQA via instruction tuning has the potential to address this issue. The core of the approach lies in the video quality-centric instruction data. Previous explorations mainly focus on the image domain, and their data generation processes heavily rely on human quality annotations and proprietary systems, limiting data scalability and effectiveness. To address these challenges, we propose the Score-based Instruction Generation (SIG) pipeline. Specifically, SIG first scores multiple quality dimensions of an unlabeled video and maps scores to text-defined levels. It then explicitly incorporates a hierarchical Chain-of-Thought (CoT) to model the correlation between specific dimensions and overall quality, mimicking the human visual system's reasoning process. The automated pipeline eliminates the reliance on expert-written quality descriptions and proprietary systems, ensuring data scalability and generation efficiency. To this end, the resulting Score2Instruct (S2I) dataset contains over 320K diverse instruction-response pairs, laying the basis for instruction tuning. Moreover, to advance video LMMs' quality scoring and justification abilities simultaneously, we devise a progressive tuning strategy to fully unleash the power of S2I. Built upon SIG, we further curate a benchmark termed S2I-Bench with 400 open-ended questions to better evaluate the quality justification capacity of video LMMs. Experimental results on the S2I-Bench and existing benchmarks indicate that our method consistently improves quality scoring and justification capabilities across multiple video LMMs.
null
https://arxiv.org/abs/2506.21011v1
https://arxiv.org/pdf/2506.21011v1.pdf
null
[ "Qizhi Xie", "Kun Yuan", "Yunpeng Qu", "Jiachao Gong", "Mingda Wu", "Ming Sun", "Chao Zhou", "Jihong Zhu" ]
[ "Video Quality Assessment", "Visual Question Answering (VQA)" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/fedsc-federated-learning-with-semantic-aware
2506.21012
null
null
FedSC: Federated Learning with Semantic-Aware Collaboration
Federated learning (FL) aims to train models collaboratively across clients without sharing data, thereby preserving privacy. However, one major challenge is the data heterogeneity issue, which refers to the biased labeling preferences at multiple clients. A number of existing FL methods attempt to tackle data heterogeneity locally (e.g., regularizing local models) or globally (e.g., fine-tuning the global model), often neglecting inherent semantic information contained in each client. To explore the possibility of using intra-client semantically meaningful knowledge in handling data heterogeneity, in this paper, we propose Federated Learning with Semantic-Aware Collaboration (FedSC) to capture client-specific and class-relevant knowledge across heterogeneous clients. The core idea of FedSC is to construct relational prototypes and consistent prototypes at semantic-level, aiming to provide fruitful class underlying knowledge and stable convergence signals in a prototype-wise collaborative way. On the one hand, FedSC introduces an inter-contrastive learning strategy to bring instance-level embeddings closer to relational prototypes with the same semantics and away from distinct classes. On the other hand, FedSC devises consistent prototypes via a discrepancy aggregation manner, as a regularization penalty to constrain the optimization region of the local model. Moreover, a theoretical analysis for FedSC is provided to ensure a convergence guarantee. Experimental results on various challenging scenarios demonstrate the effectiveness of FedSC and the efficiency of crucial components.
To explore the possibility of using intra-client semantically meaningful knowledge in handling data heterogeneity, in this paper, we propose Federated Learning with Semantic-Aware Collaboration (FedSC) to capture client-specific and class-relevant knowledge across heterogeneous clients.
https://arxiv.org/abs/2506.21012v1
https://arxiv.org/pdf/2506.21012v1.pdf
null
[ "Huan Wang", "Haoran Li", "Huaming Chen", "Jun Yan", "Jiahua Shi", "Jun Shen" ]
[ "Contrastive Learning", "Federated Learning", "Privacy Preserving" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multimodal-prompt-alignment-for-facial
2506.21017
null
null
Multimodal Prompt Alignment for Facial Expression Recognition
Prompt learning has been widely adopted to efficiently adapt vision-language models (VLMs) like CLIP for various downstream tasks. Despite their success, current VLM-based facial expression recognition (FER) methods struggle to capture fine-grained textual-visual relationships, which are essential for distinguishing subtle differences between facial expressions. To address this challenge, we propose a multimodal prompt alignment framework for FER, called MPA-FER, that provides fine-grained semantic guidance to the learning process of prompted visual features, resulting in more precise and interpretable representations. Specifically, we introduce a multi-granularity hard prompt generation strategy that utilizes a large language model (LLM) like ChatGPT to generate detailed descriptions for each facial expression. The LLM-based external knowledge is injected into the soft prompts by minimizing the feature discrepancy between the soft prompts and the hard prompts. To preserve the generalization abilities of the pretrained CLIP model, our approach incorporates prototype-guided visual feature alignment, ensuring that the prompted visual features from the frozen image encoder align closely with class-specific prototypes. Additionally, we propose a cross-modal global-local alignment module that focuses on expression-relevant facial features, further improving the alignment between textual and visual features. Extensive experiments demonstrate our framework outperforms state-of-the-art methods on three FER benchmark datasets, while retaining the benefits of the pretrained model and minimizing computational costs.
null
https://arxiv.org/abs/2506.21017v1
https://arxiv.org/pdf/2506.21017v1.pdf
null
[ "Fuyan Ma", "Yiran He", "Bin Sun", "Shutao Li" ]
[ "Facial Expression Recognition", "Facial Expression Recognition (FER)", "Large Language Model", "Prompt Learning" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/OpenAI/CLIP", "description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)", "full_name": "Contrastive Language-Image Pre-training", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Representations", "parent": null }, "name": "CLIP", "source_title": "Learning Transferable Visual Models From Natural Language Supervision", "source_url": "https://arxiv.org/abs/2103.00020v1" }, { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
https://paperswithcode.com/paper/lasfnet-a-lightweight-attention-guided-self
2506.21018
null
null
LASFNet: A Lightweight Attention-Guided Self-Modulation Feature Fusion Network for Multimodal Object Detection
Effective deep feature extraction via feature-level fusion is crucial for multimodal object detection. However, previous studies often involve complex training processes that integrate modality-specific features by stacking multiple feature-level fusion units, leading to significant computational overhead. To address this issue, we propose a new fusion detection baseline that uses a single feature-level fusion unit to enable high-performance detection, thereby simplifying the training process. Based on this approach, we propose a lightweight attention-guided self-modulation feature fusion network (LASFNet), which introduces a novel attention-guided self-modulation feature fusion (ASFF) module that adaptively adjusts the responses of fusion features at both global and local levels based on attention information from different modalities, thereby promoting comprehensive and enriched feature generation. Additionally, a lightweight feature attention transformation module (FATM) is designed at the neck of LASFNet to enhance the focus on fused features and minimize information loss. Extensive experiments on three representative datasets demonstrate that, compared to state-of-the-art methods, our approach achieves a favorable efficiency-accuracy trade-off, reducing the number of parameters and computational cost by as much as 90% and 85%, respectively, while improving detection accuracy (mAP) by 1%-3%. The code will be open-sourced at https://github.com/leileilei2000/LASFNet.
Based on this approach, we propose a lightweight attention-guided self-modulation feature fusion network (LASFNet), which introduces a novel attention-guided self-modulation feature fusion (ASFF) module that adaptively adjusts the responses of fusion features at both global and local levels based on attention information from different modalities, thereby promoting comprehensive and enriched feature generation.
https://arxiv.org/abs/2506.21018v1
https://arxiv.org/pdf/2506.21018v1.pdf
null
[ "Lei Hao", "Lina Xu", "Chang Liu", "Yanni Dong" ]
[ "object-detection", "Object Detection" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/instella-t2i-pushing-the-limits-of-1d
2506.21022
null
null
Instella-T2I: Pushing the Limits of 1D Discrete Latent Space Image Generation
Image tokenization plays a critical role in reducing the computational demands of modeling high-resolution images, significantly improving the efficiency of image and multimodal understanding and generation. Recent advances in 1D latent spaces have reduced the number of tokens required by eliminating the need for a 2D grid structure. In this paper, we further advance compact discrete image representation by introducing 1D binary image latents. By representing each image as a sequence of binary vectors, rather than using traditional one-hot codebook tokens, our approach preserves high-resolution details while maintaining the compactness of 1D latents. To the best of our knowledge, our text-to-image models are the first to achieve competitive performance in both diffusion and auto-regressive generation using just 128 discrete tokens for images up to 1024x1024, demonstrating up to a 32-fold reduction in token numbers compared to standard VQ-VAEs. The proposed 1D binary latent space, coupled with simple model architectures, achieves marked improvements in both training and inference speed. Our text-to-image models allow for a global batch size of 4096 on a single GPU node with 8 AMD MI300X GPUs, and the training can be completed within 200 GPU days. Our models achieve competitive performance compared to modern image generation models without any in-house private training data or post-training refinements, offering a scalable and efficient alternative to conventional tokenization methods.
null
https://arxiv.org/abs/2506.21022v1
https://arxiv.org/pdf/2506.21022v1.pdf
null
[ "Ze Wang", "Hao Chen", "Benran Hu", "Jiang Liu", "Ximeng Sun", "Jialian Wu", "Yusheng Su", "Xiaodong Yu", "Emad Barsoum", "Zicheng Liu" ]
[ "GPU", "Image Generation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" }, { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/didsee-diffusion-based-depth-completion-for
2506.21034
null
null
DidSee: Diffusion-Based Depth Completion for Material-Agnostic Robotic Perception and Manipulation
Commercial RGB-D cameras often produce noisy, incomplete depth maps for non-Lambertian objects. Traditional depth completion methods struggle to generalize due to the limited diversity and scale of training data. Recent advances exploit visual priors from pre-trained text-to-image diffusion models to enhance generalization in dense prediction tasks. However, we find that biases arising from training-inference mismatches in the vanilla diffusion framework significantly impair depth completion performance. Additionally, the lack of distinct visual features in non-Lambertian regions further hinders precise prediction. To address these issues, we propose DidSee, a diffusion-based framework for depth completion on non-Lambertian objects. First, we integrate a rescaled noise scheduler enforcing a zero terminal signal-to-noise ratio to eliminate signal leakage bias. Second, we devise a noise-agnostic single-step training formulation to alleviate error accumulation caused by exposure bias and optimize the model with a task-specific loss. Finally, we incorporate a semantic enhancer that enables joint depth completion and semantic segmentation, distinguishing objects from backgrounds and yielding precise, fine-grained depth maps. DidSee achieves state-of-the-art performance on multiple benchmarks, demonstrates robust real-world generalization, and effectively improves downstream tasks such as category-level pose estimation and robotic grasping. Project page: https://wenzhoulyu.github.io/DidSee/
null
https://arxiv.org/abs/2506.21034v1
https://arxiv.org/pdf/2506.21034v1.pdf
null
[ "Wenzhou Lyu", "Jialing Lin", "Wenqi Ren", "Ruihao Xia", "Feng Qian", "Yang Tang" ]
[ "Depth Completion", "Pose Estimation", "Semantic Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/boosting-domain-generalized-and-adaptive
2506.21042
null
null
Boosting Domain Generalized and Adaptive Detection with Diffusion Models: Fitness, Generalization, and Transferability
Detectors often suffer from performance drop due to domain gap between training and testing data. Recent methods explore diffusion models applied to domain generalization (DG) and adaptation (DA) tasks, but still struggle with large inference costs and have not yet fully leveraged the capabilities of diffusion models. We propose to tackle these problems by extracting intermediate features from a single-step diffusion process, improving feature collection and fusion to reduce inference time by 75% while enhancing performance on source domains (i.e., Fitness). Then, we construct an object-centered auxiliary branch by applying box-masked images with class prompts to extract robust and domain-invariant features that focus on the object. We also apply consistency loss to align the auxiliary and ordinary branch, balancing fitness and generalization while preventing overfitting and improving performance on target domains (i.e., Generalization). Furthermore, within a unified framework, standard detectors are guided by diffusion detectors through feature-level and object-level alignment on source domains (for DG) and unlabeled target domains (for DA), thereby improving cross-domain detection performance (i.e., Transferability). Our method achieves competitive results on 3 DA benchmarks and 5 DG benchmarks. Additionally, experiments on the COCO generalization benchmark demonstrate that our method maintains significant advantages and show remarkable efficiency in large domain shifts and low-data scenarios. Our work shows the superiority of applying diffusion models to domain generalized and adaptive detection tasks and offers valuable insights for visual perception tasks across diverse domains. The code is available at https://github.com/heboyong/Fitness-Generalization-Transferability.
We propose to tackle these problems by extracting intermediate features from a single-step diffusion process, improving feature collection and fusion to reduce inference time by 75% while enhancing performance on source domains (i.e., Fitness).
https://arxiv.org/abs/2506.21042v1
https://arxiv.org/pdf/2506.21042v1.pdf
null
[ "Boyong He", "Yuxiang Ji", "Zhuoyue Tan", "Liaoni Wu" ]
[ "Domain Generalization", "Robust Object Detection" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/improving-diffusion-based-image-editing
2506.21045
null
null
Improving Diffusion-Based Image Editing Faithfulness via Guidance and Scheduling
Text-guided diffusion models have become essential for high-quality image synthesis, enabling dynamic image editing. In image editing, two crucial aspects are editability, which determines the extent of modification, and faithfulness, which reflects how well unaltered elements are preserved. However, achieving optimal results is challenging because of the inherent trade-off between editability and faithfulness. To address this, we propose Faithfulness Guidance and Scheduling (FGS), which enhances faithfulness with minimal impact on editability. FGS incorporates faithfulness guidance to strengthen the preservation of input image information and introduces a scheduling strategy to resolve misalignment between editability and faithfulness. Experimental results demonstrate that FGS achieves superior faithfulness while maintaining editability. Moreover, its compatibility with various editing methods enables precise, high-quality image edits across diverse tasks.
null
https://arxiv.org/abs/2506.21045v1
https://arxiv.org/pdf/2506.21045v1.pdf
null
[ "Hansam Cho", "Seoung Bum Kim" ]
[ "Image Generation", "Scheduling" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/class-agnostic-region-of-interest-matching-in
2506.21055
null
null
Class-Agnostic Region-of-Interest Matching in Document Images
Document understanding and analysis have received a lot of attention due to their widespread application. However, existing document analysis solutions, such as document layout analysis and key information extraction, are only suitable for fixed category definitions and granularities, and cannot achieve flexible applications customized by users. Therefore, this paper defines a new task named ``Class-Agnostic Region-of-Interest Matching'' (``RoI-Matching'' for short), which aims to match the customized regions in a flexible, efficient, multi-granularity, and open-set manner. The visual prompt of the reference document and target document images are fed into our model, while the output is the corresponding bounding boxes in the target document images. To meet the above requirements, we construct a benchmark RoI-Matching-Bench, which sets three levels of difficulties following real-world conditions, and propose the macro and micro metrics to evaluate. Furthermore, we also propose a new framework RoI-Matcher, which employs a siamese network to extract multi-level features both in the reference and target domains, and cross-attention layers to integrate and align similar semantics in different domains. Experiments show that our method with a simple procedure is effective on RoI-Matching-Bench, and serves as the baseline for further research. The code is available at https://github.com/pd162/RoI-Matching.
Document understanding and analysis have received a lot of attention due to their widespread application.
https://arxiv.org/abs/2506.21055v1
https://arxiv.org/pdf/2506.21055v1.pdf
null
[ "Demin Zhang", "Jiahao Lyu", "Zhijie Shen", "Yu Zhou" ]
[ "Document Layout Analysis", "document understanding", "Key Information Extraction" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
https://paperswithcode.com/paper/samurai-shape-aware-multimodal-retrieval-for
2506.21056
null
null
SAMURAI: Shape-Aware Multimodal Retrieval for 3D Object Identification
Retrieving 3D objects in complex indoor environments using only a masked 2D image and a natural language description presents significant challenges. The ROOMELSA challenge limits access to full 3D scene context, complicating reasoning about object appearance, geometry, and semantics. These challenges are intensified by distorted viewpoints, textureless masked regions, ambiguous language prompts, and noisy segmentation masks. To address this, we propose SAMURAI: Shape-Aware Multimodal Retrieval for 3D Object Identification. SAMURAI integrates CLIP-based semantic matching with shape-guided re-ranking derived from binary silhouettes of masked regions, alongside a robust majority voting strategy. A dedicated preprocessing pipeline enhances mask quality by extracting the largest connected component and removing background noise. Our hybrid retrieval framework leverages both language and shape cues, achieving competitive performance on the ROOMELSA private test set. These results highlight the importance of combining shape priors with language understanding for robust open-world 3D object retrieval.
null
https://arxiv.org/abs/2506.21056v1
https://arxiv.org/pdf/2506.21056v1.pdf
null
[ "Dinh-Khoi Vo", "Van-Loc Nguyen", "Minh-Triet Tran", "Trung-Nghia Le" ]
[ "3D Object Retrieval", "Object", "Re-Ranking", "Retrieval" ]
2025-06-26T00:00:00
null
null
null
null
[]
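SAMURAI above fuses CLIP-based semantic matching with shape-guided re-ranking from binary silhouettes. A small sketch of one way such hybrid scoring could be combined is shown below; the embeddings and silhouettes are assumed to be precomputed elsewhere, and the 0.3 shape weight is arbitrary.

```python
import numpy as np

def hybrid_rank(text_emb, cand_embs, query_sil, cand_sils, w_shape=0.3):
    """Rank candidates by a CLIP-style cosine similarity fused with a
    silhouette IoU re-ranking term."""
    text_emb = text_emb / np.linalg.norm(text_emb)
    cand_embs = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    semantic = cand_embs @ text_emb                            # language-object match

    inter = np.logical_and(query_sil[None], cand_sils).sum(axis=(1, 2))
    union = np.logical_or(query_sil[None], cand_sils).sum(axis=(1, 2))
    shape = inter / np.maximum(union, 1)                       # silhouette overlap (IoU)

    score = (1.0 - w_shape) * semantic + w_shape * shape
    return np.argsort(-score)                                  # best-matching candidates first

# toy usage: 5 candidate objects, 512-d embeddings, 32x32 silhouettes
rank = hybrid_rank(np.random.randn(512), np.random.randn(5, 512),
                   np.random.rand(32, 32) > 0.5, np.random.rand(5, 32, 32) > 0.5)
print(rank)
```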
https://paperswithcode.com/paper/posemaster-generating-3d-characters-in
2506.21076
null
null
PoseMaster: Generating 3D Characters in Arbitrary Poses from a Single Image
3D characters play a crucial role in our daily entertainment. To improve the efficiency of 3D character modeling, recent image-based methods use two separate models to achieve pose standardization and 3D reconstruction of the A-pose character. However, these methods are prone to generating distorted and degraded images in the pose standardization stage due to self-occlusion and viewpoints, which further affects the geometric quality of the subsequent reconstruction process. To tackle these problems, we propose PoseMaster, an end-to-end controllable 3D character generation framework. Specifically, we unify pose transformation and 3D character generation into a flow-based 3D native generation framework. To achieve accurate arbitrary-pose control, we propose to leverage the 3D body bones existing in the skeleton of an animatable character as the pose condition. Furthermore, considering the specificity of multi-condition control, we randomly empty the pose condition and the image condition during training to improve the effectiveness and generalizability of pose control. Finally, we create a high-quality pose-control dataset derived from realistic character animation data to make the model learn the implicit relationships between skeleton and skinning weights. Extensive experiments show that PoseMaster outperforms current state-of-the-art techniques in both qualitative and quantitative evaluations for A-pose character generation while demonstrating its powerful ability to achieve precise control for arbitrary poses.
null
https://arxiv.org/abs/2506.21076v1
https://arxiv.org/pdf/2506.21076v1.pdf
null
[ "Hongyu Yan", "Kunming Luo", "Weiyu Li", "Yixun Liang", "Shengming Li", "Jingwei Huang", "Chunchao Guo", "Ping Tan" ]
[ "3D Reconstruction", "Specificity" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/egoadapt-adaptive-multisensory-distillation
2506.21080
null
null
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception
Modern perception models, particularly those designed for multisensory egocentric tasks, have achieved remarkable performance but often come with substantial computational costs. These high demands pose challenges for real-world deployment, especially in resource-constrained environments. In this paper, we introduce EgoAdapt, a framework that adaptively performs cross-modal distillation and policy learning to enable efficient inference across different egocentric perception tasks, including egocentric action recognition, active speaker localization, and behavior anticipation. Our proposed policy module is adaptable to task-specific action spaces, making it broadly applicable. Experimental results on three challenging egocentric datasets, EPIC-Kitchens, EasyCom, and Aria Everyday Activities, demonstrate that our method significantly enhances efficiency, reducing GMACs by up to 89.09%, parameters by up to 82.02%, and energy by up to 9.6x, while matching, and in many cases outperforming, the performance of corresponding state-of-the-art models.
null
https://arxiv.org/abs/2506.21080v1
https://arxiv.org/pdf/2506.21080v1.pdf
null
[ "Sanjoy Chowdhury", "Subrata Biswas", "Sayan Nag", "Tushar Nagarajan", "Calvin Murdock", "Ishwarya Ananthabhotla", "Yijun Qian", "Vamsi Krishna Ithapu", "Dinesh Manocha", "Ruohan Gao" ]
[ "Action Recognition", "Active Speaker Localization" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/esmstereo-enhanced-shufflemixer-disparity
2506.21091
null
null
ESMStereo: Enhanced ShuffleMixer Disparity Upsampling for Real-Time and Accurate Stereo Matching
Stereo matching has become an increasingly important component of modern autonomous systems. Developing deep learning-based stereo matching models that deliver high accuracy while operating in real-time continues to be a major challenge in computer vision. In the domain of cost-volume-based stereo matching, accurate disparity estimation depends heavily on large-scale cost volumes. However, such large volumes store substantial redundant information and also require computationally intensive aggregation units for processing and regression, making real-time performance unattainable. Conversely, small-scale cost volumes followed by lightweight aggregation units provide a promising route for real-time performance, but lack sufficient information to ensure highly accurate disparity estimation. To address this challenge, we propose the Enhanced Shuffle Mixer (ESM) to mitigate information loss associated with small-scale cost volumes. ESM restores critical details by integrating primary features into the disparity upsampling unit. It quickly extracts features from the initial disparity estimation and fuses them with image features. These features are mixed by shuffling and layer splitting then refined through a compact feature-guided hourglass network to recover more detailed scene geometry. The ESM focuses on local contextual connectivity with a large receptive field and low computational cost, leading to the reconstruction of a highly accurate disparity map at real-time. The compact version of ESMStereo achieves an inference speed of 116 FPS on high-end GPUs and 91 FPS on the AGX Orin.
In the domain of cost-volume-based stereo matching, accurate disparity estimation depends heavily on large-scale cost volumes.
https://arxiv.org/abs/2506.21091v1
https://arxiv.org/pdf/2506.21091v1.pdf
null
[ "Mahmoud Tahmasebi", "Saif Huq", "Kevin Meehan", "Marion McAfee" ]
[ "Disparity Estimation", "Stereo Matching" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/oraclefusion-assisting-the-decipherment-of
2506.21101
null
null
OracleFusion: Assisting the Decipherment of Oracle Bone Script with Structurally Constrained Semantic Typography
As one of the earliest ancient languages, Oracle Bone Script (OBS) encapsulates the cultural records and intellectual expressions of ancient civilizations. Despite the discovery of approximately 4,500 OBS characters, only about 1,600 have been deciphered. The remaining undeciphered ones, with their complex structure and abstract imagery, pose significant challenges for interpretation. To address these challenges, this paper proposes a novel two-stage semantic typography framework, named OracleFusion. In the first stage, this approach leverages the Multimodal Large Language Model (MLLM) with enhanced Spatial Awareness Reasoning (SAR) to analyze the glyph structure of the OBS character and perform visual localization of key components. In the second stage, we introduce Oracle Structural Vector Fusion (OSVF), incorporating glyph structure constraints and glyph maintenance constraints to ensure the accurate generation of semantically enriched vector fonts. This approach preserves the objective integrity of the glyph structure, offering visually enhanced representations that assist experts in deciphering OBS. Extensive qualitative and quantitative experiments demonstrate that OracleFusion outperforms state-of-the-art baseline models in terms of semantics, visual appeal, and glyph maintenance, significantly enhancing both readability and aesthetic quality. Furthermore, OracleFusion provides expert-like insights on unseen oracle characters, making it a valuable tool for advancing the decipherment of OBS.
As one of the earliest ancient languages, Oracle Bone Script (OBS) encapsulates the cultural records and intellectual expressions of ancient civilizations.
https://arxiv.org/abs/2506.21101v1
https://arxiv.org/pdf/2506.21101v1.pdf
null
[ "Caoshuo Li", "Zengmao Ding", "Xiaobin Hu", "Bang Li", "Donghao Luo", "AndyPian Wu", "Chaoyang Wang", "Chengjie Wang", "Taisong Jin", "SevenShu", "Yunsheng Wu", "Yongge Liu", "Rongrong Ji" ]
[ "Decipherment", "Large Language Model", "Multimodal Large Language Model", "Visual Localization" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/pushing-trade-off-boundaries-compact-yet
2506.21109
null
null
Pushing Trade-Off Boundaries: Compact yet Effective Remote Sensing Change Detection
Remote sensing change detection is essential for monitoring urban expansion, disaster assessment, and resource management, offering timely, accurate, and large-scale insights into dynamic landscape transformations. While deep learning has revolutionized change detection, the increasing complexity and computational demands of modern models have not necessarily translated into significant accuracy gains. Instead of following this trend, this study explores a more efficient approach, focusing on lightweight models that maintain high accuracy while minimizing resource consumption, which is an essential requirement for on-satellite processing. To this end, we propose FlickCD, which means quick flick then get great results, pushing the boundaries of the performance-resource trade-off. FlickCD introduces an Enhanced Difference Module (EDM) to amplify critical feature differences between temporal phases while suppressing irrelevant variations such as lighting and weather changes, thereby reducing computational costs in the subsequent change decoder. Additionally, the FlickCD decoder incorporates Local-Global Fusion Blocks, leveraging Shifted Window Self-Attention (SWSA) and Enhanced Global Self-Attention (EGSA) to efficiently capture semantic information at multiple scales, preserving both coarse- and fine-grained changes. Extensive experiments on four benchmark datasets demonstrate that FlickCD reduces computational and storage overheads by more than an order of magnitude while achieving state-of-the-art (SOTA) performance or incurring only a minor (<1\% F1) accuracy trade-off. The implementation code is publicly available at https://github.com/xulsh8/FlickCD.
null
https://arxiv.org/abs/2506.21109v1
https://arxiv.org/pdf/2506.21109v1.pdf
null
[ "Luosheng Xu", "Dalin Zhang", "Zhaohui Song" ]
[ "Change Detection", "Decoder" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ipformer-videollm-enhancing-multi-modal-video
2506.21116
null
null
IPFormer-VideoLLM: Enhancing Multi-modal Video Understanding for Multi-shot Scenes
Video Large Language Models (VideoLLMs) have demonstrated remarkable understanding capabilities, but are found to struggle with multi-shot scenarios, e.g., video clips with varying camera angles or scene changes. This challenge can lead to failures such as instance identity forgetting and key frame negligence. In this work, we first attribute the challenge to the lack of multi-shot annotations among existing datasets, and therefore we introduce a new dataset termed MultiClip-Bench, featuring dense descriptions and instruction-based question-answering pairs tailored for multi-shot scenarios. We empirically find that the training set significantly boosts the multi-shot performance, while the testing benchmark provides a reliable measure of the model capability in multi-shot scenarios. By further analyzing and discovering that current models only encode instance features in a discrete or lossy manner, at the risk of missing identity information, we then contribute a new model, IPFormer-VideoLLM. Its key idea is the injection of instance-level features as instance prompts through an efficient attention-based connector. This allows for the aggregation of instance-specific information across scenes. Experiments demonstrate that our proposed dataset and model not only enhance multi-scene video understanding significantly, but also offer distinct advantages across various video benchmarks.
null
https://arxiv.org/abs/2506.21116v1
https://arxiv.org/pdf/2506.21116v1.pdf
null
[ "Yujia Liang", "Jile Jiao", "Zhicheng Wang", "Xuetao Feng", "Zixuan Ye", "YuAn Wang", "Hao Lu" ]
[ "Attribute", "Question Answering", "Video Understanding" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/cl-splats-continual-learning-of-gaussian
2506.21117
null
null
CL-Splats: Continual Learning of Gaussian Splatting with Local Optimization
In dynamic 3D environments, accurately updating scene representations over time is crucial for applications in robotics, mixed reality, and embodied AI. As scenes evolve, efficient methods to incorporate changes are needed to maintain up-to-date, high-quality reconstructions without the computational overhead of re-optimizing the entire scene. This paper introduces CL-Splats, which incrementally updates Gaussian splatting-based 3D representations from sparse scene captures. CL-Splats integrates a robust change-detection module that segments updated and static components within the scene, enabling focused, local optimization that avoids unnecessary re-computation. Moreover, CL-Splats supports storing and recovering previous scene states, facilitating temporal segmentation and new scene-analysis applications. Our extensive experiments demonstrate that CL-Splats achieves efficient updates with improved reconstruction quality over the state-of-the-art. This establishes a robust foundation for future real-time adaptation in 3D scene reconstruction tasks.
null
https://arxiv.org/abs/2506.21117v1
https://arxiv.org/pdf/2506.21117v1.pdf
null
[ "Jan Ackermann", "Jonas Kulhanek", "Shengqu Cai", "Haofei Xu", "Marc Pollefeys", "Gordon Wetzstein", "Leonidas Guibas", "Songyou Peng" ]
[ "3D Scene Reconstruction", "Change Detection", "Continual Learning", "Mixed Reality" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-to-see-in-the-extremely-dark
2506.21132
null
null
Learning to See in the Extremely Dark
Learning-based methods have made promising advances in low-light RAW image enhancement, while their capability to extremely dark scenes where the environmental illuminance drops as low as 0.0001 lux remains to be explored due to the lack of corresponding datasets. To this end, we propose a paired-to-paired data synthesis pipeline capable of generating well-calibrated extremely low-light RAW images at three precise illuminance ranges of 0.01-0.1 lux, 0.001-0.01 lux, and 0.0001-0.001 lux, together with high-quality sRGB references to comprise a large-scale paired dataset named See-in-the-Extremely-Dark (SIED) to benchmark low-light RAW image enhancement approaches. Furthermore, we propose a diffusion-based framework that leverages the generative ability and intrinsic denoising property of diffusion models to restore visually pleasing results from extremely low-SNR RAW inputs, in which an Adaptive Illumination Correction Module (AICM) and a color consistency loss are introduced to ensure accurate exposure correction and color restoration. Extensive experiments on the proposed SIED and publicly available benchmarks demonstrate the effectiveness of our method. The code and dataset are available at https://github.com/JianghaiSCU/SIED.
Learning-based methods have made promising advances in low-light RAW image enhancement, while their capability to extremely dark scenes where the environmental illuminance drops as low as 0.0001 lux remains to be explored due to the lack of corresponding datasets.
https://arxiv.org/abs/2506.21132v1
https://arxiv.org/pdf/2506.21132v1.pdf
null
[ "Hai Jiang", "Binhao Guan", "Zhen Liu", "Xiaohong Liu", "Jian Yu", "Zheng Liu", "Songchen Han", "Shuaicheng Liu" ]
[ "Denoising", "Exposure Correction", "Image Enhancement" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/tree-based-semantic-losses-application-to
2506.21150
null
null
Tree-based Semantic Losses: Application to Sparsely-supervised Large Multi-class Hyperspectral Segmentation
Hyperspectral imaging (HSI) shows great promise for surgical applications, offering detailed insights into biological tissue differences beyond what the naked eye can perceive. Refined labelling efforts are underway to train vision systems to distinguish large numbers of subtly varying classes. However, commonly used learning methods for biomedical segmentation tasks penalise all errors equivalently and thus fail to exploit any inter-class semantics in the label space. In this work, we introduce two tree-based semantic loss functions which take advantage of a hierarchical organisation of the labels. We further incorporate our losses in a recently proposed approach for training with sparse, background-free annotations. Extensive experiments demonstrate that our proposed method reaches state-of-the-art performance on a sparsely annotated HSI dataset comprising $107$ classes organised in a clinically-defined semantic tree structure. Furthermore, our method enables effective detection of out-of-distribution (OOD) pixels without compromising segmentation performance on in-distribution (ID) pixels.
null
https://arxiv.org/abs/2506.21150v1
https://arxiv.org/pdf/2506.21150v1.pdf
null
[ "Junwen Wang", "Oscar MacCormac", "William Rochford", "Aaron Kujawa", "Jonathan Shapey", "Tom Vercauteren" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
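The record above describes tree-based semantic losses that exploit a hierarchical organisation of the labels. One simple instance of that idea is sketched below, penalising predictions by their distance to the true class in the hierarchy; the toy distance matrix and the weighting are illustrative, not the paper's losses.

```python
import torch
import torch.nn.functional as F

def tree_semantic_loss(logits, target, tree_dist):
    """Weight errors by the hierarchy distance between predicted and true
    labels, so confusing sibling classes costs less than distant ones.
    `tree_dist` is a (C, C) matrix of class-tree distances (assumed given)."""
    probs = F.softmax(logits, dim=-1)            # (N, C)
    cost = tree_dist[target]                     # (N, C) distance to the true class
    return (probs * cost).sum(dim=-1).mean()

# toy usage with 4 classes arranged as two sibling pairs
D = torch.tensor([[0., 1., 2., 2.],
                  [1., 0., 2., 2.],
                  [2., 2., 0., 1.],
                  [2., 2., 1., 0.]])
loss = tree_semantic_loss(torch.randn(8, 4), torch.randint(0, 4, (8,)), D)
print(loss)
```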
https://paperswithcode.com/paper/robust-deep-learning-for-myocardial-scar
2506.21151
null
null
Robust Deep Learning for Myocardial Scar Segmentation in Cardiac MRI with Noisy Labels
The accurate segmentation of myocardial scars from cardiac MRI is essential for clinical assessment and treatment planning. In this study, we propose a robust deep-learning pipeline for fully automated myocardial scar detection and segmentation by fine-tuning state-of-the-art models. The method explicitly addresses challenges of label noise from semi-automatic annotations, data heterogeneity, and class imbalance through the use of Kullback-Leibler loss and extensive data augmentation. We evaluate the model's performance on both acute and chronic cases and demonstrate its ability to produce accurate and smooth segmentations despite noisy labels. In particular, our approach outperforms state-of-the-art models like nnU-Net and shows strong generalizability in an out-of-distribution test set, highlighting its robustness across various imaging conditions and clinical tasks. These results establish a reliable foundation for automated myocardial scar quantification and support the broader clinical adoption of deep learning in cardiac imaging.
The accurate segmentation of myocardial scars from cardiac MRI is essential for clinical assessment and treatment planning.
https://arxiv.org/abs/2506.21151v1
https://arxiv.org/pdf/2506.21151v1.pdf
null
[ "Aida Moafi", "Danial Moafi", "Evgeny M. Mirkes", "Gerry P. McCann", "Abbas S. Alatrany", "Jayanth R. Arnold", "Mostafa Mehdipour Ghazi" ]
[ "Data Augmentation" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/geometry-and-perception-guided-gaussians-for
2506.21152
null
null
Geometry and Perception Guided Gaussians for Multiview-consistent 3D Generation from a Single Image
Generating realistic 3D objects from single-view images requires natural appearance, 3D consistency, and the ability to capture multiple plausible interpretations of unseen regions. Existing approaches often rely on fine-tuning pretrained 2D diffusion models or directly generating 3D information through fast network inference or 3D Gaussian Splatting, but their results generally suffer from poor multiview consistency and lack geometric detail. To tackle these issues, we present a novel method that seamlessly integrates geometry and perception priors without requiring additional model training to reconstruct detailed 3D objects from a single image. Specifically, we train three different Gaussian branches initialized from the geometry prior, perception prior and Gaussian noise, respectively. The geometry prior captures the rough 3D shapes, while the perception prior utilizes the 2D pretrained diffusion model to enhance multiview information. Subsequently, we refine 3D Gaussian branches through mutual interaction between geometry and perception priors, further enhanced by a reprojection-based strategy that enforces depth consistency. Experiments demonstrate the higher-fidelity reconstruction results of our method, which outperforms existing methods on novel view synthesis and 3D reconstruction while producing robust and consistent 3D object generation.
null
https://arxiv.org/abs/2506.21152v1
https://arxiv.org/pdf/2506.21152v1.pdf
null
[ "Pufan Li", "Bi'an Du", "Wei Hu" ]
[ "3D Generation", "3D Reconstruction", "Novel View Synthesis" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/topology-aware-modeling-for-unsupervised
2506.21165
null
null
Topology-Aware Modeling for Unsupervised Simulation-to-Reality Point Cloud Recognition
Learning semantic representations from point sets of 3D object shapes is often challenged by significant geometric variations, primarily due to differences in data acquisition methods. Typically, training data is generated using point simulators, while testing data is collected with distinct 3D sensors, leading to a simulation-to-reality (Sim2Real) domain gap that limits the generalization ability of point classifiers. Current unsupervised domain adaptation (UDA) techniques struggle with this gap, as they often lack robust, domain-insensitive descriptors capable of capturing global topological information, resulting in overfitting to the limited semantic patterns of the source domain. To address this issue, we introduce a novel Topology-Aware Modeling (TAM) framework for Sim2Real UDA on object point clouds. Our approach mitigates the domain gap by leveraging global spatial topology, characterized by low-level, high-frequency 3D structures, and by modeling the topological relations of local geometric features through a novel self-supervised learning task. Additionally, we propose an advanced self-training strategy that combines cross-domain contrastive learning with self-training, effectively reducing the impact of noisy pseudo-labels and enhancing the robustness of the adaptation process. Experimental results on three public Sim2Real benchmarks validate the effectiveness of our TAM framework, showing consistent improvements over state-of-the-art methods across all evaluated tasks. The source code of this work will be available at https://github.com/zou-longkun/TAG.git.
Learning semantic representations from point sets of 3D object shapes is often challenged by significant geometric variations, primarily due to differences in data acquisition methods.
https://arxiv.org/abs/2506.21165v1
https://arxiv.org/pdf/2506.21165v1.pdf
null
[ "Longkun Zou", "KangJun Liu", "Ke Chen", "Kailing Guo", "Kui Jia", "YaoWei Wang" ]
[ "Contrastive Learning", "Domain Adaptation", "Learning Semantic Representations", "Self-Supervised Learning", "Unsupervised Domain Adaptation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": null, "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Contrastive Learning", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "TAM is designed to capture complex temporal relationships both efficiently and flexibly,\r\nIt adopts an adaptive kernel instead of self-attention to capture global contextual information, with lower time complexity \r\nthan GLTR.\r\n\r\nTAM has two branches, a local branch and a global branch. Given the input feature map $X\\in \\mathbb{R}^{C\\times T\\times H\\times W}$, global spatial average pooling $\\text{GAP}$ is first applied to the feature map to ensure TAM has a low computational cost. Then the local branch in TAM employs several 1D convolutions with ReLU nonlinearity across the temporal domain to produce location-sensitive importance maps for enhancing frame-wise features.\r\nThe local branch can be written as\r\n\\begin{align}\r\n s &= \\sigma(\\text{Conv1D}(\\delta(\\text{Conv1D}(\\text{GAP}(X)))))\r\n\\end{align}\r\n\\begin{align}\r\n X^1 &= s X\r\n\\end{align}\r\nUnlike the local branch, the global branch is location invariant and focuses on generating a channel-wise adaptive kernel based on global temporal information in each channel. For the $c$-th channel, the kernel can be written as\r\n\r\n\\begin{align}\r\n \\Theta_c = \\text{Softmax}(\\text{FC}_2(\\delta(\\text{FC}_1(\\text{GAP}(X)_c)))) \r\n\\end{align}\r\n\r\nwhere $\\Theta_c \\in \\mathbb{R}^{K}$ and $K$ is the adaptive kernel size. Finally, TAM convolves the adaptive kernel $\\Theta$ with $ X_\\text{out}^1$:\r\n\\begin{align}\r\n Y = \\Theta \\otimes X^1\r\n\\end{align}\r\n\r\nWith the help of the local branch and global branch,\r\nTAM can capture the complex temporal structures in video and \r\nenhance per-frame features at low computational cost.\r\nDue to its flexibility and lightweight design,\r\nTAM can be added to any existing 2D CNNs.", "full_name": "Temporal Adaptive Module", "introduced_year": 2000, "main_collection": { "area": "General", "description": "If you're looking to get in touch with American Airlines fast, ☎️+1-801-(855)-(5905)or +1-804-853-9001✅ there are\r\nseveral efficient ways to reach their customer service team. The quickest method is to dial ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. American’s phone service ensures that you can speak with a live\r\nrepresentative promptly to resolve any issues or queries regarding your booking, reservation,\r\nor any changes, such as name corrections or ticket cancellations.", "name": "Attention Mechanisms", "parent": "Attention" }, "name": "TAM", "source_title": "TAM: Temporal Adaptive Module for Video Recognition", "source_url": "https://arxiv.org/abs/2005.06803v3" } ]
https://paperswithcode.com/paper/task-aware-kv-compression-for-cost-effective
2506.21184
null
null
Task-Aware KV Compression For Cost-Effective Long Video Understanding
Long-video understanding (LVU) remains a severe challenge for existing multimodal large language models (MLLMs), primarily due to the prohibitive computational cost. Recent approaches have explored KV compression to mitigate this issue, but they often suffer from significant information loss at high compression ratios. In this paper, we introduce Video-X^2L, which flexibly preserves critical video information for each LVU task. Video-X^2L involves two key operations. The first one is called bi-level KV compression. During the MLLM's pre-filling stage, Video-X^2L generates two types of compressed KVs: low-compression KVs (L-KVs) to capture fine-grained video details and high-compression KVs (H-KVs) to offer compact video representations. The second one is called selective KV re-loading. During the MLLM's decoding stage, Video-X^2L selectively re-loads L-KVs for the most critical video chunks while using H-KVs for other less important ones. This allows the MLLM to fully utilize task-specific information while maintaining the overall compactness. Video-X^2L is simple yet effective: it is free from additional training and directly compatible with existing KV-compressible MLLMs. We evaluate Video-X^2L with a variety of popular LVU benchmarks, including VideoMME, MLVU, LongVideoBench, and VNBench. Our experiment result shows that Video-X^2L outperforms existing KV-compression methods by a huge advantage while substantially saving the computation cost.
The first one is called bi-level KV compression.
https://arxiv.org/abs/2506.21184v1
https://arxiv.org/pdf/2506.21184v1.pdf
null
[ "Minghao Qin", "Yan Shu", "Peitian Zhang", "Kun Lun", "Huaying Yuan", "Juenjie Zhou", "Shitao Xiao", "Bo Zhao", "Zheng Liu" ]
[ "Video Understanding" ]
2025-06-26T00:00:00
null
null
null
null
[]
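Video-X^2L above pre-fills low- and high-compression KV caches and selectively re-loads the detailed ones only for the most task-relevant video chunks. A toy sketch of that control flow follows; the compression operator (top-k by key norm) and the relevance scores are stand-ins for whatever the method actually uses.

```python
import torch

def compress_kv(keys, values, keep_ratio):
    """Toy KV compression: keep the key/value pairs with the largest key norms."""
    n_keep = max(1, int(keys.shape[0] * keep_ratio))
    idx = keys.norm(dim=-1).topk(n_keep).indices.sort().values
    return keys[idx], values[idx]

def build_bilevel_cache(chunks, low_ratio=0.5, high_ratio=0.1):
    # Pre-filling: store a low-compression (detailed) and a high-compression
    # (compact) cache for every video chunk.
    return [{"L": compress_kv(k, v, low_ratio), "H": compress_kv(k, v, high_ratio)}
            for k, v in chunks]

def select_cache(cache, relevance, top_k=2):
    # Decoding: re-load detailed KVs only for the chunks most relevant to the task.
    keep = set(torch.tensor(relevance).topk(top_k).indices.tolist())
    return [c["L"] if i in keep else c["H"] for i, c in enumerate(cache)]

chunks = [(torch.randn(64, 32), torch.randn(64, 32)) for _ in range(4)]
cache = build_bilevel_cache(chunks)
selected = select_cache(cache, relevance=[0.1, 0.9, 0.3, 0.8])
print([kv[0].shape[0] for kv in selected])   # detailed caches kept for chunks 1 and 3
```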
https://paperswithcode.com/paper/groundflow-a-plug-in-module-for-temporal
2506.21188
null
null
GroundFlow: A Plug-in Module for Temporal Reasoning on 3D Point Cloud Sequential Grounding
Sequential grounding in 3D point clouds (SG3D) refers to locating sequences of objects by following text instructions for a daily activity with detailed steps. Current 3D visual grounding (3DVG) methods treat text instructions with multiple steps as a whole, without extracting useful temporal information from each step. However, the instructions in SG3D often contain pronouns such as "it", "here" and "the same" to make language expressions concise. This requires grounding methods to understand the context and retrieve relevant information from previous steps to correctly locate object sequences. Due to the lack of an effective module for collecting related historical information, state-of-the-art 3DVG methods face significant challenges in adapting to the SG3D task. To fill this gap, we propose GroundFlow -- a plug-in module for temporal reasoning on 3D point cloud sequential grounding. Firstly, we demonstrate that integrating GroundFlow improves the task accuracy of 3DVG baseline methods by a large margin (+7.5\% and +10.2\%) in the SG3D benchmark, even outperforming a 3D large language model pre-trained on various datasets. Furthermore, we selectively extract both short-term and long-term step information based on its relevance to the current instruction, enabling GroundFlow to take a comprehensive view of historical information and maintain its temporal understanding advantage as step counts increase. Overall, our work introduces temporal reasoning capabilities to existing 3DVG models and achieves state-of-the-art performance in the SG3D benchmark across five datasets.
null
https://arxiv.org/abs/2506.21188v1
https://arxiv.org/pdf/2506.21188v1.pdf
null
[ "Zijun Lin", "Shuting He", "Cheston Tan", "Bihan Wen" ]
[ "3D visual grounding", "Large Language Model", "Visual Grounding" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/bitmark-for-infinity-watermarking-bitwise
2506.21209
null
null
BitMark for Infinity: Watermarking Bitwise Autoregressive Image Generative Models
State-of-the-art text-to-image models like Infinity generate photorealistic images at an unprecedented speed. These models operate in a bitwise autoregressive manner over a discrete set of tokens that is practically infinite in size. However, their impressive generative power comes with a growing risk: as their outputs increasingly populate the Internet, they are likely to be scraped and reused as training data-potentially by the very same models. This phenomenon has been shown to lead to model collapse, where repeated training on generated content, especially from the models' own previous versions, causes a gradual degradation in performance. A promising mitigation strategy is watermarking, which embeds human-imperceptible yet detectable signals into generated images-enabling the identification of generated content. In this work, we introduce BitMark, a robust bitwise watermarking framework for Infinity. Our method embeds a watermark directly at the bit level of the token stream across multiple scales (also referred to as resolutions) during Infinity's image generation process. Our bitwise watermark subtly influences the bits to preserve visual fidelity and generation speed while remaining robust against a spectrum of removal techniques. Furthermore, it exhibits high radioactivity, i.e., when watermarked generated images are used to train another image generative model, this second model's outputs will also carry the watermark. The radioactive traces remain detectable even when only fine-tuning diffusion or image autoregressive models on images watermarked with our BitMark. Overall, our approach provides a principled step toward preventing model collapse in image generative models by enabling reliable detection of generated outputs.
null
https://arxiv.org/abs/2506.21209v1
https://arxiv.org/pdf/2506.21209v1.pdf
null
[ "Louis Kerner", "Michel Meintz", "Bihe Zhao", "Franziska Boenisch", "Adam Dziedzic" ]
[ "Image Generation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" }, { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" }, { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/reme-a-data-centric-framework-for-training
2506.21233
null
null
ReME: A Data-Centric Framework for Training-Free Open-Vocabulary Segmentation
Training-free open-vocabulary semantic segmentation (OVS) aims to segment images given a set of arbitrary textual categories without costly model fine-tuning. Existing solutions often explore attention mechanisms of pre-trained models, such as CLIP, or generate synthetic data and design complex retrieval processes to perform OVS. However, their performance is limited by the capability of reliant models or the suboptimal quality of reference sets. In this work, we investigate the largely overlooked data quality problem for this challenging dense scene understanding task, and identify that a high-quality reference set can significantly benefit training-free OVS. With this observation, we introduce a data-quality-oriented framework, comprising a data pipeline to construct a reference set with well-paired segment-text embeddings and a simple similarity-based retrieval to unveil the essential effect of data. Remarkably, extensive evaluations on ten benchmark datasets demonstrate that our method outperforms all existing training-free OVS approaches, highlighting the importance of data-centric design for advancing OVS without training. Our code is available at https://github.com/xiweix/ReME .
Training-free open-vocabulary semantic segmentation (OVS) aims to segment images given a set of arbitrary textual categories without costly model fine-tuning.
https://arxiv.org/abs/2506.21233v1
https://arxiv.org/pdf/2506.21233v1.pdf
null
[ "Xiwei Xuan", "Ziquan Deng", "Kwan-Liu Ma" ]
[ "Open Vocabulary Semantic Segmentation", "Open-Vocabulary Semantic Segmentation", "Retrieval", "Scene Understanding", "Semantic Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/OpenAI/CLIP", "description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)", "full_name": "Contrastive Language-Image Pre-training", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Representations", "parent": null }, "name": "CLIP", "source_title": "Learning Transferable Visual Models From Natural Language Supervision", "source_url": "https://arxiv.org/abs/2103.00020v1" }, { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/dimple-disentangled-multi-modal-prompt
2506.21237
null
null
DiMPLe -- Disentangled Multi-Modal Prompt Learning: Enhancing Out-Of-Distribution Alignment with Invariant and Spurious Feature Separation
We introduce DiMPLe (Disentangled Multi-Modal Prompt Learning), a novel approach to disentangle invariant and spurious features across vision and language modalities in multi-modal learning. Spurious correlations in visual data often hinder out-of-distribution (OOD) performance. Unlike prior methods focusing solely on image features, DiMPLe disentangles features within and across modalities while maintaining consistent alignment, enabling better generalization to novel classes and robustness to distribution shifts. Our method combines three key objectives: (1) mutual information minimization between invariant and spurious features, (2) spurious feature regularization, and (3) contrastive learning on invariant features. Extensive experiments demonstrate that DiMPLe achieves superior performance compared to CoOp-OOD when averaged across 11 diverse datasets, with absolute gains of 15.27 in base class accuracy and 44.31 in novel class accuracy.
null
https://arxiv.org/abs/2506.21237v1
https://arxiv.org/pdf/2506.21237v1.pdf
null
[ "Umaima Rahman", "Mohammad Yaqub", "Dwarikanath Mahapatra" ]
[ "Contrastive Learning", "Prompt Learning" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": null, "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Contrastive Learning", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" } ]
https://paperswithcode.com/paper/temporal-rate-reduction-clustering-for-human
2506.21249
null
null
Temporal Rate Reduction Clustering for Human Motion Segmentation
Human Motion Segmentation (HMS), which aims to partition videos into non-overlapping human motions, has attracted increasing research attention recently. Existing approaches for HMS are mainly dominated by subspace clustering methods, which are grounded on the assumption that high-dimensional temporal data align with a Union-of-Subspaces (UoS) distribution. However, the frames in videos capturing complex human motions with cluttered backgrounds may not align well with the UoS distribution. In this paper, we propose a novel approach for HMS, named Temporal Rate Reduction Clustering ($\text{TR}^2\text{C}$), which jointly learns structured representations and affinity to segment the frame sequences in video. Specifically, the structured representations learned by $\text{TR}^2\text{C}$ remain temporally consistent and align well with a UoS structure, which is favorable for the HMS task. We conduct extensive experiments on five benchmark HMS datasets and achieve state-of-the-art performance with different feature extractors.
null
https://arxiv.org/abs/2506.21249v1
https://arxiv.org/pdf/2506.21249v1.pdf
null
[ "Xianghan Meng", "Zhengyu Tong", "Zhiyuan Huang", "Chun-Guang Li" ]
[ "Clustering", "Motion Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
https://paperswithcode.com/paper/duet-dual-incremental-object-detection-via
2506.21260
null
null
DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic
Real-world object detection systems, such as those in autonomous driving and surveillance, must continuously learn new object categories and simultaneously adapt to changing environmental conditions. Existing approaches, Class Incremental Object Detection (CIOD) and Domain Incremental Object Detection (DIOD) only address one aspect of this challenge. CIOD struggles in unseen domains, while DIOD suffers from catastrophic forgetting when learning new classes, limiting their real-world applicability. To overcome these limitations, we introduce Dual Incremental Object Detection (DuIOD), a more practical setting that simultaneously handles class and domain shifts in an exemplar-free manner. We propose DuET, a Task Arithmetic-based model merging framework that enables stable incremental learning while mitigating sign conflicts through a novel Directional Consistency Loss. Unlike prior methods, DuET is detector-agnostic, allowing models like YOLO11 and RT-DETR to function as real-time incremental object detectors. To comprehensively evaluate both retention and adaptation, we introduce the Retention-Adaptability Index (RAI), which combines the Average Retention Index (Avg RI) for catastrophic forgetting and the Average Generalization Index for domain adaptability into a common ground. Extensive experiments on the Pascal Series and Diverse Weather Series demonstrate DuET's effectiveness, achieving a +13.12% RAI improvement while preserving 89.3% Avg RI on the Pascal Series (4 tasks), as well as a +11.39% RAI improvement with 88.57% Avg RI on the Diverse Weather Series (3 tasks), outperforming existing methods.
null
https://arxiv.org/abs/2506.21260v1
https://arxiv.org/pdf/2506.21260v1.pdf
null
[ "Munish Monga", "Vishal Chudasama", "Pankaj Wasnik", "Biplab Banerjee" ]
[ "Autonomous Driving", "Avg", "Class-Incremental Object Detection", "Exemplar-Free", "Incremental Learning", "Object", "object-detection", "Object Detection", "Task Arithmetic" ]
2025-06-26T00:00:00
null
null
null
null
[]
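The DuET abstract above builds on Task Arithmetic-style model merging, where incremental knowledge is represented as parameter deltas added to a base model. The snippet below is a generic, hedged sketch of that idea (plain task-vector addition over state dicts); DuET's Directional Consistency Loss and detector-specific details are not reproduced, and the scaling coefficient is an assumption.

```python
from typing import Dict, List
import torch

def task_vector(base: Dict[str, torch.Tensor],
                finetuned: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    """Task vector = fine-tuned weights minus base weights."""
    return {k: finetuned[k] - base[k] for k in base}

def merge_with_task_vectors(base: Dict[str, torch.Tensor],
                            vectors: List[Dict[str, torch.Tensor]],
                            scale: float = 0.5) -> Dict[str, torch.Tensor]:
    """Add scaled task vectors to the base model (vanilla task arithmetic).
    Sign conflicts between vectors (opposite-signed deltas on the same weight)
    are exactly what methods like DuET aim to mitigate."""
    merged = {k: v.clone() for k, v in base.items()}
    for vec in vectors:
        for k in merged:
            merged[k] += scale * vec[k]
    return merged
```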
https://paperswithcode.com/paper/video-virtual-try-on-with-conditional
2506.21270
null
null
Video Virtual Try-on with Conditional Diffusion Transformer Inpainter
Video virtual try-on aims to naturally fit a garment to a target person in consecutive video frames. It is a challenging task: on the one hand, the output video should have good spatial-temporal consistency; on the other hand, the details of the given garment need to be preserved well in all frames. Naively applying image-based try-on methods frame by frame yields poor results due to severe inconsistency. The few recent diffusion-based video try-on methods converge on a similar solution: inserting temporal attention into an image-based try-on model to adapt it to the video try-on task. This brings improvements, but inconsistency problems remain. In this paper, we propose ViTI (Video Try-on Inpainter), which formulates and implements video virtual try-on as a conditional video inpainting task, differing from previous methods. In this way, we start from a video generation problem instead of an image-based try-on problem, which provides better spatial-temporal consistency from the beginning. Specifically, we first build a video inpainting framework based on a Diffusion Transformer with full 3D spatial-temporal attention, and then progressively adapt it to video garment inpainting with a collection of masking strategies and multi-stage training. After these steps, the model can inpaint the masked garment area with appropriate garment pixels according to the prompt, with good spatial-temporal consistency. Finally, as in other try-on methods, a garment condition is added to the model to ensure the inpainted garment appearance and details are as expected. Both quantitative and qualitative experimental results show that ViTI is superior to previous works.
null
https://arxiv.org/abs/2506.21270v1
https://arxiv.org/pdf/2506.21270v1.pdf
null
[ "Cheng Zou", "Senlin Cheng", "Bolei Xu", "Dandan Zheng", "Xiaobo Li", "Jingdong Chen", "Ming Yang" ]
[ "Video Generation", "Video Inpainting", "Virtual Try-on" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. 
Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "", "description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.", "full_name": "Inpainting", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.", "name": "Self-Supervised Learning", "parent": null }, "name": "Inpainting", "source_title": "Context Encoders: Feature Learning by Inpainting", "source_url": "http://arxiv.org/abs/1604.07379v2" }, { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
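The Absolute Position Encodings entry in the methods list above gives the sinusoidal formulas PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). A direct, minimal rendering of those two equations (sequence length and model dimension chosen only for illustration) is:

```python
import torch

def sinusoidal_position_encodings(max_len: int, d_model: int) -> torch.Tensor:
    """PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
       PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))"""
    position = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)  # (max_len, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)                # even dimensions
    div_term = torch.pow(10000.0, i / d_model)
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(position / div_term)
    pe[:, 1::2] = torch.cos(position / div_term)
    return pe

pe = sinusoidal_position_encodings(max_len=50, d_model=64)
print(pe.shape)  # torch.Size([50, 64])
```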
https://paperswithcode.com/paper/wordcon-word-level-typography-control-in
2506.21276
null
null
WordCon: Word-level Typography Control in Scene Text Rendering
Achieving precise word-level typography control within generated images remains a persistent challenge. To address it, we newly construct a word-level controlled scene text dataset and introduce the Text-Image Alignment (TIA) framework. This framework leverages cross-modal correspondence between text and local image regions provided by grounding models to enhance the Text-to-Image (T2I) model training. Furthermore, we propose WordCon, a hybrid parameter-efficient fine-tuning (PEFT) method. WordCon reparameterizes selective key parameters, improving both efficiency and portability. This allows seamless integration into diverse pipelines, including artistic text rendering, text editing, and image-conditioned text rendering. To further enhance controllability, the masked loss at the latent level is applied to guide the model to concentrate on learning the text region in the image, and the joint-attention loss provides feature-level supervision to promote disentanglement between different words. Both qualitative and quantitative results demonstrate the superiority of our method to the state of the art. The datasets and source code will be available for academic use.
null
https://arxiv.org/abs/2506.21276v1
https://arxiv.org/pdf/2506.21276v1.pdf
null
[ "Wenda Shi", "Yiren Song", "Zihan Rao", "Dengming Zhang", "Jiaming Liu", "Xingxing Zou" ]
[ "Disentanglement", "parameter-efficient fine-tuning" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/humanomniv2-from-understanding-to-omni-modal
2506.21277
null
null
HumanOmniV2: From Understanding to Omni-Modal Reasoning with Context
With the rapid evolution of multimodal large language models, the capacity to deeply understand and interpret human intentions has emerged as a critical capability, which demands detailed and thoughtful reasoning. In recent studies, Reinforcement Learning (RL) has demonstrated potential in enhancing the reasoning capabilities of Large Language Models (LLMs). Nonetheless, the challenges associated with adapting RL to multimodal data and formats remain largely unaddressed. In this paper, we identify two issues in existing multimodal reasoning models: insufficient global context understanding and shortcut problems. Insufficient context understanding can happen when a model misinterprets multimodal context, resulting in incorrect answers. The shortcut problem occurs when the model overlooks crucial clues in multimodal inputs, directly addressing the query without considering the multimodal information. To tackle these issues, we emphasize the necessity for the model to reason with a clear understanding of the global context within multimodal inputs. This global context understanding can effectively prevent the model from overlooking key multimodal cues and ensure a thorough reasoning process. To ensure the accurate interpretation of multimodal context information, we implement a context reward judged by a large language model, alongside format and accuracy rewards. Additionally, to improve complex reasoning capability, we employ the LLM to assess the logical reward, determining whether the reasoning process successfully integrates multimodal information with logical methods. We also introduce a reasoning omni-modal benchmark, IntentBench, aimed at evaluating models in understanding complex human intentions and emotions. Our proposed method demonstrates advanced performance across multiple omni-modal benchmarks compared to other open-source omni-modal models.
With the rapid evolution of multimodal large language models, the capacity to deeply understand and interpret human intentions has emerged as a critical capability, which demands detailed and thoughtful reasoning.
https://arxiv.org/abs/2506.21277v1
https://arxiv.org/pdf/2506.21277v1.pdf
null
[ "Qize Yang", "Shimin Yao", "Weixuan Chen", "Shenghao Fu", "Detao Bai", "Jiaxing Zhao", "Boyuan Sun", "Bowen Yin", "Xihan Wei", "Jingren Zhou" ]
[ "Large Language Model", "Multimodal Reasoning", "Reinforcement Learning (RL)" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hierasurg-hierarchy-aware-diffusion-model-for
2506.21287
null
null
HieraSurg: Hierarchy-Aware Diffusion Model for Surgical Video Generation
Surgical Video Synthesis has emerged as a promising research direction following the success of diffusion models in general-domain video generation. Although existing approaches achieve high-quality video generation, most are unconditional and fail to maintain consistency with surgical actions and phases, lacking the surgical understanding and fine-grained guidance necessary for factual simulation. We address these challenges by proposing HieraSurg, a hierarchy-aware surgical video generation framework consisting of two specialized diffusion models. Given a surgical phase and an initial frame, HieraSurg first predicts future coarse-grained semantic changes through a segmentation prediction model. The final video is then generated by a second-stage model that augments these temporal segmentation maps with fine-grained visual features, leading to effective texture rendering and integration of semantic information in the video space. Our approach leverages surgical information at multiple levels of abstraction, including surgical phase, action triplets, and panoptic segmentation maps. The experimental results on Cholecystectomy Surgical Video Generation demonstrate that the model significantly outperforms prior work both quantitatively and qualitatively, showing strong generalization capabilities and the ability to generate higher frame-rate videos. The model exhibits particularly fine-grained adherence when provided with existing segmentation maps, suggesting its potential for practical surgical applications.
null
https://arxiv.org/abs/2506.21287v1
https://arxiv.org/pdf/2506.21287v1.pdf
null
[ "Diego Biagini", "Nassir Navab", "Azade Farshad" ]
[ "Panoptic Segmentation", "Segmentation", "Video Generation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/continual-self-supervised-learning-with
2506.21312
null
null
Continual Self-Supervised Learning with Masked Autoencoders in Remote Sensing
The development of continual learning (CL) methods, which aim to learn new tasks in a sequential manner from the training data acquired continuously, has gained great attention in remote sensing (RS). The existing CL methods in RS, while learning new tasks, enhance robustness towards catastrophic forgetting. This is achieved by using a large number of labeled training samples, which is costly and not always feasible to gather in RS. To address this problem, we propose a novel continual self-supervised learning method in the context of masked autoencoders (denoted as CoSMAE). The proposed CoSMAE consists of two components: i) data mixup; and ii) model mixup knowledge distillation. Data mixup is associated with retaining information on previous data distributions by interpolating images from the current task with those from the previous tasks. Model mixup knowledge distillation is associated with distilling knowledge from past models and the current model simultaneously by interpolating their model weights to form a teacher for the knowledge distillation. The two components complement each other to regularize the MAE at the data and model levels to facilitate better generalization across tasks and reduce the risk of catastrophic forgetting. Experimental results show that CoSMAE achieves significant improvements of up to 4.94% over state-of-the-art CL methods applied to MAE. Our code is publicly available at: https://git.tu-berlin.de/rsim/CoSMAE.
null
https://arxiv.org/abs/2506.21312v1
https://arxiv.org/pdf/2506.21312v1.pdf
null
[ "Lars Möllenbrok", "Behnood Rasti", "Begüm Demir" ]
[ "Continual Learning", "Continual Self-Supervised Learning", "Knowledge Distillation", "Self-Supervised Learning" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/facebookresearch/mixup-cifar10", "description": "**Mixup** is a data augmentation technique that generates a weighted combination of random image pairs from the training data. Given two images and their ground truth labels: $\\left(x\\_{i}, y\\_{i}\\right), \\left(x\\_{j}, y\\_{j}\\right)$, a synthetic training example $\\left(\\hat{x}, \\hat{y}\\right)$ is generated as:\r\n\r\n$$ \\hat{x} = \\lambda{x\\_{i}} + \\left(1 − \\lambda\\right){x\\_{j}} $$\r\n$$ \\hat{y} = \\lambda{y\\_{i}} + \\left(1 − \\lambda\\right){y\\_{j}} $$\r\n\r\nwhere $\\lambda \\sim \\text{Beta}\\left(\\alpha = 0.2\\right)$ is independently sampled for each augmented example.", "full_name": "Mixup", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Data Augmentation** refers to a class of methods that augment an image dataset to increase the effective size of the training set, or as a form of regularization to help the network learn more effective representations.", "name": "Image Data Augmentation", "parent": null }, "name": "Mixup", "source_title": "mixup: Beyond Empirical Risk Minimization", "source_url": "http://arxiv.org/abs/1710.09412v2" }, { "code_snippet_url": null, "description": "", "full_name": "Masked autoencoder", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.", "name": "Self-Supervised Learning", "parent": null }, "name": "MAE", "source_title": "Masked Autoencoders Are Scalable Vision Learners", "source_url": "https://arxiv.org/abs/2111.06377v2" }, { "code_snippet_url": "https://research.google/blog/auto-generated-summaries-in-google-docs/", "description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. 
Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)", "full_name": "Knowledge Distillation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Knowledge Distillation", "parent": null }, "name": "Knowledge Distillation", "source_title": "Distilling the Knowledge in a Neural Network", "source_url": "http://arxiv.org/abs/1503.02531v1" } ]
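The Mixup entry in the methods list above states the interpolation x̂ = λx_i + (1 − λ)x_j, ŷ = λy_i + (1 − λ)y_j with λ ~ Beta(α, α). A minimal sketch of that augmentation for a batch of images and one-hot labels follows; the description samples λ per example, while this sketch draws a single λ per batch for brevity, as many implementations do, and α and the shapes are illustrative.

```python
import torch

def mixup_batch(images: torch.Tensor, one_hot_labels: torch.Tensor, alpha: float = 0.2):
    """Mix each example with a randomly permuted partner from the same batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_x = lam * images + (1.0 - lam) * images[perm]
    mixed_y = lam * one_hot_labels + (1.0 - lam) * one_hot_labels[perm]
    return mixed_x, mixed_y

x = torch.randn(16, 3, 32, 32)
y = torch.nn.functional.one_hot(torch.randint(0, 10, (16,)), num_classes=10).float()
mx, my = mixup_batch(x, y)
```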
https://paperswithcode.com/paper/drishtikon-multi-granular-visual-grounding
2506.21316
null
null
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images
Visual grounding in text-rich document images is a critical yet underexplored challenge for document intelligence and visual question answering (VQA) systems. We present DrishtiKon, a multi-granular visual grounding framework designed to enhance interpretability and trust in VQA for complex, multilingual documents. Our approach integrates robust multi-lingual OCR, large language models, and a novel region matching algorithm to accurately localize answer spans at block, line, word, and point levels. We curate a new benchmark from the CircularsVQA test set, providing fine-grained, human-verified annotations across multiple granularities. Extensive experiments demonstrate that our method achieves state-of-the-art grounding accuracy, with line-level granularity offering the best trade-off between precision and recall. Ablation studies further highlight the benefits of multi-block and multi-line reasoning. Comparative evaluations with leading vision-language models reveal the limitations of current VLMs in precise localization, underscoring the effectiveness of our structured, alignment-based approach. Our findings pave the way for more robust and interpretable document understanding systems in real-world, text-centric scenarios. Code and dataset have been made available at https://github.com/kasuba-badri-vishal/DhrishtiKon.
Visual grounding in text-rich document images is a critical yet underexplored challenge for document intelligence and visual question answering (VQA) systems.
https://arxiv.org/abs/2506.21316v1
https://arxiv.org/pdf/2506.21316v1.pdf
null
[ "Badri Vishal Kasuba", "Parag Chaudhuri", "Ganesh Ramakrishnan" ]
[ "document understanding", "Optical Character Recognition (OCR)", "Question Answering", "Visual Grounding", "Visual Question Answering", "Visual Question Answering (VQA)" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/llava-pose-enhancing-human-pose-and-action
2506.21317
null
null
LLaVA-Pose: Enhancing Human Pose and Action Understanding via Keypoint-Integrated Instruction Tuning
Current vision-language models (VLMs) are well-adapted for general visual understanding tasks. However, they perform inadequately when handling complex visual tasks related to human poses and actions due to the lack of specialized vision-language instruction-following data. We introduce a method for generating such data by integrating human keypoints with traditional visual features such as captions and bounding boxes, enabling more precise understanding of human-centric scenes. Our approach constructs a dataset comprising 200,328 samples tailored to fine-tune models for human-centric tasks, focusing on three areas: conversation, detailed description, and complex reasoning. We establish an Extended Human Pose and Action Understanding Benchmark (E-HPAUB) to assess model performance on human pose and action understanding. We fine-tune the LLaVA-1.5-7B model using this dataset and evaluate our resulting LLaVA-Pose model on the benchmark, achieving significant improvements. Experimental results show an overall improvement of 33.2% compared to the original LLaVA-1.5-7B model. These findings highlight the effectiveness of keypoint-integrated data in enhancing multimodal models for human-centric visual understanding. Code is available at https://github.com/Ody-trek/LLaVA-Pose.
We fine-tune the LLaVA-1.5-7B model using this dataset and evaluate our resulting LLaVA-Pose model on the benchmark, achieving significant improvements.
https://arxiv.org/abs/2506.21317v1
https://arxiv.org/pdf/2506.21317v1.pdf
null
[ "Dewen Zhang", "Tahir Hussain", "Wangpeng An", "Hayaru Shouno" ]
[ "Action Understanding", "Instruction Following" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/holistic-surgical-phase-recognition-with
2506.21330
null
null
Holistic Surgical Phase Recognition with Hierarchical Input Dependent State Space Models
Surgical workflow analysis is essential in robot-assisted surgeries, yet the long duration of such procedures poses significant challenges for comprehensive video analysis. Recent approaches have predominantly relied on transformer models; however, their quadratic attention mechanism restricts efficient processing of lengthy surgical videos. In this paper, we propose a novel hierarchical input-dependent state space model that leverages the linear scaling property of state space models to enable decision making on full-length videos while capturing both local and global dynamics. Our framework incorporates a temporally consistent visual feature extractor, which appends a state space model head to a visual feature extractor to propagate temporal information. The proposed model consists of two key modules: a local-aggregation state space model block that effectively captures intricate local dynamics, and a global-relation state space model block that models temporal dependencies across the entire video. The model is trained using a hybrid discrete-continuous supervision strategy, where both signals of discrete phase labels and continuous phase progresses are propagated through the network. Experiments have shown that our method outperforms the current state-of-the-art methods by a large margin (+2.8% on Cholec80, +4.3% on MICCAI2016, and +12.9% on Heichole datasets). Code will be publicly available after paper acceptance.
null
https://arxiv.org/abs/2506.21330v1
https://arxiv.org/pdf/2506.21330v1.pdf
null
[ "Haoyang Wu", "Tsun-Hsuan Wang", "Mathias Lechner", "Ramin Hasani", "Jennifer A. Eckhoff", "Paul Pak", "Ozanan R. Meireles", "Guy Rosman", "Yutong Ban", "Daniela Rus" ]
[ "State Space Models", "Surgical phase recognition" ]
2025-06-26T00:00:00
null
null
null
null
[]
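The surgical phase recognition abstract above relies on state space models whose recurrence scales linearly with sequence length. The following is a generic, discretized linear SSM scan (x_t = A x_{t-1} + B u_t, y_t = C x_t), included only to illustrate the recurrence the abstract refers to; it is not the paper's hierarchical input-dependent model, and all dimensions are assumptions.

```python
import torch

def linear_ssm_scan(u: torch.Tensor, A: torch.Tensor, B: torch.Tensor, C: torch.Tensor):
    """Sequential scan of a discrete linear state space model.

    u: (seq_len, input_dim) input sequence
    A: (state_dim, state_dim), B: (state_dim, input_dim), C: (output_dim, state_dim)
    Cost grows linearly with seq_len, unlike the quadratic cost of full self-attention.
    """
    state = torch.zeros(A.size(0))
    outputs = []
    for u_t in u:                      # x_t = A x_{t-1} + B u_t ;  y_t = C x_t
        state = A @ state + B @ u_t
        outputs.append(C @ state)
    return torch.stack(outputs)

seq = torch.randn(100, 8)
A = 0.9 * torch.eye(16)
B = 0.1 * torch.randn(16, 8)
C = 0.1 * torch.randn(4, 16)
y = linear_ssm_scan(seq, A, B, C)      # shape (100, 4)
```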
https://paperswithcode.com/paper/panst3r-multi-view-consistent-panoptic
2506.21348
null
null
PanSt3R: Multi-view Consistent Panoptic Segmentation
Panoptic segmentation of 3D scenes, involving the segmentation and classification of object instances in a dense 3D reconstruction of a scene, is a challenging problem, especially when relying solely on unposed 2D images. Existing approaches typically leverage off-the-shelf models to extract per-frame 2D panoptic segmentations, before optimizing an implicit geometric representation (often based on NeRF) to integrate and fuse the 2D predictions. We argue that relying on 2D panoptic segmentation for a problem inherently 3D and multi-view is likely suboptimal as it fails to leverage the full potential of spatial relationships across views. In addition to requiring camera parameters, these approaches also necessitate computationally expensive test-time optimization for each scene. Instead, in this work, we propose a unified and integrated approach PanSt3R, which eliminates the need for test-time optimization by jointly predicting 3D geometry and multi-view panoptic segmentation in a single forward pass. Our approach builds upon recent advances in 3D reconstruction, specifically upon MUSt3R, a scalable multi-view version of DUSt3R, and enhances it with semantic awareness and multi-view panoptic segmentation capabilities. We additionally revisit the standard post-processing mask merging procedure and introduce a more principled approach for multi-view segmentation. We also introduce a simple method for generating novel-view predictions based on the predictions of PanSt3R and vanilla 3DGS. Overall, the proposed PanSt3R is conceptually simple, yet fast and scalable, and achieves state-of-the-art performance on several benchmarks, while being orders of magnitude faster than existing methods.
null
https://arxiv.org/abs/2506.21348v1
https://arxiv.org/pdf/2506.21348v1.pdf
null
[ "Lojze Zust", "Yohann Cabon", "Juliette Marrie", "Leonid Antsfeld", "Boris Chidlovskii", "Jerome Revaud", "Gabriela Csurka" ]
[ "2D Panoptic Segmentation", "3D geometry", "3DGS", "3D Reconstruction", "NeRF", "Panoptic Segmentation", "Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/shotbench-expert-level-cinematic
2506.21356
null
null
ShotBench: Expert-Level Cinematic Understanding in Vision-Language Models
Cinematography, the fundamental visual language of film, is essential for conveying narrative, emotion, and aesthetic quality. While recent Vision-Language Models (VLMs) demonstrate strong general visual understanding, their proficiency in comprehending the nuanced cinematic grammar embedded within individual shots remains largely unexplored and lacks robust evaluation. This critical gap limits both fine-grained visual comprehension and the precision of AI-assisted video generation. To address this, we introduce ShotBench, a comprehensive benchmark specifically designed for cinematic language understanding. It features over 3.5k expert-annotated QA pairs from images and video clips, meticulously curated from over 200 acclaimed (predominantly Oscar-nominated) films and spanning eight key cinematography dimensions. Our evaluation of 24 leading VLMs on ShotBench reveals their substantial limitations: even the top-performing model achieves less than 60% average accuracy, particularly struggling with fine-grained visual cues and complex spatial reasoning. To catalyze advancement in this domain, we construct ShotQA, a large-scale multimodal dataset comprising approximately 70k cinematic QA pairs. Leveraging ShotQA, we develop ShotVL through supervised fine-tuning and Group Relative Policy Optimization. ShotVL significantly outperforms all existing open-source and proprietary models on ShotBench, establishing new state-of-the-art performance. We open-source our models, data, and code to foster rapid progress in this crucial area of AI-driven cinematic understanding and generation.
null
https://arxiv.org/abs/2506.21356v1
https://arxiv.org/pdf/2506.21356v1.pdf
null
[ "Hongbo Liu", "Jingwen He", "Yi Jin", "Dian Zheng", "Yuhao Dong", "Fan Zhang", "Ziqi Huang", "Yinan He", "Yangguang Li", "WeiChao Chen", "Yu Qiao", "Wanli Ouyang", "Shengjie Zhao", "Ziwei Liu" ]
[ "Spatial Reasoning", "Video Generation" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/copa-sg-dense-scene-graphs-with-parametric
2506.21357
null
null
CoPa-SG: Dense Scene Graphs with Parametric and Proto-Relations
2D scene graphs provide a structural and explainable framework for scene understanding. However, current work still struggles with the lack of accurate scene graph data. To overcome this data bottleneck, we present CoPa-SG, a synthetic scene graph dataset with highly precise ground truth and exhaustive relation annotations between all objects. Moreover, we introduce parametric and proto-relations, two new fundamental concepts for scene graphs. The former provides a much more fine-grained representation than its traditional counterpart by enriching relations with additional parameters such as angles or distances. The latter encodes hypothetical relations in a scene graph and describes how relations would form if new objects are placed in the scene. Using CoPa-SG, we compare the performance of various scene graph generation models. We demonstrate how our new relation types can be integrated in downstream applications to enhance planning and reasoning capabilities.
null
https://arxiv.org/abs/2506.21357v1
https://arxiv.org/pdf/2506.21357v1.pdf
null
[ "Julian Lorenz", "Mrunmai Phatak", "Robin Schön", "Katja Ludwig", "Nico Hörmann", "Annemarie Friedrich", "Rainer Lienhart" ]
[ "Graph Generation", "Relation", "Scene Graph Generation", "Scene Understanding" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ca-i2p-channel-adaptive-registration-network
2506.21364
null
null
CA-I2P: Channel-Adaptive Registration Network with Global Optimal Selection
Detection-free methods typically follow a coarse-to-fine pipeline, extracting image and point cloud features for patch-level matching and refining dense pixel-to-point correspondences. However, differences in feature channel attention between images and point clouds may lead to degraded matching results, ultimately impairing registration accuracy. Furthermore, similar structures in the scene could lead to redundant correspondences in cross-modal matching. To address these issues, we propose Channel Adaptive Adjustment Module (CAA) and Global Optimal Selection Module (GOS). CAA enhances intra-modal features and suppresses cross-modal sensitivity, while GOS replaces local selection with global optimization. Experiments on RGB-D Scenes V2 and 7-Scenes demonstrate the superiority of our method, achieving state-of-the-art performance in image-to-point cloud registration.
null
https://arxiv.org/abs/2506.21364v1
https://arxiv.org/pdf/2506.21364v1.pdf
null
[ "Zhixin Cheng", "Jiacheng Deng", "Xinjun Li", "Xiaotian Yin", "Bohao Liao", "Baoqun Yin", "Wenfei Yang", "Tianzhu Zhang" ]
[ "global-optimization", "Image to Point Cloud Registration", "Point Cloud Registration" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/genflow-interactive-modular-system-for-image
2506.21369
null
null
GenFlow: Interactive Modular System for Image Generation
Generative art unlocks boundless creative possibilities, yet its full potential remains untapped due to the technical expertise required for advanced architectural concepts and computational workflows. To bridge this gap, we present GenFlow, a novel modular framework that empowers users of all skill levels to generate images with precision and ease. Featuring a node-based editor for seamless customization and an intelligent assistant powered by natural language processing, GenFlow transforms the complexity of workflow creation into an intuitive and accessible experience. By automating deployment processes and minimizing technical barriers, our framework makes cutting-edge generative art tools available to everyone. A user study demonstrated GenFlow's ability to optimize workflows, reduce task completion times, and enhance user understanding through its intuitive interface and adaptive features. These results position GenFlow as a groundbreaking solution that redefines accessibility and efficiency in the realm of generative art.
null
https://arxiv.org/abs/2506.21369v1
https://arxiv.org/pdf/2506.21369v1.pdf
null
[ "Duc-Hung Nguyen", "Huu-Phuc Huynh", "Minh-Triet Tran", "Trung-Nghia Le" ]
[ "Image Generation" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fastref-fast-prototype-refinement-for-few
2506.21398
null
null
FastRef:Fast Prototype Refinement for Few-Shot Industrial Anomaly Detection
Few-shot industrial anomaly detection (FS-IAD) presents a critical challenge for practical automated inspection systems operating in data-scarce environments. While existing approaches predominantly focus on deriving prototypes from limited normal samples, they typically neglect to systematically incorporate query image statistics to enhance prototype representativeness. To address this issue, we propose FastRef, a novel and efficient prototype refinement framework for FS-IAD. Our method operates through an iterative two-stage process: (1) characteristic transfer from query features to prototypes via an optimizable transformation matrix, and (2) anomaly suppression through prototype alignment. The characteristic transfer is achieved through linear reconstruction of query features from prototypes, while the anomaly suppression addresses a key observation in FS-IAD: unlike conventional IAD with abundant normal prototypes, the limited-sample setting makes anomaly reconstruction more probable. Therefore, we employ optimal transport (OT) for non-Gaussian sampled features to measure and minimize the gap between prototypes and their refined counterparts for anomaly suppression. For comprehensive evaluation, we integrate FastRef with four competitive prototype-based FS-IAD methods: PatchCore, FastRecon, WinCLIP, and AnomalyDINO. Extensive experiments across four benchmark datasets (MVTec, ViSA, MPDD, and RealIAD) demonstrate both the effectiveness and computational efficiency of our approach under 1/2/4-shot settings.
null
https://arxiv.org/abs/2506.21398v1
https://arxiv.org/pdf/2506.21398v1.pdf
null
[ "Long Tian", "Yufei Li", "Yuyang Dai", "Wenchao Chen", "Xiyang Liu", "Bo Chen" ]
[ "Anomaly Detection", "Computational Efficiency" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/xverse-consistent-multi-subject-control-of
2506.21416
null
null
XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation
Achieving fine-grained control over subject identity and semantic attributes (pose, style, lighting) in text-to-image generation, particularly for multiple subjects, often undermines the editability and coherence of Diffusion Transformers (DiTs). Many approaches introduce artifacts or suffer from attribute entanglement. To overcome these challenges, we propose a novel multi-subject controlled generation model XVerse. By transforming reference images into offsets for token-specific text-stream modulation, XVerse allows for precise and independent control for specific subject without disrupting image latents or features. Consequently, XVerse offers high-fidelity, editable multi-subject image synthesis with robust control over individual subject characteristics and semantic attributes. This advancement significantly improves personalized and complex scene generation capabilities.
Achieving fine-grained control over subject identity and semantic attributes (pose, style, lighting) in text-to-image generation, particularly for multiple subjects, often undermines the editability and coherence of Diffusion Transformers (DiTs).
https://arxiv.org/abs/2506.21416v1
https://arxiv.org/pdf/2506.21416v1.pdf
null
[ "Bowen Chen", "Mengyi Zhao", "Haomiao Sun", "Li Chen", "Xu Wang", "Kang Du", "Xinglong Wu" ]
[ "Attribute", "Image Generation", "Scene Generation", "Text to Image Generation", "Text-to-Image Generation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/hypersort-self-organising-robust-training
2506.21430
null
null
HyperSORT: Self-Organising Robust Training with hyper-networks
Medical imaging datasets often contain heterogeneous biases ranging from erroneous labels to inconsistent labeling styles. Such biases can negatively impact deep segmentation networks performance. Yet, the identification and characterization of such biases is a particularly tedious and challenging task. In this paper, we introduce HyperSORT, a framework using a hyper-network predicting UNets' parameters from latent vectors representing both the image and annotation variability. The hyper-network parameters and the latent vector collection corresponding to each data sample from the training set are jointly learned. Hence, instead of optimizing a single neural network to fit a dataset, HyperSORT learns a complex distribution of UNet parameters where low density areas can capture noise-specific patterns while larger modes robustly segment organs in differentiated but meaningful manners. We validate our method on two 3D abdominal CT public datasets: first a synthetically perturbed version of the AMOS dataset, and TotalSegmentator, a large scale dataset containing real unknown biases and errors. Our experiments show that HyperSORT creates a structured mapping of the dataset allowing the identification of relevant systematic biases and erroneous samples. Latent space clusters yield UNet parameters performing the segmentation task in accordance with the underlying learned systematic bias. The code and our analysis of the TotalSegmentator dataset are made available: https://github.com/ImFusionGmbH/HyperSORT
Medical imaging datasets often contain heterogeneous biases ranging from erroneous labels to inconsistent labeling styles.
https://arxiv.org/abs/2506.21430v1
https://arxiv.org/pdf/2506.21430v1.pdf
null
[ "Samuel Joutard", "Marijn Stollenga", "Marc Balle Sanchez", "Mohammad Farid Azampour", "Raphael Prevost" ]
[]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/benchmarking-deep-learning-and-vision
2506.21444
null
null
Benchmarking Deep Learning and Vision Foundation Models for Atypical vs. Normal Mitosis Classification with Cross-Dataset Evaluation
Atypical mitoses mark a deviation in the cell division process that can be an independent prognostically relevant marker for tumor malignancy. However, their identification remains challenging due to low prevalence, at times subtle morphological differences from normal mitoses, low inter-rater agreement among pathologists, and class imbalance in datasets. Building on the Atypical Mitosis dataset for Breast Cancer (AMi-Br), this study presents a comprehensive benchmark comparing deep learning approaches for automated atypical mitotic figure (AMF) classification, including baseline models, foundation models with linear probing, and foundation models fine-tuned with low-rank adaptation (LoRA). For rigorous evaluation, we further introduce two new hold-out AMF datasets - AtNorM-Br, a dataset of mitoses from the TCGA breast cancer cohort, and AtNorM-MD, a multi-domain dataset of mitoses from the MIDOG++ training set. We found average balanced accuracy values of up to 0.8135, 0.7696, and 0.7705 on the in-domain AMi-Br and the out-of-domain AtNorM-Br and AtNorM-MD datasets, respectively, with the results being particularly good for LoRA-based adaptation of the Virchow line of foundation models. Our work shows that atypical mitosis classification, while being a challenging problem, can be effectively addressed through the use of recent advances in transfer learning and model fine-tuning techniques. We make available all code and data used in this paper in this GitHub repository: https://github.com/DeepMicroscopy/AMi-Br_Benchmark.
Atypical mitoses mark a deviation in the cell division process that can be an independent prognostically relevant marker for tumor malignancy.
https://arxiv.org/abs/2506.21444v1
https://arxiv.org/pdf/2506.21444v1.pdf
null
[ "Sweta Banerjee", "Viktoria Weiss", "Taryn A. Donovan", "Rutger A. Fick", "Thomas Conrad", "Jonas Ammeling", "Nils Porsche", "Robert Klopfleisch", "Christopher Kaltenecker", "Katharina Breininger", "Marc Aubreville", "Christof A. Bertram" ]
[ "Benchmarking", "Transfer Learning" ]
2025-06-26T00:00:00
null
null
null
null
[]
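The mitosis-classification benchmark above fine-tunes foundation models with low-rank adaptation (LoRA). A generic LoRA linear layer, sketched below only for illustration (rank, scaling, and layer shapes are assumptions and it is not tied to any specific foundation model), wraps a frozen weight with a trainable low-rank update:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = base(x) + (alpha / r) * B(A(x)), with the base weight frozen."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # freeze pretrained weight
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)          # start as a zero update
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```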
https://paperswithcode.com/paper/controllable-3d-placement-of-objects-with
2506.21446
null
null
Controllable 3D Placement of Objects with Scene-Aware Diffusion Models
Image editing approaches have become more powerful and flexible with the advent of powerful text-conditioned generative models. However, placing objects in an environment with a precise location and orientation still remains a challenge, as this typically requires carefully crafted inpainting masks or prompts. In this work, we show that a carefully designed visual map, combined with coarse object masks, is sufficient for high quality object placement. We design a conditioning signal that resolves ambiguities, while being flexible enough to allow for changing of shapes or object orientations. By building on an inpainting model, we leave the background intact by design, in contrast to methods that model objects and background jointly. We demonstrate the effectiveness of our method in the automotive setting, where we compare different conditioning signals in novel object placement tasks. These tasks are designed to measure edit quality not only in terms of appearance, but also in terms of pose and location accuracy, including cases that require non-trivial shape changes. Lastly, we show that fine location control can be combined with appearance control to place existing objects in precise locations in a scene.
null
https://arxiv.org/abs/2506.21446v1
https://arxiv.org/pdf/2506.21446v1.pdf
null
[ "Mohamed Omran", "Dimitris Kalatzis", "Jens Petersen", "Amirhossein Habibian", "Auke Wiggers" ]
[ "Object" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.", "full_name": "Inpainting", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.", "name": "Self-Supervised Learning", "parent": null }, "name": "Inpainting", "source_title": "Context Encoders: Feature Learning by Inpainting", "source_url": "http://arxiv.org/abs/1604.07379v2" } ]
https://paperswithcode.com/paper/a-comprehensive-dataset-for-underground-miner
2506.21451
null
null
A Comprehensive Dataset for Underground Miner Detection in Diverse Scenario
Underground mining operations face significant safety challenges that make emergency response capabilities crucial. While robots have shown promise in assisting with search and rescue operations, their effectiveness depends on reliable miner detection capabilities. Deep learning algorithms offer potential solutions for automated miner detection, but require comprehensive training datasets, which are currently lacking for underground mining environments. This paper presents a novel thermal imaging dataset specifically designed to enable the development and validation of miner detection systems for potential emergency applications. We systematically captured thermal imagery of various mining activities and scenarios to create a robust foundation for detection algorithms. To establish baseline performance metrics, we evaluated several state-of-the-art object detection algorithms including YOLOv8, YOLOv10, YOLO11, and RT-DETR on our dataset. While not exhaustive of all possible emergency situations, this dataset serves as a crucial first step toward developing reliable thermal-based miner detection systems that could eventually be deployed in real emergency scenarios. This work demonstrates the feasibility of using thermal imaging for miner detection and establishes a foundation for future research in this critical safety application.
null
https://arxiv.org/abs/2506.21451v1
https://arxiv.org/pdf/2506.21451v1.pdf
null
[ "Cyrus Addy", "Ajay Kumar Gurumadaiah", "Yixiang Gao", "Kwame Awuah-Offei" ]
[ "object-detection", "Object Detection" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "You Only Look Once", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.", "name": "Object Detection Models", "parent": null }, "name": "YOLOv8", "source_title": "YOLOv3: An Incremental Improvement", "source_url": "http://arxiv.org/abs/1804.02767v1" } ]
https://paperswithcode.com/paper/rethinking-oversaturation-in-classifier-free
2506.21452
null
null
Rethinking Oversaturation in Classifier-Free Guidance via Low Frequency
Classifier-free guidance (CFG) succeeds in conditional diffusion models that use a guidance scale to balance the influence of conditional and unconditional terms. A high guidance scale is used to enhance the performance of the conditional term. However, a high guidance scale often results in oversaturation and unrealistic artifacts. In this paper, we introduce a new perspective based on low-frequency signals, identifying the accumulation of redundant information in these signals as the key factor behind oversaturation and unrealistic artifacts. Building on this insight, we propose low-frequency improved classifier-free guidance (LF-CFG) to mitigate these issues. Specifically, we introduce an adaptive threshold-based measurement to pinpoint the locations of redundant information. We determine a reasonable threshold by analyzing the change rate of low-frequency information between prior and current steps. We then apply a down-weight strategy to reduce the impact of redundant information in the low-frequency signals. Experimental results demonstrate that LF-CFG effectively alleviates oversaturation and unrealistic artifacts across various diffusion models, including Stable Diffusion-XL, Stable Diffusion 2.1, 3.0, 3.5, and SiT-XL.
null
https://arxiv.org/abs/2506.21452v1
https://arxiv.org/pdf/2506.21452v1.pdf
null
[ "Kaiyu Song", "Hanjiang Lai" ]
[]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/evaluation-of-traffic-signals-for-daily
2506.21469
null
null
Evaluation of Traffic Signals for Daily Traffic Pattern
Turning movement count (TMC) data is crucial for traffic signal design, intersection geometry planning, traffic flow, and congestion analysis. This work proposes three signal configuration methods, called dynamic, static, and hybrid, for TMC-based traffic signals. A vision-based tracking system is developed to estimate the TMC of six intersections in Las Vegas using traffic cameras. The intersection design, route (e.g. vehicle movement directions), and signal configuration files with compatible formats are synthesized and imported into Simulation of Urban MObility (SUMO) for signal evaluation with realistic data. The initial experimental results based on estimated waiting times indicate that cycle times of 90 and 120 seconds work best for all intersections. In addition, four intersections show better performance for the dynamic signal timing configuration, and the other two with lower performance have a lower ratio of total vehicle count to total lanes of the intersection leg. Since daily traffic flow often exhibits a bimodal pattern, we propose a hybrid signal method that switches between the dynamic and static methods, adapting to peak and off-peak traffic conditions for improved flow management. Accordingly, a built-in traffic generator module creates vehicle routes for 4 hours, including peak hours, and a signal design module produces signal schedule cycles according to the static, dynamic, and hybrid methods. Vehicle count distributions are weighted differently for each zone (i.e., West, North, East, South) to generate diverse traffic patterns. The extended experimental results for 6 intersections with 4 hours of simulation time imply that zone-based traffic pattern distributions affect signal design selection. Although the static method works well for evenly distributed zone-based traffic, the hybrid method works well for heavily weighted traffic at intersection pairs in the West-East and North-South zones.
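As a toy illustration of the hybrid idea, the sketch below switches between a demand-proportional (dynamic) plan and a fixed (static) plan by time of day; the peak-hour set, cycle length, and split rule are assumptions rather than the paper's calibrated values.

```python
from dataclasses import dataclass

@dataclass
class SignalPlan:
    cycle_s: int          # total cycle length in seconds
    green_splits: dict    # approach -> share of the cycle

STATIC_PLAN = SignalPlan(cycle_s=90, green_splits={"NS": 0.5, "EW": 0.5})

def dynamic_plan(tmc: dict, cycle_s: int = 90) -> SignalPlan:
    """Allocate green time proportionally to observed turning movement counts."""
    total = sum(tmc.values()) or 1
    return SignalPlan(cycle_s, {leg: count / total for leg, count in tmc.items()})

def hybrid_plan(tmc: dict, hour: int, peak_hours=(7, 8, 9, 16, 17, 18)) -> SignalPlan:
    """Assumed hybrid rule: dynamic timing during peak hours, static timing otherwise."""
    return dynamic_plan(tmc) if hour in peak_hours else STATIC_PLAN

print(hybrid_plan({"NS": 420, "EW": 180}, hour=8))   # peak -> demand-proportional splits
print(hybrid_plan({"NS": 60, "EW": 55}, hour=13))    # off-peak -> fixed 90 s plan
```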
null
https://arxiv.org/abs/2506.21469v1
https://arxiv.org/pdf/2506.21469v1.pdf
null
[ "Mohammad Shokrolah Shirazi", "Hung-Fu Chang" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/logios-an-open-source-greek-polytonic-optical
2506.21474
null
null
Logios : An open source Greek Polytonic Optical Character Recognition system
In this paper, we present an Optical Character Recognition (OCR) system specifically designed for the accurate recognition and digitization of Greek polytonic texts. By leveraging the combined strengths of convolutional layers for feature extraction and recurrent layers for sequence learning, our system addresses the unique challenges posed by Greek polytonic scripts. This approach aims to overcome the limitations of traditional OCR methods, offering significant improvements in accuracy and efficiency. We release the underlying model as an open-source library and make our OCR platform available for academic use.
null
https://arxiv.org/abs/2506.21474v1
https://arxiv.org/pdf/2506.21474v1.pdf
null
[ "Perifanos Konstantinos", "Goutsos Dionisis" ]
[ "Optical Character Recognition", "Optical Character Recognition (OCR)" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/global-and-local-entailment-learning-for
2506.21476
null
null
Global and Local Entailment Learning for Natural World Imagery
Learning the hierarchical structure of data in vision-language models is a significant challenge. Previous works have attempted to address this challenge by employing entailment learning. However, these approaches fail to model the transitive nature of entailment explicitly, which establishes the relationship between order and semantics within a representation space. In this work, we introduce Radial Cross-Modal Embeddings (RCME), a framework that enables the explicit modeling of transitivity-enforced entailment. Our proposed framework optimizes for the partial order of concepts within vision-language models. By leveraging our framework, we develop a hierarchical vision-language foundation model capable of representing the hierarchy in the Tree of Life. Our experiments on hierarchical species classification and hierarchical retrieval tasks demonstrate the enhanced performance of our models compared to the existing state-of-the-art models. Our code and models are open-sourced at https://vishu26.github.io/RCME/index.html.
Learning the hierarchical structure of data in vision-language models is a significant challenge.
https://arxiv.org/abs/2506.21476v1
https://arxiv.org/pdf/2506.21476v1.pdf
null
[ "Srikumar Sastry", "Aayush Dhakal", "Eric Xing", "Subash Khanal", "Nathan Jacobs" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/titan-query-token-based-domain-adaptive
2506.21484
null
null
TITAN: Query-Token based Domain Adaptive Adversarial Learning
We focus on the source-free domain adaptive object detection (SF-DAOD) problem when source data is unavailable during adaptation and the model must adapt to an unlabeled target domain. The majority of approaches for the problem employ a self-supervised approach using a student-teacher (ST) framework where pseudo-labels are generated via a source-pretrained model for further fine-tuning. We observe that the performance of a student model often degrades drastically, due to the collapse of the teacher model, primarily caused by high noise in pseudo-labels, resulting from domain bias, discrepancies, and a significant domain shift across domains. To obtain reliable pseudo-labels, we propose a Target-based Iterative Query-Token Adversarial Network (TITAN), which separates the target images into two subsets: those similar to the source (easy) and those dissimilar (hard). We propose a strategy to estimate variance to partition the target domain. This approach leverages the insight that higher detection variances correspond to higher recall and greater similarity to the source domain. Also, we incorporate query-token-based adversarial modules into a student-teacher baseline framework to reduce the domain gaps between two feature representations. Experiments conducted on four natural imaging datasets and two challenging medical datasets have substantiated the superior performance of TITAN compared to existing state-of-the-art (SOTA) methodologies. We report an mAP improvement of +22.7, +22.2, +21.1, and +3.7 percent over the current SOTA on C2F, C2B, S2C, and K2C benchmarks, respectively.
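The easy/hard target split can be sketched as follows, assuming per-image detection confidence scores are available; the median threshold here is only a stand-in for the paper's variance estimation strategy.

```python
import numpy as np

def split_target_domain(det_scores_per_image, threshold=None):
    """
    Sketch of variance-based partitioning: images with high detection-score variance
    are treated as source-like ("easy"), the rest as dissimilar ("hard").
    `det_scores_per_image` is a list of 1-D arrays of per-detection confidences.
    """
    variances = np.array([np.var(s) if len(s) > 1 else 0.0 for s in det_scores_per_image])
    if threshold is None:
        threshold = np.median(variances)   # assumed split point, not the paper's estimator
    easy = np.where(variances >= threshold)[0]
    hard = np.where(variances < threshold)[0]
    return easy, hard

easy_idx, hard_idx = split_target_domain([np.array([0.9, 0.4, 0.7]), np.array([0.50, 0.52])])
```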
null
https://arxiv.org/abs/2506.21484v1
https://arxiv.org/pdf/2506.21484v1.pdf
null
[ "Tajamul Ashraf", "Janibul Bashir" ]
[]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/mitigating-hallucination-of-large-vision
2506.21509
null
null
Mitigating Hallucination of Large Vision-Language Models via Dynamic Logits Calibration
Large Vision-Language Models (LVLMs) have demonstrated significant advancements in multimodal understanding, yet they are frequently hampered by hallucination: the generation of text that contradicts the visual input. Existing training-free decoding strategies exhibit critical limitations, including the use of static constraints that do not adapt to semantic drift during generation, inefficiency stemming from the need for multiple forward passes, and degradation of detail due to overly rigid intervention rules. To overcome these challenges, this paper introduces Dynamic Logits Calibration (DLC), a novel training-free decoding framework designed to dynamically align text generation with visual evidence at inference time. At each decoding step, DLC employs CLIP to assess the semantic alignment between the input image and the generated text sequence. Then, the Relative Visual Advantage (RVA) of candidate tokens is evaluated against a dynamically updated contextual baseline, adaptively adjusting output logits to favor tokens that are visually grounded. Furthermore, an adaptive weighting mechanism, informed by a real-time context alignment score, carefully balances the visual guidance while ensuring the overall quality of the textual output. Extensive experiments conducted across diverse benchmarks and various LVLM architectures (such as LLaVA, InstructBLIP, and MiniGPT-4) demonstrate that DLC significantly reduces hallucinations, outperforming current methods while maintaining high inference efficiency by avoiding multiple forward passes. Overall, we present an effective and efficient decoding-time solution to mitigate hallucinations, thereby enhancing the reliability of LVLMs in practical applications. Code will be released on GitHub.
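A heavily simplified sketch of the calibration step is given below; `clip_style_score` is a stub standing in for a real CLIP image-text scorer, and the advantage weight and baseline update rule are assumptions rather than the paper's adaptive mechanisms.

```python
import numpy as np

def clip_style_score(image, text: str) -> float:
    """Stub standing in for a CLIP image-text similarity; replace with a real scorer."""
    return 0.2 + 0.01 * len(text)   # placeholder value only

def calibrate_logits(logits, vocab, image, context, baseline, alpha=2.0, momentum=0.9):
    """Sketch of decoding-time logit calibration with a Relative Visual Advantage term."""
    scores = np.array([clip_style_score(image, context + tok) for tok in vocab])
    rva = scores - baseline                 # advantage over the running contextual baseline
    calibrated = logits + alpha * rva       # favour visually grounded candidate tokens
    baseline = momentum * baseline + (1 - momentum) * scores.mean()
    return calibrated, baseline

logits = np.zeros(3)
new_logits, baseline = calibrate_logits(logits, ["cat", "dog", "car"], None, "a photo of a ", 0.2)
```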
Large Vision-Language Models (LVLMs) have demonstrated significant advancements in multimodal understanding, yet they are frequently hampered by hallucination-the generation of text that contradicts visual input.
https://arxiv.org/abs/2506.21509v1
https://arxiv.org/pdf/2506.21509v1.pdf
null
[ "Jiahe Chen", "Jiaying He", "Qian Shao", "Qiyuan Chen", "Jiahe Ying", "Hongxia Xu", "Jintai Chen", "Jianwei Zheng", "Jian Wu" ]
[ "Hallucination", "Text Generation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/OpenAI/CLIP", "description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)", "full_name": "Contrastive Language-Image Pre-training", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Representations", "parent": null }, "name": "CLIP", "source_title": "Learning Transferable Visual Models From Natural Language Supervision", "source_url": "https://arxiv.org/abs/2103.00020v1" }, { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
https://paperswithcode.com/paper/ggtalker-talking-head-systhesis-with
2506.21513
null
null
GGTalker: Talking Head Synthesis with Generalizable Gaussian Priors and Identity-Specific Adaptation
Creating high-quality, generalizable speech-driven 3D talking heads remains a persistent challenge. Previous methods achieve satisfactory results for fixed viewpoints and small-scale audio variations, but they struggle with large head rotations and out-of-distribution (OOD) audio. Moreover, they are constrained by the need for time-consuming, identity-specific training. We believe the core issue lies in the lack of sufficient 3D priors, which limits the extrapolation capabilities of synthesized talking heads. To address this, we propose GGTalker, which synthesizes talking heads through a combination of generalizable priors and identity-specific adaptation. We introduce a two-stage Prior-Adaptation training strategy to learn Gaussian head priors and adapt to individual characteristics. We train Audio-Expression and Expression-Visual priors to capture the universal patterns of lip movements and the general distribution of head textures. During the Customized Adaptation, individual speaking styles and texture details are precisely modeled. Additionally, we introduce a color MLP to generate fine-grained, motion-aligned textures and a Body Inpainter to blend rendered results with the background, producing indistinguishable, photorealistic video frames. Comprehensive experiments show that GGTalker achieves state-of-the-art performance in rendering quality, 3D consistency, lip-sync accuracy, and training efficiency.
null
https://arxiv.org/abs/2506.21513v1
https://arxiv.org/pdf/2506.21513v1.pdf
null
[ "Wentao Hu", "Shunkai Li", "Ziqiao Peng", "Haoxian Zhang", "Fan Shi", "Xiaoqiang Liu", "Pengfei Wan", "Di Zhang", "Hui Tian" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/g-2-d-boosting-multimodal-learning-with
2506.21514
null
null
G$^{2}$D: Boosting Multimodal Learning with Gradient-Guided Distillation
Multimodal learning aims to leverage information from diverse data modalities to achieve more comprehensive performance. However, conventional multimodal models often suffer from modality imbalance, where one or a few modalities dominate model optimization, leading to suboptimal feature representation and underutilization of weak modalities. To address this challenge, we introduce Gradient-Guided Distillation (G$^{2}$D), a knowledge distillation framework that optimizes the multimodal model with a custom-built loss function that fuses both unimodal and multimodal objectives. G$^{2}$D further incorporates a dynamic sequential modality prioritization (SMP) technique in the learning process to ensure each modality leads the learning process, avoiding the pitfall of stronger modalities overshadowing weaker ones. We validate G$^{2}$D on multiple real-world datasets and show that G$^{2}$D amplifies the significance of weak modalities while training and outperforms state-of-the-art methods in classification and regression tasks. Our code is available at https://github.com/rAIson-Lab/G2D.
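A minimal sketch of fusing unimodal and multimodal objectives with a rotating "leading" modality is shown below; the real method derives its weights from gradient statistics, so the fixed `lead_boost` is only an illustrative stand-in.

```python
import torch
import torch.nn.functional as F

def g2d_style_loss(multimodal_logits, unimodal_logits: dict, targets, step: int,
                   lead_boost: float = 2.0):
    """
    Sketch: combine the multimodal objective with per-modality unimodal objectives,
    letting one modality "lead" each step (a simplified stand-in for sequential
    modality prioritization; the paper weights terms via gradient guidance).
    """
    names = sorted(unimodal_logits)
    leader = names[step % len(names)]
    loss = F.cross_entropy(multimodal_logits, targets)
    for name, logits in unimodal_logits.items():
        weight = lead_boost if name == leader else 1.0
        loss = loss + weight * F.cross_entropy(logits, targets)
    return loss

targets = torch.randint(0, 10, (8,))
loss = g2d_style_loss(torch.randn(8, 10),
                      {"audio": torch.randn(8, 10), "video": torch.randn(8, 10)},
                      targets, step=0)
```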
Multimodal learning aims to leverage information from diverse data modalities to achieve more comprehensive performance.
https://arxiv.org/abs/2506.21514v1
https://arxiv.org/pdf/2506.21514v1.pdf
null
[ "Mohammed Rakib", "Arunkumar Bagavathi" ]
[ "Knowledge Distillation", "Model Optimization" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://research.google/blog/auto-generated-summaries-in-google-docs/", "description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)", "full_name": "Knowledge Distillation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Knowledge Distillation", "parent": null }, "name": "Knowledge Distillation", "source_title": "Distilling the Knowledge in a Neural Network", "source_url": "http://arxiv.org/abs/1503.02531v1" } ]
https://paperswithcode.com/paper/madrive-memory-augmented-driving-scene
2506.21520
null
null
MADrive: Memory-Augmented Driving Scene Modeling
Recent advances in scene reconstruction have pushed toward highly realistic modeling of autonomous driving (AD) environments using 3D Gaussian splatting. However, the resulting reconstructions remain closely tied to the original observations and struggle to support photorealistic synthesis of significantly altered or novel driving scenarios. This work introduces MADrive, a memory-augmented reconstruction framework designed to extend the capabilities of existing scene reconstruction methods by replacing observed vehicles with visually similar 3D assets retrieved from a large-scale external memory bank. Specifically, we release MAD-Cars, a curated dataset of ~70K 360° car videos captured in the wild and present a retrieval module that finds the most similar car instances in the memory bank, reconstructs the corresponding 3D assets from video, and integrates them into the target scene through orientation alignment and relighting. The resulting replacements provide complete multi-view representations of vehicles in the scene, enabling photorealistic synthesis of substantially altered configurations, as demonstrated in our experiments. Project page: https://yandex-research.github.io/madrive/
null
https://arxiv.org/abs/2506.21520v1
https://arxiv.org/pdf/2506.21520v1.pdf
null
[ "Polina Karpikova", "Daniil Selikhanovych", "Kirill Struminsky", "Ruslan Musaev", "Maria Golitsyna", "Dmitry Baranchuk" ]
[ "Autonomous Driving" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/waft-warping-alone-field-transforms-for
2506.21526
null
null
WAFT: Warping-Alone Field Transforms for Optical Flow
We introduce Warping-Alone Field Transforms (WAFT), a simple and effective method for optical flow. WAFT is similar to RAFT but replaces cost volume with high-resolution warping, achieving better accuracy with lower memory cost. This design challenges the conventional wisdom that constructing cost volumes is necessary for strong performance. WAFT is a simple and flexible meta-architecture with minimal inductive biases and reliance on custom designs. Compared with existing methods, WAFT ranks 1st on Spring and KITTI benchmarks, achieves the best zero-shot generalization on KITTI, while being up to 4.1x faster than methods with similar performance. Code and model weights are available at https://github.com/princeton-vl/WAFT.
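The core operation, backward-warping a feature map by a flow field, can be sketched with `grid_sample`; this is a generic warp, not the released WAFT code.

```python
import torch
import torch.nn.functional as F

def warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a (B, C, H, W) feature map by a (B, 2, H, W) flow field in pixels."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                            torch.arange(w, device=feat.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow   # sampling locations
    # Normalize to [-1, 1]; grid_sample expects a (B, H, W, 2) grid in (x, y) order.
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((grid_x, grid_y), dim=-1)
    return F.grid_sample(feat, norm_grid, align_corners=True)

# Zero flow returns the input unchanged:
out = warp(torch.randn(1, 3, 64, 64), torch.zeros(1, 2, 64, 64))
```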
We introduce Warping-Alone Field Transforms (WAFT), a simple and effective method for optical flow.
https://arxiv.org/abs/2506.21526v1
https://arxiv.org/pdf/2506.21526v1.pdf
null
[ "Yihan Wang", "Jia Deng" ]
[ "Optical Flow Estimation", "Zero-shot Generalization" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/strumamba3d-exploring-structural-mamba-for
2506.21541
null
null
StruMamba3D: Exploring Structural Mamba for Self-supervised Point Cloud Representation Learning
Recently, Mamba-based methods have demonstrated impressive performance in point cloud representation learning by leveraging State Space Model (SSM) with the efficient context modeling ability and linear complexity. However, these methods still face two key issues that limit the potential of SSM: Destroying the adjacency of 3D points during SSM processing and failing to retain long-sequence memory as the input length increases in downstream tasks. To address these issues, we propose StruMamba3D, a novel paradigm for self-supervised point cloud representation learning. It enjoys several merits. First, we design spatial states and use them as proxies to preserve spatial dependencies among points. Second, we enhance the SSM with a state-wise update strategy and incorporate a lightweight convolution to facilitate interactions between spatial states for efficient structure modeling. Third, our method reduces the sensitivity of pre-trained Mamba-based models to varying input lengths by introducing a sequence length-adaptive strategy. Experimental results across four downstream tasks showcase the superior performance of our method. In addition, our method attains the SOTA 95.1% accuracy on ModelNet40 and 92.75% accuracy on the most challenging split of ScanObjectNN without voting strategy.
null
https://arxiv.org/abs/2506.21541v1
https://arxiv.org/pdf/2506.21541v1.pdf
null
[ "Chuxin Wang", "Yixin Zha", "Wenfei Yang", "Tianzhu Zhang" ]
[ "Mamba", "Representation Learning" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/deocc-1-to-3-3d-de-occlusion-from-a-single
2506.21544
null
null
DeOcc-1-to-3: 3D De-Occlusion from a Single Image via Self-Supervised Multi-View Diffusion
Reconstructing 3D objects from a single image is a long-standing challenge, especially under real-world occlusions. While recent diffusion-based view synthesis models can generate consistent novel views from a single RGB image, they generally assume fully visible inputs and fail when parts of the object are occluded. This leads to inconsistent views and degraded 3D reconstruction quality. To overcome this limitation, we propose an end-to-end framework for occlusion-aware multi-view generation. Our method directly synthesizes six structurally consistent novel views from a single partially occluded image, enabling downstream 3D reconstruction without requiring prior inpainting or manual annotations. We construct a self-supervised training pipeline using the Pix2Gestalt dataset, leveraging occluded-unoccluded image pairs and pseudo-ground-truth views to teach the model structure-aware completion and view consistency. Without modifying the original architecture, we fully fine-tune the view synthesis model to jointly learn completion and multi-view generation. Additionally, we introduce the first benchmark for occlusion-aware reconstruction, encompassing diverse occlusion levels, object categories, and mask patterns. This benchmark provides a standardized protocol for evaluating future methods under partial occlusions. Our code is available at https://github.com/Quyans/DeOcc123.
Without modifying the original architecture, we fully fine-tune the view synthesis model to jointly learn completion and multi-view generation.
https://arxiv.org/abs/2506.21544v1
https://arxiv.org/pdf/2506.21544v1.pdf
null
[ "Yansong Qu", "Shaohui Dai", "Xinyang Li", "Yuze Wang", "You Shen", "Liujuan Cao", "Rongrong Ji" ]
[ "3D Reconstruction" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.", "full_name": "Inpainting", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.", "name": "Self-Supervised Learning", "parent": null }, "name": "Inpainting", "source_title": "Context Encoders: Feature Learning by Inpainting", "source_url": "http://arxiv.org/abs/1604.07379v2" } ]
https://paperswithcode.com/paper/hallusegbench-counterfactual-visual-reasoning
2506.21546
null
null
HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation
Recent progress in vision-language segmentation has significantly advanced grounded visual understanding. However, these models often exhibit hallucinations by producing segmentation masks for objects not grounded in the image content or by incorrectly labeling irrelevant regions. Existing evaluation protocols for segmentation hallucination primarily focus on label or textual hallucinations without manipulating the visual context, limiting their capacity to diagnose critical failures. In response, we introduce HalluSegBench, the first benchmark specifically designed to evaluate hallucinations in visual grounding through the lens of counterfactual visual reasoning. Our benchmark consists of a novel dataset of 1340 counterfactual instance pairs spanning 281 unique object classes, and a set of newly introduced metrics that quantify hallucination sensitivity under visually coherent scene edits. Experiments on HalluSegBench with state-of-the-art vision-language segmentation models reveal that vision-driven hallucinations are significantly more prevalent than label-driven ones, with models often persisting in false segmentation, highlighting the need for counterfactual reasoning to diagnose grounding fidelity.
null
https://arxiv.org/abs/2506.21546v1
https://arxiv.org/pdf/2506.21546v1.pdf
null
[ "Xinzhuo Li", "Adheesh Juvekar", "Xingyou Liu", "Muntasir Wahed", "Kiet A. Nguyen", "Ismini Lourentzou" ]
[ "counterfactual", "Counterfactual Reasoning", "Hallucination", "Hallucination Evaluation", "Segmentation", "Vision-Language Segmentation", "Visual Grounding", "Visual Reasoning" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/sim3d-single-instance-multiview-multimodal
2506.21549
null
null
SiM3D: Single-instance Multiview Multimodal and Multisetup 3D Anomaly Detection Benchmark
We propose SiM3D, the first benchmark considering the integration of multiview and multimodal information for comprehensive 3D anomaly detection and segmentation (ADS), where the task is to produce a voxel-based Anomaly Volume. Moreover, SiM3D focuses on a scenario of high interest in manufacturing: single-instance anomaly detection, where only one object, either real or synthetic, is available for training. In this respect, SiM3D stands out as the first ADS benchmark that addresses the challenge of generalising from synthetic training data to real test data. SiM3D includes a novel multimodal multiview dataset acquired using top-tier industrial sensors and robots. The dataset features multiview high-resolution images (12 Mpx) and point clouds (7M points) for 333 instances of eight types of objects, alongside a CAD model for each type. We also provide manually annotated 3D segmentation GTs for anomalous test samples. To establish reference baselines for the proposed multiview 3D ADS task, we adapt prominent singleview methods and assess their performance using novel metrics that operate on Anomaly Volumes.
null
https://arxiv.org/abs/2506.21549v1
https://arxiv.org/pdf/2506.21549v1.pdf
null
[ "Alex Costanzino", "Pierluigi Zama Ramirez", "Luigi Lella", "Matteo Ragaglia", "Alessandro Oliva", "Giuseppe Lisanti", "Luigi Di Stefano" ]
[ "3D Anomaly Detection", "3D Anomaly Detection and Segmentation", "Anomaly Detection" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Goal-Driven Tree-Structured Neural Model", "introduced_year": 2000, "main_collection": { "area": "Sequential", "description": "", "name": "Sequence To Sequence Models", "parent": null }, "name": "GTS", "source_title": "A Goal-Driven Tree-Structured Neural Model for Math Word Problems", "source_url": "https://www.ijcai.org/Proceedings/2019/736" } ]
https://paperswithcode.com/paper/sharpzo-hybrid-sharpness-aware-vision
2506.20990
null
null
SharpZO: Hybrid Sharpness-Aware Vision Language Model Prompt Tuning via Forward-Only Passes
Fine-tuning vision language models (VLMs) has achieved remarkable performance across various downstream tasks; yet, it requires access to model gradients through backpropagation (BP), making them unsuitable for memory-constrained, inference-only edge devices. To address this limitation, previous work has explored various BP-free fine-tuning methods. However, these approaches often rely on high-variance evolutionary strategies (ES) or zeroth-order (ZO) optimization, and often fail to achieve satisfactory performance. In this paper, we propose a hybrid Sharpness-aware Zeroth-order optimization (SharpZO) approach, specifically designed to enhance the performance of ZO VLM fine-tuning via a sharpness-aware warm-up training. SharpZO features a two-stage optimization process: a sharpness-aware ES stage that globally explores and smooths the loss landscape to construct a strong initialization, followed by a fine-grained local search via sparse ZO optimization. The entire optimization relies solely on forward passes. Detailed theoretical analysis and extensive experiments on CLIP models demonstrate that SharpZO significantly improves accuracy and convergence speed, achieving up to 7% average gain over state-of-the-art forward-only methods.
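The forward-only building block, a zeroth-order gradient estimate from paired perturbations, can be sketched as below; the sparsity mask, smoothing radius, and sample count are illustrative assumptions, and the sharpness-aware ES warm-up stage is not shown.

```python
import numpy as np

def zo_gradient(loss_fn, params: np.ndarray, mu: float = 1e-3, n_samples: int = 8,
                sparsity: float = 0.1, rng=np.random.default_rng(0)) -> np.ndarray:
    """Forward-only (zeroth-order) gradient estimate with a sparse perturbation mask."""
    grad = np.zeros_like(params)
    for _ in range(n_samples):
        u = rng.standard_normal(params.shape)
        u *= rng.random(params.shape) < sparsity           # perturb only a sparse subset
        delta = loss_fn(params + mu * u) - loss_fn(params - mu * u)
        grad += (delta / (2 * mu)) * u
    return grad / n_samples

# Toy usage: minimize a quadratic using forward evaluations only.
theta = np.ones(16)
for _ in range(200):
    theta -= 0.1 * zo_gradient(lambda p: float(np.sum(p ** 2)), theta)
```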
Fine-tuning vision language models (VLMs) has achieved remarkable performance across various downstream tasks; yet, it requires access to model gradients through backpropagation (BP), making them unsuitable for memory-constrained, inference-only edge devices.
https://arxiv.org/abs/2506.20990v1
https://arxiv.org/pdf/2506.20990v1.pdf
null
[ "Yifan Yang", "Zhen Zhang", "Rupak Vignesh Swaminathan", "Jing Liu", "Nathan Susanj", "Zheng Zhang" ]
[ "Language Modeling", "Language Modelling" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/OpenAI/CLIP", "description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)", "full_name": "Contrastive Language-Image Pre-training", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Representations", "parent": null }, "name": "CLIP", "source_title": "Learning Transferable Visual Models From Natural Language Supervision", "source_url": "https://arxiv.org/abs/2103.00020v1" } ]
https://paperswithcode.com/paper/rl-selector-reinforcement-learning-guided
2506.21037
null
null
RL-Selector: Reinforcement Learning-Guided Data Selection via Redundancy Assessment
Modern deep architectures often rely on large-scale datasets, but training on these datasets incurs high computational and storage overhead. Real-world datasets often contain substantial redundancies, prompting the need for more data-efficient training paradigms. Data selection has shown promise to mitigate redundancy by identifying the most representative samples, thereby reducing training costs without compromising performance. Existing methods typically rely on static scoring metrics or pretrained models, overlooking the combined effect of selected samples and their evolving dynamics during training. We introduce the concept of epsilon-sample cover, which quantifies sample redundancy based on inter-sample relationships, capturing the intrinsic structure of the dataset. Based on this, we reformulate data selection as a reinforcement learning (RL) process and propose RL-Selector, where a lightweight RL agent optimizes the selection policy by leveraging epsilon-sample cover derived from evolving dataset distribution as a reward signal. Extensive experiments across benchmark datasets and diverse architectures demonstrate that our method consistently outperforms existing state-of-the-art baselines. Models trained with our selected datasets show enhanced generalization performance with improved training efficiency.
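One simplified reading of the epsilon-sample-cover idea can be sketched as a redundancy count over pairwise feature distances; the paper's exact definition and the RL policy that consumes it as a reward are not reproduced here.

```python
import numpy as np

def epsilon_cover_redundancy(features: np.ndarray, eps: float) -> np.ndarray:
    """
    For each sample, count how many other samples fall within eps in feature space.
    High counts indicate redundancy that a selection policy can exploit
    (a simplified reading of the epsilon-sample-cover idea).
    """
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return (d < eps).sum(axis=1)

feats = np.random.default_rng(0).normal(size=(100, 32))
redundancy = epsilon_cover_redundancy(feats, eps=7.5)
keep = np.argsort(redundancy)[:50]   # e.g. keep the 50 least-redundant samples
```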
null
https://arxiv.org/abs/2506.21037v1
https://arxiv.org/pdf/2506.21037v1.pdf
null
[ "Suorong Yang", "Peijia Li", "Furao Shen", "Jian Zhao" ]
[ "Reinforcement Learning (RL)" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/personalized-federated-learning-via-dual
2506.21144
null
null
Personalized Federated Learning via Dual-Prompt Optimization and Cross Fusion
Federated learning (FL) enables collaborative model training across decentralized clients without sharing local data, but is challenged by heterogeneity in data, computation, and communication. Pretrained vision-language models (VLMs), with their strong generalization and lightweight tuning via prompts, offer a promising solution. However, existing federated prompt-learning methods rely only on text prompts and overlook joint label-domain distribution shifts. In this paper, we propose a personalized FL framework based on dual-prompt learning and cross fusion, termed pFedDC. Specifically, each client maintains both global and local prompts across vision and language modalities: global prompts capture common knowledge shared across the federation, while local prompts encode client-specific semantics and domain characteristics. Meanwhile, a cross-fusion module is designed to adaptively integrate prompts from different levels, enabling the model to generate personalized representations aligned with each client's unique data distribution. Extensive experiments across nine datasets with various types of heterogeneity show that pFedDC consistently outperforms state-of-the-art methods.
null
https://arxiv.org/abs/2506.21144v1
https://arxiv.org/pdf/2506.21144v1.pdf
null
[ "Yuguang Zhang", "Kuangpu Guo", "Zhihe Lu", "Yunbo Wang", "Jian Liang" ]
[ "Federated Learning", "Personalized Federated Learning", "Prompt Learning" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/spatial-mental-modeling-from-limited-views
2506.21458
null
null
Spatial Mental Modeling from Limited Views
Can Vision Language Models (VLMs) imagine the full scene from just a few views, like humans do? Humans form spatial mental models, internal representations of unseen space, to reason about layout, perspective, and motion. Our new MindCube benchmark with 21,154 questions across 3,268 images exposes this critical gap, where existing VLMs exhibit near-random performance. Using MindCube, we systematically evaluate how well VLMs build robust spatial mental models through representing positions (cognitive mapping), orientations (perspective-taking), and dynamics (mental simulation for "what-if" movements). We then explore three approaches to help VLMs approximate spatial mental models, including unseen intermediate views, natural language reasoning chains, and cognitive maps. The significant improvement comes from a synergistic approach, "map-then-reason", that jointly trains the model to first generate a cognitive map and then reason upon it. By training models to reason over these internal maps, we boosted accuracy from 37.8% to 60.8% (+23.0%). Adding reinforcement learning pushed performance even further to 70.7% (+32.9%). Our key insight is that such scaffolding of spatial mental models, actively constructing and utilizing internal structured spatial representations with flexible reasoning processes, significantly improves understanding of unobservable space.
Can Vision Language Models (VLMs) imagine the full scene from just a few views, like humans do?
https://arxiv.org/abs/2506.21458v1
https://arxiv.org/pdf/2506.21458v1.pdf
null
[ "Baiqiao Yin", "Qineng Wang", "Pingyue Zhang", "Jianshu Zhang", "Kangrui Wang", "Zihan Wang", "Jieyu Zhang", "Keshigeyan Chandrasegaran", "Han Liu", "Ranjay Krishna", "Saining Xie", "Manling Li", "Jiajun Wu", "Li Fei-Fei" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/anycalib-on-manifold-learning-for-model
2503.12701
null
null
AnyCalib: On-Manifold Learning for Model-Agnostic Single-View Camera Calibration
We present AnyCalib, a method for calibrating the intrinsic parameters of a camera from a single in-the-wild image, that is agnostic to the camera model. Current methods are predominantly tailored to specific camera models and/or require extrinsic cues, such as the direction of gravity, to be visible in the image. In contrast, we argue that the perspective and distortion cues inherent in images are sufficient for model-agnostic camera calibration. To demonstrate this, we frame the calibration process as the regression of the rays corresponding to each pixel. We show, for the first time, that this intermediate representation allows for a closed-form recovery of the intrinsics for a wide range of camera models, including but not limited to: pinhole, Brown-Conrady and Kannala-Brandt. Our approach also applies to edited -- cropped and stretched -- images. Experimentally, we demonstrate that AnyCalib consistently outperforms alternative methods, including 3D foundation models, despite being trained on orders of magnitude less data. Code is available at https://github.com/javrtg/AnyCalib.
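For the pinhole case, the closed-form recovery from per-pixel rays reduces to two small least-squares problems, since u = fx·(x/z) + cx and v = fy·(y/z) + cy are linear in the intrinsics; the sketch below covers only this simplest camera model, not the full set handled in the paper.

```python
import numpy as np

def pinhole_intrinsics_from_rays(pixels: np.ndarray, rays: np.ndarray):
    """
    pixels: (N, 2) pixel coordinates (u, v); rays: (N, 3) ray directions in the camera frame.
    For a pinhole camera, u = fx * (x/z) + cx and v = fy * (y/z) + cy, so fx, cx, fy, cy
    follow from two linear least-squares fits.
    """
    xz = rays[:, 0] / rays[:, 2]
    yz = rays[:, 1] / rays[:, 2]
    Au = np.stack([xz, np.ones_like(xz)], axis=1)
    Av = np.stack([yz, np.ones_like(yz)], axis=1)
    (fx, cx), _, _, _ = np.linalg.lstsq(Au, pixels[:, 0], rcond=None)
    (fy, cy), _, _, _ = np.linalg.lstsq(Av, pixels[:, 1], rcond=None)
    return fx, fy, cx, cy
```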
We present AnyCalib, a method for calibrating the intrinsic parameters of a camera from a single in-the-wild image, that is agnostic to the camera model.
https://arxiv.org/abs/2503.12701v2
https://arxiv.org/pdf/2503.12701v2.pdf
null
[ "Javier Tirado-Garín", "Javier Civera" ]
[ "Camera Calibration" ]
2025-03-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/decouple-to-reconstruct-high-quality-uhd
2503.12764
null
null
Decouple to Reconstruct: High Quality UHD Restoration via Active Feature Disentanglement and Reversible Fusion
Ultra-high-definition (UHD) image restoration often faces computational bottlenecks and information loss due to its extremely high resolution. Existing studies based on Variational Autoencoders (VAE) improve efficiency by transferring the image restoration process from pixel space to latent space. However, degraded components are inherently coupled with background elements in degraded images, both information loss during compression and information gain during compensation remain uncontrollable. These lead to restored images often exhibiting image detail loss and incomplete degradation removal. To address this issue, we propose a Controlled Differential Disentangled VAE, which utilizes Hierarchical Contrastive Disentanglement Learning and an Orthogonal Gated Projection Module to guide the VAE to actively discard easily recoverable background information while encoding more difficult-to-recover degraded information into the latent space. Additionally, we design a Complex Invertible Multiscale Fusion Network to handle background features, ensuring their consistency, and utilize a latent space restoration network to transform the degraded latent features, leading to more accurate restoration results. Extensive experimental results demonstrate that our method effectively alleviates the information loss problem in VAE models while ensuring computational efficiency, significantly improving the quality of UHD image restoration, and achieves state-of-the-art results in six UHD restoration tasks with only 1M parameters.
null
https://arxiv.org/abs/2503.12764v2
https://arxiv.org/pdf/2503.12764v2.pdf
null
[ "Yidi Liu", "Dong Li", "Yuxin Ma", "Jie Huang", "Wenlong Zhang", "Xueyang Fu", "Zheng-Jun Zha" ]
[ "Computational Efficiency", "Disentanglement", "Image Restoration" ]
2025-03-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dwim-towards-tool-aware-visual-reasoning-via
2503.19263
null
null
DWIM: Towards Tool-aware Visual Reasoning via Discrepancy-aware Workflow Generation & Instruct-Masking Tuning
Visual reasoning (VR), which is crucial in many fields for enabling human-like visual understanding, remains highly challenging. Recently, compositional visual reasoning approaches, which leverage the reasoning abilities of large language models (LLMs) with integrated tools to solve problems, have shown promise as more effective strategies than end-to-end VR methods. However, these approaches face limitations, as frozen LLMs lack tool awareness in VR, leading to performance bottlenecks. While leveraging LLMs for reasoning is widely used in other domains, such approaches are not directly applicable to VR due to limited training data, imperfect tools that introduce errors and reduce data collection efficiency in VR, and the difficulty of fine-tuning on noisy workflows. To address these challenges, we propose DWIM: i) Discrepancy-aware training Workflow generation, which assesses tool usage and extracts more viable workflows for training; and ii) Instruct-Masking fine-tuning, which guides the model to only clone effective actions, enabling the generation of more practical solutions. Our experiments demonstrate that DWIM achieves state-of-the-art performance across various VR tasks, exhibiting strong generalization on multiple widely-used datasets.
null
https://arxiv.org/abs/2503.19263v2
https://arxiv.org/pdf/2503.19263v2.pdf
null
[ "Fucai Ke", "Vijay Kumar B G", "Xingjian Leng", "Zhixi Cai", "Zaid Khan", "Weiqing Wang", "Pari Delir Haghighi", "Hamid Rezatofighi", "Manmohan Chandraker" ]
[ "Visual Reasoning" ]
2025-03-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/robustsplat-decoupling-densification-and
2506.02751
null
null
RobustSplat: Decoupling Densification and Dynamics for Transient-Free 3DGS
3D Gaussian Splatting (3DGS) has gained significant attention for its real-time, photo-realistic rendering in novel-view synthesis and 3D modeling. However, existing methods struggle with accurately modeling scenes affected by transient objects, leading to artifacts in the rendered images. We identify that the Gaussian densification process, while enhancing scene detail capture, unintentionally contributes to these artifacts by growing additional Gaussians that model transient disturbances. To address this, we propose RobustSplat, a robust solution based on two critical designs. First, we introduce a delayed Gaussian growth strategy that prioritizes optimizing static scene structure before allowing Gaussian splitting/cloning, mitigating overfitting to transient objects in early optimization. Second, we design a scale-cascaded mask bootstrapping approach that first leverages lower-resolution feature similarity supervision for reliable initial transient mask estimation, taking advantage of its stronger semantic consistency and robustness to noise, and then progresses to high-resolution supervision to achieve more precise mask prediction. Extensive experiments on multiple challenging datasets show that our method outperforms existing methods, clearly demonstrating the robustness and effectiveness of our method. Our project page is https://fcyycf.github.io/RobustSplat/.
null
https://arxiv.org/abs/2506.02751v2
https://arxiv.org/pdf/2506.02751v2.pdf
null
[ "Chuanyu Fu", "Yuqi Zhang", "Kunbin Yao", "GuanYing Chen", "Yuan Xiong", "Chuan Huang", "Shuguang Cui", "Xiaochun Cao" ]
[ "3DGS", "Novel View Synthesis" ]
2025-06-03T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/aligned-novel-view-image-and-geometry
2506.11924
null
null
Aligned Novel View Image and Geometry Synthesis via Cross-modal Attention Instillation
We introduce a diffusion-based framework that performs aligned novel view image and geometry generation via a warping-and-inpainting methodology. Unlike prior methods that require dense posed images or pose-embedded generative models limited to in-domain views, our method leverages off-the-shelf geometry predictors to predict partial geometries viewed from reference images, and formulates novel-view synthesis as an inpainting task for both image and geometry. To ensure accurate alignment between generated images and geometry, we propose cross-modal attention distillation, where attention maps from the image diffusion branch are injected into a parallel geometry diffusion branch during both training and inference. This multi-task approach achieves synergistic effects, facilitating geometrically robust image synthesis as well as well-defined geometry prediction. We further introduce proximity-based mesh conditioning to integrate depth and normal cues, interpolating between point clouds and filtering out erroneously predicted geometry so that it does not influence the generation process. Empirically, our method achieves high-fidelity extrapolative view synthesis on both image and geometry across a range of unseen scenes, delivers competitive reconstruction quality under interpolation settings, and produces geometrically aligned colored point clouds for comprehensive 3D completion. Project page is available at https://cvlab-kaist.github.io/MoAI.
null
https://arxiv.org/abs/2506.11924v2
https://arxiv.org/pdf/2506.11924v2.pdf
null
[ "Min-Seop Kwak", "Junho Kim", "Sangdoo Yun", "Dongyoon Han", "Taekyoung Kim", "Seungryong Kim", "Jin-Hwa Kim" ]
[ "Image Generation", "Novel View Synthesis" ]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.", "full_name": "Inpainting", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.", "name": "Self-Supervised Learning", "parent": null }, "name": "Inpainting", "source_title": "Context Encoders: Feature Learning by Inpainting", "source_url": "http://arxiv.org/abs/1604.07379v2" }, { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/structure-preserving-patch-decoding-for
2506.12896
null
null
Structure-Preserving Patch Decoding for Efficient Neural Video Representation
Implicit neural representations (INRs) are the subject of extensive research, particularly in their application to modeling complex signals by mapping spatial and temporal coordinates to corresponding values. When handling videos, mapping compact inputs to entire frames or spatially partitioned patch images is an effective approach. This strategy better preserves spatial relationships, reduces computational overhead, and improves reconstruction quality compared to coordinate-based mapping. However, predicting entire frames often limits the reconstruction of high-frequency visual details. Additionally, conventional patch-based approaches based on uniform spatial partitioning tend to introduce boundary discontinuities that degrade spatial coherence. We propose a neural video representation method based on Structure-Preserving Patches (SPPs) to address such limitations. Our method separates each video frame into patch images of spatially aligned frames through a deterministic pixel-based splitting similar to PixelUnshuffle. This operation preserves the global spatial structure while allowing patch-level decoding. We train the decoder to reconstruct these structured patches, enabling a global-to-local decoding strategy that captures the global layout first and refines local details. This effectively reduces boundary artifacts and mitigates distortions from naive upsampling. Experiments on standard video datasets demonstrate that our method achieves higher reconstruction quality and better compression performance than existing INR-based baselines.
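The splitting operation can be illustrated directly with torch's PixelUnshuffle; the stride `r = 4` is an arbitrary example value.

```python
import torch

frame = torch.randn(1, 3, 256, 256)           # one RGB video frame
r = 4                                          # assumed patch stride
patches = torch.nn.PixelUnshuffle(r)(frame)   # (1, 3*r*r, 64, 64): r*r aligned sub-frames
# Each output "patch" is a strided subsampling of the full frame, so the global
# spatial layout is preserved rather than cut into local crops.

# The operation is exactly invertible, which is what lets patch-level decoding be
# reassembled at the original resolution without boundary seams:
restored = torch.nn.PixelShuffle(r)(patches)
assert torch.equal(restored, frame)
```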
null
https://arxiv.org/abs/2506.12896v2
https://arxiv.org/pdf/2506.12896v2.pdf
null
[ "Taiga Hayami", "Kakeru Koizumi", "Hiroshi Watanabe" ]
[]
2025-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hyperpath-knowledge-guided-hyperbolic
2506.16398
null
null
HyperPath: Knowledge-Guided Hyperbolic Semantic Hierarchy Modeling for WSI Analysis
Pathology is essential for cancer diagnosis, with multiple instance learning (MIL) widely used for whole slide image (WSI) analysis. WSIs exhibit a natural hierarchy (patches, regions, and slides) with distinct semantic associations. While some methods attempt to leverage this hierarchy for improved representation, they predominantly rely on Euclidean embeddings, which struggle to fully capture semantic hierarchies. To address this limitation, we propose HyperPath, a novel method that integrates knowledge from textual descriptions to guide the modeling of semantic hierarchies of WSIs in hyperbolic space, thereby enhancing WSI classification. Our approach adapts both visual and textual features extracted by pathology vision-language foundation models to the hyperbolic space. We design an Angular Modality Alignment Loss to ensure robust cross-modal alignment, while a Semantic Hierarchy Consistency Loss further refines feature hierarchies through entailment and contradiction relationships, thus enhancing semantic coherence. Classification is performed with geodesic distance, which measures the similarity between entities in the hyperbolic semantic hierarchy. This eliminates the need for linear classifiers and enables a geometry-aware approach to WSI analysis. Extensive experiments show that our method achieves superior performance across tasks compared to existing methods, highlighting the potential of hyperbolic embeddings for WSI analysis.
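Geodesic-distance classification in the Poincaré ball can be sketched with the standard closed-form distance; this is the textbook formula combined with a nearest-prototype rule, not the authors' full training objective.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray, eps: float = 1e-9) -> float:
    """Geodesic distance between two points inside the unit Poincaré ball."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / max(denom, eps)))

def classify(embedding: np.ndarray, class_prototypes: dict) -> str:
    """Assign the class whose prototype is geodesically closest (no linear classifier)."""
    return min(class_prototypes,
               key=lambda c: poincare_distance(embedding, class_prototypes[c]))

protos = {"tumor": np.array([0.3, 0.1]), "stroma": np.array([-0.2, 0.4])}
print(classify(np.array([0.25, 0.05]), protos))
```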
null
https://arxiv.org/abs/2506.16398v2
https://arxiv.org/pdf/2506.16398v2.pdf
null
[ "Peixiang Huang", "Yanyan Huang", "Weiqin Zhao", "Junjun He", "Lequan Yu" ]
[ "cross-modal alignment", "Multiple Instance Learning" ]
2025-06-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mico-multiple-instance-learning-with-context
2506.18028
null
null
MiCo: Multiple Instance Learning with Context-Aware Clustering for Whole Slide Image Analysis
Multiple instance learning (MIL) has shown significant promise in histopathology whole slide image (WSI) analysis for cancer diagnosis and prognosis. However, the inherent spatial heterogeneity of WSIs presents critical challenges, as morphologically similar tissue types are often dispersed across distant anatomical regions. Conventional MIL methods struggle to model these scattered tissue distributions and capture cross-regional spatial interactions effectively. To address these limitations, we propose a novel Multiple instance learning framework with Context-Aware Clustering (MiCo), designed to enhance cross-regional intra-tissue correlations and strengthen inter-tissue semantic associations in WSIs. MiCo begins by clustering instances to distill discriminative morphological patterns, with cluster centroids serving as semantic anchors. To enhance cross-regional intra-tissue correlations, MiCo employs a Cluster Route module, which dynamically links instances of the same tissue type across distant regions via feature similarity. These semantic anchors act as contextual hubs, propagating semantic relationships to refine instance-level representations. To eliminate semantic fragmentation and strengthen inter-tissue semantic associations, MiCo integrates a Cluster Reducer module, which consolidates redundant anchors while enhancing information exchange between distinct semantic groups. Extensive experiments on two challenging tasks across nine large-scale public cancer datasets demonstrate the effectiveness of MiCo, showcasing its superiority over state-of-the-art methods. The code is available at https://github.com/junjianli106/MiCo.
To address these limitations, we propose a novel Multiple instance learning framework with Context-Aware Clustering (MiCo), designed to enhance cross-regional intra-tissue correlations and strengthen inter-tissue semantic associations in WSIs.
https://arxiv.org/abs/2506.18028v2
https://arxiv.org/pdf/2506.18028v2.pdf
null
[ "Junjian Li", "Hulin Kuang", "Jin Liu", "Hailin Yue", "Mengshen He", "Jianxin Wang" ]
[ "Clustering", "Multiple Instance Learning", "Prognosis" ]
2025-06-22T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Given a pattern $P,$ that is more complicated than the patterns, we fragment $P$ into simpler patterns such that their exact count is known. In the subgraph GNN proposed earlier, look into the subgraph of the host graph. We have seen that this technique is scalable on large graphs. Also, we have seen that subgraph GNN is more expressive and efficient than traditional GNN. So, we tried to explore the expressibility when the pattern is fragmented into smaller subpatterns.", "full_name": "Fragmentation", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Localization Models", "parent": null }, "name": "Fragmentation", "source_title": "Improving Expressivity of Graph Neural Networks using Localization", "source_url": "https://arxiv.org/abs/2305.19659v3" } ]
https://paperswithcode.com/paper/referring-expression-instance-retrieval-and-a
2506.18246
null
null
Referring Expression Instance Retrieval and A Strong End-to-End Baseline
Using natural language to query visual information is a fundamental need in real-world applications. Text-Image Retrieval (TIR) retrieves a target image from a gallery based on an image-level description, while Referring Expression Comprehension (REC) localizes a target object within a given image using an instance-level description. However, real-world applications often present more complex demands. Users typically query an instance-level description across a large gallery and expect to receive both the relevant image and the corresponding instance location. In such scenarios, TIR struggles with fine-grained descriptions and object-level localization, while REC is limited in its ability to efficiently search large galleries and lacks an effective ranking mechanism. In this paper, we introduce a new task called \textbf{Referring Expression Instance Retrieval (REIR)}, which supports both instance-level retrieval and localization based on fine-grained referring expressions. First, we propose a large-scale benchmark for REIR, named REIRCOCO, constructed by prompting advanced vision-language models to generate high-quality referring expressions for instances in the MSCOCO and RefCOCO datasets. Second, we present a baseline method, Contrastive Language-Instance Alignment with Relation Experts (CLARE), which employs a dual-stream architecture to address REIR in an end-to-end manner. Given a referring expression, the textual branch encodes it into a query embedding. The visual branch detects candidate objects and extracts their instance-level visual features. The most similar candidate to the query is selected for bounding box prediction. CLARE is first trained on object detection and REC datasets to establish initial grounding capabilities, then optimized via Contrastive Language-Instance Alignment (CLIA) for improved retrieval across images. We will release our code and benchmark publicly.
null
https://arxiv.org/abs/2506.18246v3
https://arxiv.org/pdf/2506.18246v3.pdf
null
[ "Xiangzhao Hao", "Kuan Zhu", "Hongyu Guo", "Haiyun Guo", "Ning Jiang", "Quan Lu", "Ming Tang", "Jinqiao Wang" ]
[ "Image Retrieval", "Referring Expression", "Referring Expression Comprehension", "Retrieval" ]
2025-06-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/2d-triangle-splatting-for-direct
2506.18575
null
null
2D Triangle Splatting for Direct Differentiable Mesh Training
Differentiable rendering with 3D Gaussian primitives has emerged as a powerful method for reconstructing high-fidelity 3D scenes from multi-view images. While it offers improvements over NeRF-based methods, this representation still encounters challenges with rendering speed and advanced rendering effects, such as relighting and shadow rendering, compared to mesh-based models. In this paper, we propose 2D Triangle Splatting (2DTS), a novel method that replaces 3D Gaussian primitives with 2D triangle facelets. This representation naturally forms a discrete mesh-like structure while retaining the benefits of continuous volumetric modeling. By incorporating a compactness parameter into the triangle primitives, we enable direct training of photorealistic meshes. Our experimental results demonstrate that our triangle-based method, in its vanilla version (without compactness tuning), achieves higher fidelity compared to state-of-the-art Gaussian-based methods. Furthermore, our approach produces reconstructed meshes with superior visual quality compared to existing mesh reconstruction methods. Please visit our project page at https://gaoderender.github.io/triangle-splatting.
null
https://arxiv.org/abs/2506.18575v2
https://arxiv.org/pdf/2506.18575v2.pdf
null
[ "Kaifeng Sheng", "Zheng Zhou", "Yingliang Peng", "Qianwei Wang" ]
[ "NeRF" ]
2025-06-23T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/generate-the-forest-before-the-trees-a
2506.19391
null
null
Generate the Forest before the Trees -- A Hierarchical Diffusion model for Climate Downscaling
Downscaling is essential for generating the high-resolution climate data needed for local planning, but traditional methods remain computationally demanding. Recent years have seen impressive results from AI downscaling models, particularly diffusion models, which have attracted attention due to their ability to generate ensembles and overcome the smoothing problem common in other AI methods. However, these models typically remain computationally intensive. We introduce a Hierarchical Diffusion Downscaling (HDD) model, which adds an easily-extensible hierarchical sampling process to the diffusion framework. A coarse-to-fine hierarchy is imposed via a simple downsampling scheme. HDD achieves competitive accuracy on ERA5 reanalysis datasets and CMIP6 models, significantly reducing computational load by running on up to half as many pixels while maintaining competitive results. Additionally, a single model trained at 0.25{\deg} resolution transfers seamlessly across multiple CMIP6 models with much coarser resolution. HDD thus offers a lightweight alternative for probabilistic climate downscaling, facilitating affordable large-ensemble high-resolution climate projections. See a full code implementation at: https://github.com/HDD-Hierarchical-Diffusion-Downscaling/HDD-Hierarchical-Diffusion-Downscaling.
Downscaling is essential for generating the high-resolution climate data needed for local planning, but traditional methods remain computationally demanding.
https://arxiv.org/abs/2506.19391v2
https://arxiv.org/pdf/2506.19391v2.pdf
null
[ "Declan J. Curran", "Sanaa Hobeichi", "Hira Saleem", "Hao Xue", "Flora D. Salim" ]
[]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/discovering-global-false-negatives-on-the-fly
2502.20612
null
null
Discovering Global False Negatives On the Fly for Self-supervised Contrastive Learning
In self-supervised contrastive learning, negative pairs are typically constructed using an anchor image and a sample drawn from the entire dataset, excluding the anchor. However, this approach can result in the creation of negative pairs with similar semantics, referred to as "false negatives", leading to their embeddings being falsely pushed apart. To address this issue, we introduce GloFND, an optimization-based approach that automatically learns on the fly the threshold for each anchor data to identify its false negatives during training. In contrast to previous methods for false negative discovery, our approach globally detects false negatives across the entire dataset rather than locally within the mini-batch. Moreover, its per-iteration computation cost remains independent of the dataset size. Experimental results on image and image-text data demonstrate the effectiveness of the proposed method. Our implementation is available at https://github.com/vibalcam/GloFND.
In self-supervised contrastive learning, negative pairs are typically constructed using an anchor image and a sample drawn from the entire dataset, excluding the anchor.
https://arxiv.org/abs/2502.20612v2
https://arxiv.org/pdf/2502.20612v2.pdf
null
[ "Vicente Balmaseda", "Bokun Wang", "Ching-Long Lin", "Tianbao Yang" ]
[ "Contrastive Learning" ]
2025-02-28T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/variational-supervised-contrastive-learning
2506.07413
null
null
Variational Supervised Contrastive Learning
Contrastive learning has proven to be highly efficient and adaptable in shaping representation spaces across diverse modalities by pulling similar samples together and pushing dissimilar ones apart. However, two key limitations persist: (1) Without explicit regulation of the embedding distribution, semantically related instances can inadvertently be pushed apart unless complementary signals guide pair selection, and (2) excessive reliance on large in-batch negatives and tailored augmentations hinders generalization. To address these limitations, we propose Variational Supervised Contrastive Learning (VarCon), which reformulates supervised contrastive learning as variational inference over latent class variables and maximizes a posterior-weighted evidence lower bound (ELBO) that replaces exhaustive pair-wise comparisons for efficient class-aware matching and grants fine-grained control over intra-class dispersion in the embedding space. Our experiments on CIFAR-10, CIFAR-100, ImageNet-100, and ImageNet-1K, with models trained exclusively on image data, show that VarCon (1) achieves state-of-the-art performance for contrastive learning frameworks, reaching 79.36% Top-1 accuracy on ImageNet-1K and 78.29% on CIFAR-100 with a ResNet-50 encoder while converging in just 200 epochs; (2) yields substantially clearer decision boundaries and semantic organization in the embedding space, as evidenced by KNN classification, hierarchical clustering results, and transfer-learning assessments; and (3) demonstrates superior few-shot learning performance compared to the supervised baseline and superior robustness across various augmentation strategies.
null
https://arxiv.org/abs/2506.07413v2
https://arxiv.org/pdf/2506.07413v2.pdf
null
[ "Ziwen Wang", "Jiajun Fan", "Thao Nguyen", "Heng Ji", "Ge Liu" ]
[ "Contrastive Learning", "Few-Shot Learning", "Transfer Learning", "Variational Inference" ]
2025-06-09T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": null, "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Contrastive Learning", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Variational Inference", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.", "name": "Dimensionality Reduction", "parent": null }, "name": "Variational Inference", "source_title": "Autoencoding Variational Autoencoder", "source_url": "https://arxiv.org/abs/2012.03715v1" } ]
https://paperswithcode.com/paper/metis-rise-rl-incentivizes-and-sft-enhances
2506.13056
null
null
Metis-RISE: RL Incentivizes and SFT Enhances Multimodal Reasoning Model Learning
Recent advancements in large language models (LLMs) have witnessed a surge in the development of advanced reasoning paradigms, which are now being integrated into multimodal large language models (MLLMs). However, existing approaches often fall short: methods solely employing reinforcement learning (RL) can struggle with sample inefficiency and activating entirely absent reasoning capabilities, while conventional pipelines that initiate with a cold-start supervised fine-tuning (SFT) phase before RL may restrict the model's exploratory capacity and face suboptimal convergence. In this work, we introduce \textbf{Metis-RISE} (\textbf{R}L \textbf{I}ncentivizes and \textbf{S}FT \textbf{E}nhances) for multimodal reasoning model learning. Unlike conventional approaches, Metis-RISE distinctively omits an initial SFT stage, beginning instead with an RL phase (e.g., using a Group Relative Policy Optimization variant) to incentivize and activate the model's latent reasoning capacity. Subsequently, the targeted SFT stage addresses two key challenges identified during RL: (1) \textit{inefficient trajectory sampling} for tasks where the model possesses but inconsistently applies correct reasoning, which we tackle using self-distilled reasoning trajectories from the RL model itself; and (2) \textit{fundamental capability absence}, which we address by injecting expert-augmented knowledge for prompts where the model entirely fails. This strategic application of RL for incentivization followed by SFT for enhancement forms the core of Metis-RISE, leading to two versions of our MLLMs (7B and 72B parameters). Evaluations on the OpenCompass Multimodal Reasoning Leaderboard demonstrate that both models achieve state-of-the-art performance among similar-sized models, with the 72B version ranking fourth overall. Please refer to our project page for open-source information.
Recent advancements in large language models (LLMs) have witnessed a surge in the development of advanced reasoning paradigms, which are now being integrated into multimodal large language models (MLLMs).
https://arxiv.org/abs/2506.13056v2
https://arxiv.org/pdf/2506.13056v2.pdf
null
[ "Haibo Qiu", "Xiaohan Lan", "Fanfan Liu", "Xiaohu Sun", "Delian Ruan", "Peng Shi", "Lin Ma" ]
[ "Multimodal Reasoning", "Reinforcement Learning (RL)" ]
2025-06-16T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**Shrink and Fine-Tune**, or **SFT**, is a type of distillation that avoids explicit distillation by copying parameters to a student student model and then fine-tuning. Specifically it extracts a student model from the maximally spaced layers of a fine-tuned teacher. Each layer $l \\in L'$ is copied fully from $L$. For example, when creating a [BART](https://paperswithcode.com/method/bart) student with 3 decoder layers from the 12 encoder layer 12 decoder layer teacher, we copy the teacher’s full $Enc^{L}$ and decoder layers 0, 6, and 11 to the student. When deciding which layers to copy, we break ties arbitrarily; copying layers 0, 5, and 11 might work just as well. When copy only 1 decoder layer, we copy layer 0. This was found this to work better than copying layer 11. The impact of initialization on performance is measured experimentally in Section 6.1. After initialization, the student model continues to fine-tune on the summarization dataset, with the objective of minimizing $\\mathcal{L}\\_{Data}$.", "full_name": "Shrink and Fine-Tune", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Knowledge Distillation", "parent": null }, "name": "SFT", "source_title": "Pre-trained Summarization Distillation", "source_url": "https://arxiv.org/abs/2010.13002v2" } ]
https://paperswithcode.com/paper/magpie-a-dataset-for-multi-agent-contextual
2506.20737
null
null
MAGPIE: A dataset for Multi-AGent contextual PrIvacy Evaluation
The proliferation of LLM-based agents has led to increasing deployment of inter-agent collaboration for tasks like scheduling, negotiation, resource allocation, etc. In such systems, privacy is critical, as agents often access proprietary tools and domain-specific databases requiring strict confidentiality. This paper examines whether LLM-based agents demonstrate an understanding of contextual privacy and, if instructed, whether these systems preserve inference-time user privacy in non-adversarial multi-turn conversations. Existing benchmarks for evaluating contextual privacy in LLM agents primarily assess single-turn, low-complexity tasks where private information can be easily excluded. We first present a benchmark, MAGPIE, comprising 158 real-life high-stakes scenarios across 15 domains. These scenarios are designed such that complete exclusion of private data impedes task completion, yet unrestricted information sharing could lead to substantial losses. We then evaluate current state-of-the-art LLMs on (a) their understanding of contextually private data and (b) their ability to collaborate without violating user privacy. Empirical experiments demonstrate that current models, including GPT-4o and Claude-2.7-Sonnet, lack robust understanding of contextual privacy, misclassifying private data as shareable 25.2\% and 43.6\% of the time. In multi-turn conversations, these models disclose private information in 59.9\% and 50.5\% of cases even under explicit privacy instructions. Furthermore, multi-agent systems fail to complete tasks in 71\% of scenarios. These results underscore that current models are not aligned towards both contextual privacy preservation and collaborative task-solving.
null
https://arxiv.org/abs/2506.20737v1
https://arxiv.org/pdf/2506.20737v1.pdf
null
[ "Gurusha Juneja", "Alon Albalak", "Wenyue Hua", "William Yang Wang" ]
[ "Scheduling" ]
2025-06-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dynamic-context-aware-prompt-recommendation
2506.20815
null
null
Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications
LLM-powered applications are highly susceptible to the quality of user prompts, and crafting high-quality prompts can often be challenging especially for domain-specific applications. This paper presents a novel dynamic context-aware prompt recommendation system for domain-specific AI applications. Our solution combines contextual query analysis, retrieval-augmented knowledge grounding, hierarchical skill organization, and adaptive skill ranking to generate relevant and actionable prompt suggestions. The system leverages behavioral telemetry and a two-stage hierarchical reasoning process to dynamically select and rank relevant skills, and synthesizes prompts using both predefined and adaptive templates enhanced with few-shot learning. Experiments on real-world datasets demonstrate that our approach achieves high usefulness and relevance, as validated by both automated and expert evaluations.
null
https://arxiv.org/abs/2506.20815v1
https://arxiv.org/pdf/2506.20815v1.pdf
null
[ "Xinye Tang", "Haijun Zhai", "Chaitanya Belwal", "Vineeth Thayanithi", "Philip Baumann", "Yogesh K Roy" ]
[ "Few-Shot Learning" ]
2025-06-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/beyond-reactive-safety-risk-aware-llm
2506.20949
null
null
Beyond Reactive Safety: Risk-Aware LLM Alignment via Long-Horizon Simulation
Given the growing influence of language model-based agents on high-stakes societal decisions, from public policy to healthcare, ensuring their beneficial impact requires understanding the far-reaching implications of their suggestions. We propose a proof-of-concept framework that projects how model-generated advice could propagate through societal systems on a macroscopic scale over time, enabling more robust alignment. To assess the long-term safety awareness of language models, we also introduce a dataset of 100 indirect harm scenarios, testing models' ability to foresee adverse, non-obvious outcomes from seemingly harmless user prompts. Our approach achieves not only over 20% improvement on the new dataset but also an average win rate exceeding 70% against strong baselines on existing safety benchmarks (AdvBench, SafeRLHF, WildGuardMix), suggesting a promising direction for safer agents.
Given the growing influence of language model-based agents on high-stakes societal decisions, from public policy to healthcare, ensuring their beneficial impact requires understanding the far-reaching implications of their suggestions.
https://arxiv.org/abs/2506.20949v1
https://arxiv.org/pdf/2506.20949v1.pdf
null
[ "Chenkai Sun", "Denghui Zhang", "ChengXiang Zhai", "Heng Ji" ]
[ "Language Modeling", "Language Modelling" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/unveiling-causal-reasoning-in-large-language
2506.21215
null
null
Unveiling Causal Reasoning in Large Language Models: Reality or Mirage?
Causal reasoning capability is critical in advancing large language models (LLMs) toward strong artificial intelligence. While versatile LLMs appear to have demonstrated capabilities in understanding contextual causality and providing responses that obey the laws of causality, it remains unclear whether they perform genuine causal reasoning akin to humans. However, current evidence indicates the contrary. Specifically, LLMs are only capable of performing shallow (level-1) causal reasoning, primarily attributed to the causal knowledge embedded in their parameters, but they lack the capacity for genuine human-like (level-2) causal reasoning. To support this hypothesis, methodologically, we delve into the autoregression mechanism of transformer-based LLMs, revealing that it is not inherently causal. Empirically, we introduce a new causal Q&A benchmark called CausalProbe-2024, whose corpora are fresh and nearly unseen for the studied LLMs. The LLMs exhibit a significant performance drop on CausalProbe-2024 compared to earlier benchmarks, indicating the fact that they primarily engage in level-1 causal reasoning. To bridge the gap towards level-2 causal reasoning, we draw inspiration from the fact that human reasoning is usually facilitated by general knowledge and intended goals. We propose G^2-Reasoner, a method that incorporates general knowledge and goal-oriented prompts into LLMs' causal reasoning processes. Experiments demonstrate that G^2-Reasoner significantly enhances LLMs' causal reasoning capability, particularly in fresh and counterfactual contexts. This work sheds light on a new path for LLMs to advance towards genuine causal reasoning, going beyond level-1 and making strides towards level-2.
Causal reasoning capability is critical in advancing large language models (LLMs) toward strong artificial intelligence.
https://arxiv.org/abs/2506.21215v1
https://arxiv.org/pdf/2506.21215v1.pdf
null
[ "Haoang Chi", "He Li", "Wenjing Yang", "Feng Liu", "Long Lan", "Xiaoguang Ren", "Tongliang Liu", "Bo Han" ]
[ "counterfactual", "General Knowledge" ]
2025-06-26T00:00:00
null
null
null
null
[]