The table below lists the dataset's columns as reported by the viewer (minimum–maximum lengths for string and list columns; ⌀ marks nullable columns):

| Column | Type | Min–Max | Nullable (⌀) |
|---|---|---|---|
| paper_url | string | 35–81 chars | no |
| arxiv_id | string | 6–35 chars | yes |
| nips_id | float64 | — | — |
| openreview_id | string | 9–93 chars | yes |
| title | string | 1–1.02k chars | yes |
| abstract | string | 0–56.5k chars | yes |
| short_abstract | string | 0–1.95k chars | yes |
| url_abs | string | 16–996 chars | no |
| url_pdf | string | 16–996 chars | yes |
| proceeding | string | 7–1.03k chars | yes |
| authors | list | 0–3.31k items | no |
| tasks | list | 0–147 items | no |
| date | timestamp[ns] | 1951-09-01 to 2222-12-22 | yes |
| conference_url_abs | string | 16–199 chars | yes |
| conference_url_pdf | string | 21–200 chars | yes |
| conference | string | 2–47 chars | yes |
| reproduces_paper | string | 22 classes | — |
| methods | list | 0–7.5k items | no |
**FIFA: Unified Faithfulness Evaluation Framework for Text-to-Video and Video-to-Text Generation**
- paper_url: https://paperswithcode.com/paper/fifa-unified-faithfulness-evaluation
- arxiv_id: 2507.06523
- abstract: Video Multimodal Large Language Models (VideoMLLMs) have achieved remarkable progress in both Video-to-Text and Text-to-Video tasks. However, they often suffer from hallucinations, generating content that contradicts the visual input. Existing evaluation methods are limited to one task (e.g., V2T) and also fail to assess hallucinations in open-ended, free-form responses. To address this gap, we propose FIFA, a unified FaIthFulness evAluation framework that extracts comprehensive descriptive facts, models their semantic dependencies via a Spatio-Temporal Semantic Dependency Graph, and verifies them using VideoQA models. We further introduce Post-Correction, a tool-based correction framework that revises hallucinated content. Extensive experiments demonstrate that FIFA aligns more closely with human judgment than existing evaluation methods, and that Post-Correction effectively improves factual consistency in both text and video generation.
- url_abs: https://arxiv.org/abs/2507.06523v1
- url_pdf: https://arxiv.org/pdf/2507.06523v1.pdf
- authors: Liqiang Jing, Viet Lai, Seunghyun Yoon, Trung Bui, Xinya Du
- tasks: Descriptive, Text Generation, Video Generation
- date: 2025-07-09
**Go to Zero: Towards Zero-shot Motion Generation with Million-scale Data**
- paper_url: https://paperswithcode.com/paper/go-to-zero-towards-zero-shot-motion
- arxiv_id: 2507.07095
- abstract: Generating diverse and natural human motion sequences based on textual descriptions constitutes a fundamental and challenging research area within the domains of computer vision, graphics, and robotics. Despite significant advancements in this field, current methodologies often face challenges regarding zero-shot generalization capabilities, largely attributable to the limited size of training datasets. Moreover, the lack of a comprehensive evaluation framework impedes the advancement of this task by failing to identify directions for improvement. In this work, we aim to push text-to-motion into a new era, that is, to achieve zero-shot generalization. To this end, firstly, we develop an efficient annotation pipeline and introduce MotionMillion, the largest human motion dataset to date, featuring over 2,000 hours and 2 million high-quality motion sequences. Additionally, we propose MotionMillion-Eval, the most comprehensive benchmark for evaluating zero-shot motion generation. Leveraging a scalable architecture, we scale our model to 7B parameters and validate its performance on MotionMillion-Eval. Our results demonstrate strong generalization to out-of-domain and complex compositional motions, marking a significant step toward zero-shot human motion generation. The code is available at https://github.com/VankouF/MotionMillion-Codes.
- url_abs: https://arxiv.org/abs/2507.07095v1
- url_pdf: https://arxiv.org/pdf/2507.07095v1.pdf
- authors: Ke Fan, Shunlin Lu, Minyue Dai, Runyi Yu, Lixing Xiao, Zhiyang Dou, Junting Dong, Lizhuang Ma, Jingbo Wang
- tasks: Motion Generation, Zero-shot Generalization
- date: 2025-07-09
**(untitled entry)**
- paper_url: https://paperswithcode.com/paper/2406-0323
- arxiv_id: 2406.0323
- url_abs: https://arxiv.org/abs/2406.0323
- all remaining fields: null / empty
**PRIME: Large Language Model Personalization with Cognitive Memory and Thought Processes**
- paper_url: https://paperswithcode.com/paper/prime-large-language-model-personalization
- arxiv_id: 2507.04607
- abstract: Large language model (LLM) personalization aims to align model outputs with individuals' unique preferences and opinions. While recent efforts have implemented various personalization methods, a unified theoretical framework that can systematically understand the drivers of effective personalization is still lacking. In this work, we integrate the well-established cognitive dual-memory model into LLM personalization, by mirroring episodic memory to historical user engagements and semantic memory to long-term, evolving user beliefs. Specifically, we systematically investigate memory instantiations and introduce a unified framework, PRIME, using episodic and semantic memory mechanisms. We further augment PRIME with a novel personalized thinking capability inspired by the slow thinking strategy. Moreover, recognizing the absence of suitable benchmarks, we introduce a dataset using Change My View (CMV) from Reddit, specifically designed to evaluate long-context personalization. Extensive experiments validate PRIME's effectiveness across both long- and short-context scenarios. Further analysis confirms that PRIME effectively captures dynamic personalization beyond mere popularity biases.
- url_abs: https://arxiv.org/abs/2507.04607v2
- url_pdf: https://arxiv.org/pdf/2507.04607v2.pdf
- authors: Xinliang Frederick Zhang, Nick Beauchamp, Lu Wang
- tasks: Language Modeling, Language Modelling, Large Language Model
- date: 2025-07-07
**LearnLens: LLM-Enabled Personalised, Curriculum-Grounded Feedback with Educators in the Loop**
- paper_url: https://paperswithcode.com/paper/learnlens-llm-enabled-personalised-curriculum
- arxiv_id: 2507.04295
- abstract: Effective feedback is essential for student learning but is time-intensive for teachers. We present LearnLens, a modular, LLM-based system that generates personalised, curriculum-aligned feedback in science education. LearnLens comprises three components: (1) an error-aware assessment module that captures nuanced reasoning errors; (2) a curriculum-grounded generation module that uses a structured, topic-linked memory chain rather than traditional similarity-based retrieval, improving relevance and reducing noise; and (3) an educator-in-the-loop interface for customisation and oversight. LearnLens addresses key challenges in existing systems, offering scalable, high-quality feedback that empowers both teachers and students.
- url_abs: https://arxiv.org/abs/2507.04295v3
- url_pdf: https://arxiv.org/pdf/2507.04295v3.pdf
- authors: Runcong Zhao, Artem Bobrov, Jiazheng Li, Yulan He
- tasks: Retrieval
- date: 2025-07-06
**NRSeg: Noise-Resilient Learning for BEV Semantic Segmentation via Driving World Models**
- paper_url: https://paperswithcode.com/paper/nrseg-noise-resilient-learning-for-bev
- arxiv_id: 2507.04002
- abstract: Bird's-Eye View (BEV) semantic segmentation is an indispensable perception task in end-to-end autonomous driving systems. Unsupervised and semi-supervised learning for BEV tasks, though pivotal for real-world applications, underperforms due to the homogeneous distribution of the labeled data. In this work, we explore the potential of synthetic data from driving world models to enhance the diversity of labeled data for robustifying BEV segmentation. Yet, our preliminary findings reveal that generation noise in synthetic data compromises efficient BEV model learning. To fully harness the potential of synthetic data from world models, this paper proposes NRSeg, a noise-resilient learning framework for BEV semantic segmentation. Specifically, a Perspective-Geometry Consistency Metric (PGCM) is proposed to quantitatively evaluate the guidance capability of generated data for model learning. This metric originates from the alignment measure between the perspective road mask of generated data and the mask projected from the BEV labels. Moreover, a Bi-Distribution Parallel Prediction (BiDPP) is designed to enhance the inherent robustness of the model, where the learning process is constrained through parallel prediction of multinomial and Dirichlet distributions. The former efficiently predicts semantic probabilities, whereas the latter adopts evidential deep learning to realize uncertainty quantification. Furthermore, a Hierarchical Local Semantic Exclusion (HLSE) module is designed to address the non-mutual exclusivity inherent in BEV semantic segmentation tasks. Experimental results demonstrate that NRSeg achieves state-of-the-art performance, yielding the highest improvements in mIoU of 13.8% and 11.4% in unsupervised and semi-supervised BEV segmentation tasks, respectively. The source code will be made publicly available at https://github.com/lynn-yu/NRSeg.
- url_abs: https://arxiv.org/abs/2507.04002v1
- url_pdf: https://arxiv.org/pdf/2507.04002v1.pdf
- authors: Siyu Li, Fei Teng, Yihong Cao, Kailun Yang, Zhiyong Li, Yaonan Wang
- tasks: Autonomous Driving, BEV Segmentation, Segmentation, Semantic Segmentation, Uncertainty Quantification
- date: 2025-07-05
**What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models**
- paper_url: https://paperswithcode.com/paper/what-has-a-foundation-model-found-using
- arxiv_id: 2507.06952
- abstract: Foundation models are premised on the idea that sequence prediction can uncover deeper domain understanding, much like how Kepler's predictions of planetary motion later led to the discovery of Newtonian mechanics. However, evaluating whether these models truly capture deeper structure remains a challenge. We develop a technique for evaluating foundation models that examines how they adapt to synthetic datasets generated from some postulated world model. Our technique measures whether the foundation model's inductive bias aligns with the world model, and so we refer to it as an inductive bias probe. Across multiple domains, we find that foundation models can excel at their training tasks yet fail to develop inductive biases towards the underlying world model when adapted to new tasks. We particularly find that foundation models trained on orbital trajectories consistently fail to apply Newtonian mechanics when adapted to new physics tasks. Further analysis reveals that these models behave as if they develop task-specific heuristics that fail to generalize.
- url_abs: https://arxiv.org/abs/2507.06952v2
- url_pdf: https://arxiv.org/pdf/2507.06952v2.pdf
- authors: Keyon Vafa, Peter G. Chang, Ashesh Rambachan, Sendhil Mullainathan
- tasks: Inductive Bias
- date: 2025-07-09
**From Curiosity to Competence: How World Models Interact with the Dynamics of Exploration**
- paper_url: https://paperswithcode.com/paper/from-curiosity-to-competence-how-world-models
- arxiv_id: 2507.08210
- abstract: What drives an agent to explore the world while also maintaining control over the environment? From a child at play to scientists in the lab, intelligent agents must balance curiosity (the drive to seek knowledge) with competence (the drive to master and control the environment). Bridging cognitive theories of intrinsic motivation with reinforcement learning, we ask how evolving internal representations mediate the trade-off between curiosity (novelty or information gain) and competence (empowerment). We compare two model-based agents using handcrafted state abstractions (Tabular) or learning an internal world model (Dreamer). The Tabular agent shows curiosity and competence guide exploration in distinct patterns, while prioritizing both improves exploration. The Dreamer agent reveals a two-way interaction between exploration and representation learning, mirroring the developmental co-evolution of curiosity and competence. Our findings formalize adaptive exploration as a balance between pursuing the unknown and the controllable, offering insights for cognitive theories and efficient reinforcement learning.
- url_abs: https://arxiv.org/abs/2507.08210v1
- url_pdf: https://arxiv.org/pdf/2507.08210v1.pdf
- authors: Fryderyk Mantiuk, Hanqi Zhou, Charley M. Wu
- tasks: reinforcement-learning, Reinforcement Learning, Representation Learning
- date: 2025-07-10
**Martian World Models: Controllable Video Synthesis with Physically Accurate 3D Reconstructions**
- paper_url: https://paperswithcode.com/paper/martian-world-models-controllable-video
- arxiv_id: 2507.07978
- abstract: Synthesizing realistic Martian landscape videos is crucial for mission rehearsal and robotic simulation. However, this task poses unique challenges due to the scarcity of high-quality Martian data and the significant domain gap between Martian and terrestrial imagery. To address these challenges, we propose a holistic solution composed of two key components: 1) A data curation pipeline, Multimodal Mars Synthesis (M3arsSynth), which reconstructs 3D Martian environments from real stereo navigation images, sourced from NASA's Planetary Data System (PDS), and renders high-fidelity multiview 3D video sequences. 2) A Martian terrain video generator, MarsGen, which synthesizes novel videos that are visually realistic and geometrically consistent with the 3D structure encoded in the data. Our M3arsSynth engine spans a wide range of Martian terrains and acquisition dates, enabling the generation of physically accurate 3D surface models at metric-scale resolution. MarsGen, fine-tuned on M3arsSynth data, synthesizes videos conditioned on an initial image frame and, optionally, camera trajectories or textual prompts, allowing for video generation in novel environments. Experimental results show that our approach outperforms video synthesis models trained on terrestrial datasets, achieving superior visual fidelity and 3D structural consistency.
- url_abs: https://arxiv.org/abs/2507.07978v1
- url_pdf: https://arxiv.org/pdf/2507.07978v1.pdf
- authors: Longfei Li, Zhiwen Fan, Wenyan Cong, Xinhang Liu, Yuyang Yin, Matt Foutter, Panwang Pan, Chenyu You, Yue Wang, Zhangyang Wang, Yao Zhao, Marco Pavone, Yunchao Wei
- tasks: Video Generation
- date: 2025-07-10
**Dyn-O: Building Structured World Models with Object-Centric Representations**
- paper_url: https://paperswithcode.com/paper/dyn-o-building-structured-world-models-with
- arxiv_id: 2507.03298
- abstract: World models aim to capture the dynamics of the environment, enabling agents to predict and plan for future states. In most scenarios of interest, the dynamics are highly centered on interactions among objects within the environment. This motivates the development of world models that operate on object-centric rather than monolithic representations, with the goal of more effectively capturing environment dynamics and enhancing compositional generalization. However, the development of object-centric world models has largely been explored in environments with limited visual complexity (such as basic geometries). It remains underexplored whether such models can generalize to more complex settings with diverse textures and cluttered scenes. In this paper, we fill this gap by introducing Dyn-O, an enhanced structured world model built upon object-centric representations. Compared to prior work in object-centric representations, Dyn-O improves in both learning representations and modeling dynamics. On the challenging Procgen games, we find that our method can learn object-centric world models directly from pixel observations, outperforming DreamerV3 in rollout prediction accuracy. Furthermore, by decoupling object-centric features into dynamics-agnostic and dynamics-aware components, we enable finer-grained manipulation of these features and generate more diverse imagined trajectories.
- url_abs: https://arxiv.org/abs/2507.03298v1
- url_pdf: https://arxiv.org/pdf/2507.03298v1.pdf
- authors: Zizhao Wang, Kaixin Wang, Li Zhao, Peter Stone, Jiang Bian
- tasks: Object
- date: 2025-07-04
**Critiques of World Models**
- paper_url: https://paperswithcode.com/paper/critiques-of-world-models
- arxiv_id: 2507.05169
- abstract: World Model, the supposed algorithmic surrogate of the real-world environment which biological agents experience and act upon, has been an emerging topic in recent years because of the rising needs to develop virtual agents with artificial (general) intelligence. There has been much debate on what a world model really is, how to build it, how to use it, and how to evaluate it. In this essay, starting from the imagination in the famed Sci-Fi classic Dune, and drawing inspiration from the concept of "hypothetical thinking" in the psychology literature, we offer critiques of several schools of thought on world modeling, and argue that the primary goal of a world model is to simulate all actionable possibilities of the real world for purposeful reasoning and acting. Building on the critiques, we propose a new architecture for a general-purpose world model, based on hierarchical, multi-level, and mixed continuous/discrete representations, and a generative and self-supervised learning framework, with an outlook of a Physical, Agentic, and Nested (PAN) AGI system enabled by such a model.
- url_abs: https://arxiv.org/abs/2507.05169v2
- url_pdf: https://arxiv.org/pdf/2507.05169v2.pdf
- authors: Eric Xing, Mingkai Deng, Jinyu Hou, Zhiting Hu
- tasks: (none)
- date: 2025-07-07
**Foundation models for time series forecasting: Application in conformal prediction**
- paper_url: https://paperswithcode.com/paper/foundation-models-for-time-series-forecasting
- arxiv_id: 2507.08858
- abstract: The zero-shot capabilities of foundation models (FMs) for time series forecasting offer promising potential in conformal prediction, as most of the available data can be allocated to calibration. This study compares the performance of Time Series Foundation Models (TSFMs) with traditional methods, including statistical models and gradient boosting, within a conformal prediction setting. Our findings highlight two key advantages of TSFMs. First, when the volume of data is limited, TSFMs provide more reliable conformalized prediction intervals than classic models, thanks to their superior predictive accuracy. Second, the calibration process is more stable because more data are used for calibration. Moreover, the fewer data available, the more pronounced these benefits become, as classic models require a substantial amount of data for effective training. These results underscore the potential of foundation models in improving conformal prediction reliability in time series applications, particularly in data-constrained cases. All the code to reproduce the experiments is available.
- url_abs: https://arxiv.org/abs/2507.08858v1
- url_pdf: https://arxiv.org/pdf/2507.08858v1.pdf
- authors: Sami Achour, Yassine Bouher, Duong Nguyen, Nicolas Chesneau
- tasks: Conformal Prediction, Prediction, Prediction Intervals, Time Series, Time Series Forecasting
- date: 2025-07-09
**GDGB: A Benchmark for Generative Dynamic Text-Attributed Graph Learning**
- paper_url: https://paperswithcode.com/paper/gdgb-a-benchmark-for-generative-dynamic-text
- arxiv_id: 2507.03267
- abstract: Dynamic Text-Attributed Graphs (DyTAGs), which intricately integrate structural, temporal, and textual attributes, are crucial for modeling complex real-world systems. However, most of the existing DyTAG datasets exhibit poor textual quality, which severely limits their utility for DyTAG generation tasks requiring semantically rich inputs. Additionally, prior work mainly focuses on discriminative tasks on DyTAGs, resulting in a lack of standardized task formulations and evaluation protocols tailored for DyTAG generation. To address these critical issues, we propose the Generative DyTAG Benchmark (GDGB), which comprises eight meticulously curated DyTAG datasets with high-quality textual features for both nodes and edges, overcoming limitations of prior datasets. Building on GDGB, we define two novel DyTAG generation tasks: Transductive Dynamic Graph Generation (TDGG) and Inductive Dynamic Graph Generation (IDGG). TDGG transductively generates a target DyTAG based on the given source and destination node sets, while the more challenging IDGG introduces new node generation to inductively model the dynamic expansion of real-world graph data. To enable holistic evaluation, we design multifaceted metrics that assess the structural, temporal, and textual quality of the generated DyTAGs. We further propose GAG-General, an LLM-based multi-agent generative framework tailored for reproducible and robust benchmarking of DyTAG generation. Experimental results demonstrate that GDGB enables rigorous evaluation of TDGG and IDGG, with key insights revealing the critical interplay of structural and textual features in DyTAG generation. These findings establish GDGB as a foundational resource for advancing generative DyTAG research and unlocking further practical applications in DyTAG generation. GDGB datasets, source code, and leaderboards are available at https://gdgb-algo.github.io/.
- url_abs: https://arxiv.org/abs/2507.03267v1
- url_pdf: https://arxiv.org/pdf/2507.03267v1.pdf
- authors: Jie Peng, Jiarui Ji, Runlin Lei, Zhewei Wei, Yongchao Liu, Chuntao Hong
- tasks: Benchmarking, Graph Generation, Graph Learning
- date: 2025-07-04
**Reviving Cultural Heritage: A Novel Approach for Comprehensive Historical Document Restoration**
- paper_url: https://paperswithcode.com/paper/reviving-cultural-heritage-a-novel-approach
- arxiv_id: 2507.05108
- abstract: Historical documents represent an invaluable cultural heritage, yet have undergone significant degradation over time through tears, water erosion, and oxidation. Existing Historical Document Restoration (HDR) methods primarily focus on single modality or limited-size restoration, failing to meet practical needs. To fill this gap, we present a full-page HDR dataset (FPHDR) and a novel automated HDR solution (AutoHDR). Specifically, FPHDR comprises 1,633 real and 6,543 synthetic images with character-level and line-level locations, as well as character annotations in different damage grades. AutoHDR mimics historians' restoration workflows through a three-stage approach: OCR-assisted damage localization, vision-language context text prediction, and patch autoregressive appearance restoration. The modular architecture of AutoHDR enables seamless human-machine collaboration, allowing for flexible intervention and optimization at each restoration stage. Experiments demonstrate AutoHDR's remarkable performance in HDR. When processing severely damaged documents, our method improves OCR accuracy from 46.83% to 84.05%, with further enhancement to 94.25% through human-machine collaboration. We believe this work represents a significant advancement in automated historical document restoration and contributes substantially to cultural heritage preservation. The model and dataset are available at https://github.com/SCUT-DLVCLab/AutoHDR.
- url_abs: https://arxiv.org/abs/2507.05108v1
- url_pdf: https://arxiv.org/pdf/2507.05108v1.pdf
- authors: Yuyi Zhang, Peirong Zhang, Zhenhua Yang, Pengyu Yan, Yongxin Shi, Pengwei Liu, Fengjun Guo, Lianwen Jin
- tasks: Optical Character Recognition (OCR)
- date: 2025-07-07
**Dynamic Chunking for End-to-End Hierarchical Sequence Modeling**
- paper_url: https://paperswithcode.com/paper/dynamic-chunking-for-end-to-end-hierarchical
- arxiv_id: 2507.07955
- abstract: Major progress on language models (LMs) in recent years has largely resulted from moving away from specialized models designed for specific tasks, to general models based on powerful architectures (e.g. the Transformer) that learn everything from raw data. Despite this trend, pre-processing steps such as tokenization remain a barrier to true end-to-end foundation models. We introduce a collection of new techniques that enable a dynamic chunking mechanism which automatically learns content- and context-dependent segmentation strategies learned jointly with the rest of the model. Incorporating this into an explicit hierarchical network (H-Net) allows replacing the (implicitly hierarchical) tokenization-LM-detokenization pipeline with a single model learned fully end-to-end. When compute- and data-matched, an H-Net with one stage of hierarchy operating at the byte level outperforms a strong Transformer language model operating over BPE tokens. Iterating the hierarchy to multiple stages further increases its performance by modeling multiple levels of abstraction, demonstrating significantly better scaling with data and matching the token-based Transformer of twice its size. H-Nets pretrained on English show significantly increased character-level robustness, and qualitatively learn meaningful data-dependent chunking strategies without any heuristics or explicit supervision. Finally, the H-Net's improvement over tokenized pipelines is further increased in languages and modalities with weaker tokenization heuristics, such as Chinese and code, or DNA sequences (nearly 4x improvement in data efficiency over baselines), showing the potential of true end-to-end models that learn and scale better from unprocessed data.
- url_abs: https://arxiv.org/abs/2507.07955v2
- url_pdf: https://arxiv.org/pdf/2507.07955v2.pdf
- authors: Sukjun Hwang, Brandon Wang, Albert Gu
- tasks: Chunking
- date: 2025-07-10
**Evaluating Morphological Alignment of Tokenizers in 70 Languages**
- paper_url: https://paperswithcode.com/paper/evaluating-morphological-alignment-of
- arxiv_id: 2507.06378
- abstract: While tokenization is a key step in language modeling, with effects on model training and performance, it remains unclear how to effectively evaluate tokenizer quality. One proposed dimension of tokenizer quality is the extent to which tokenizers preserve linguistically meaningful subwords, aligning token boundaries with morphological boundaries within a word. We expand MorphScore (Arnett & Bergen, 2025), which previously covered 22 languages, to support a total of 70 languages. The updated MorphScore offers more flexibility in evaluation and addresses some of the limitations of the original version. We then correlate our alignment scores with downstream task performance for five pre-trained language models on seven tasks, with at least one task in each of the languages in our sample. We find that morphological alignment does not explain very much variance in model performance, suggesting that morphological alignment alone does not measure dimensions of tokenization quality relevant to model performance.
- url_abs: https://arxiv.org/abs/2507.06378v1
- url_pdf: https://arxiv.org/pdf/2507.06378v1.pdf
- authors: Catherine Arnett, Marisa Hudspeth, Brendan O'Connor
- tasks: Language Modeling, Language Modelling
- date: 2025-07-08
**EvolveNav: Self-Improving Embodied Reasoning for LLM-Based Vision-Language Navigation**
- paper_url: https://paperswithcode.com/paper/evolvenav-self-improving-embodied-reasoning
- arxiv_id: 2506.01551
- abstract: Building Vision-Language Navigation (VLN) agents which can navigate following natural language instructions is a long-standing goal in human-robot interaction applications. Recent studies have revealed the potential of training open-source Large Language Models (LLMs) to unleash LLMs' reasoning ability for improving navigation, and simultaneously mitigate the domain gap between LLMs' training corpus and the VLN task. However, these approaches primarily adopt direct input-output mapping paradigms, making the mapping difficult to learn and the navigational decisions unexplainable. Chain-of-Thought (CoT) training is a promising way to improve both navigational decision accuracy and interpretability, but the complexity of the navigation task makes perfect CoT labels unavailable and may lead to overfitting under pure CoT supervised fine-tuning. In this paper, we propose a novel sElf-improving embodied reasoning framework for boosting LLM-based vision-language Navigation, dubbed EvolveNav. Our EvolveNav consists of two stages: (1) Formalized CoT Supervised Fine-Tuning, where we train the model with formalized CoT labels to both activate the model's navigational reasoning capabilities and increase the reasoning speed; (2) Self-Reflective Post-Training, where the model is iteratively trained with its own reasoning outputs as self-enriched CoT labels to enhance the supervision diversity. A self-reflective auxiliary task is also introduced to encourage learning correct reasoning patterns by contrasting with wrong ones. Experimental results on the popular VLN benchmarks demonstrate the superiority of EvolveNav over previous LLM-based VLN approaches. Code is available at https://github.com/expectorlin/EvolveNav.
- url_abs: https://arxiv.org/abs/2506.01551v2
- url_pdf: https://arxiv.org/pdf/2506.01551v2.pdf
- authors: Bingqian Lin, Yunshuang Nie, Khun Loun Zai, Ziming Wei, Mingfei Han, Rongtao Xu, Minzhe Niu, Jianhua Han, Liang Lin, Cewu Lu, Xiaodan Liang
- tasks: Navigate, Vision-Language Navigation
- date: 2025-06-02
**Vision Foundation Models as Effective Visual Tokenizers for Autoregressive Image Generation**
- paper_url: https://paperswithcode.com/paper/vision-foundation-models-as-effective-visual
- arxiv_id: 2507.08441
- abstract: Leveraging the powerful representations of pre-trained vision foundation models -- traditionally used for visual comprehension -- we explore a novel direction: building an image tokenizer directly atop such models, a largely underexplored area. Specifically, we employ a frozen vision foundation model as the encoder of our tokenizer. To enhance its effectiveness, we introduce two key components: (1) a region-adaptive quantization framework that reduces redundancy in the pre-trained features on regular 2D grids, and (2) a semantic reconstruction objective that aligns the tokenizer's outputs with the foundation model's representations to preserve semantic fidelity. Based on these designs, our proposed image tokenizer, VFMTok, achieves substantial improvements in image reconstruction and generation quality, while also enhancing token efficiency. It further boosts autoregressive (AR) generation -- achieving a gFID of 2.07 on ImageNet benchmarks, while accelerating model convergence by three times, and enabling high-fidelity class-conditional synthesis without the need for classifier-free guidance (CFG). The code will be released publicly to benefit the community.
- url_abs: https://arxiv.org/abs/2507.08441v1
- url_pdf: https://arxiv.org/pdf/2507.08441v1.pdf
- authors: Anlin Zheng, Xin Wen, Xuanyang Zhang, Chuofan Ma, Tiancai Wang, Gang Yu, Xiangyu Zhang, Xiaojuan Qi
- tasks: Image Generation, Image Reconstruction, Quantization
- date: 2025-07-11
**Lumos-1: On Autoregressive Video Generation from a Unified Model Perspective**
- paper_url: https://paperswithcode.com/paper/lumos-1-on-autoregressive-video-generation
- arxiv_id: 2507.08801
- abstract: Autoregressive large language models (LLMs) have unified a vast range of language tasks, inspiring preliminary efforts in autoregressive video generation. Existing autoregressive video generators either diverge from standard LLM architectures, depend on bulky external text encoders, or incur prohibitive latency due to next-token decoding. In this paper, we introduce Lumos-1, an autoregressive video generator that retains the LLM architecture with minimal architectural modifications. To inject spatiotemporal correlations in LLMs, we identify the efficacy of incorporating 3D RoPE and diagnose its imbalanced frequency spectrum ranges. Therefore, we propose MM-RoPE, a RoPE scheme that preserves the original textual RoPE while providing comprehensive frequency spectra and scaled 3D positions for modeling multimodal spatiotemporal data. Moreover, Lumos-1 resorts to a token dependency strategy that obeys intra-frame bidirectionality and inter-frame temporal causality. Based on this dependency strategy, we identify the issue of frame-wise loss imbalance caused by spatial information redundancy and solve it by proposing Autoregressive Discrete Diffusion Forcing (AR-DF). AR-DF introduces temporal tube masking during training with a compatible inference-time masking policy to avoid quality degradation. By using memory-efficient training techniques, we pre-train Lumos-1 on only 48 GPUs, achieving performance comparable to EMU3 on GenEval, COSMOS-Video2World on VBench-I2V, and OpenSoraPlan on VBench-T2V. Code and models are available at https://github.com/alibaba-damo-academy/Lumos.
- url_abs: https://arxiv.org/abs/2507.08801v1
- url_pdf: https://arxiv.org/pdf/2507.08801v1.pdf
- authors: Hangjie Yuan, Weihua Chen, Jun Cen, Hu Yu, Jingyun Liang, Shuning Chang, Zhihui Lin, Tao Feng, Pengwei Liu, Jiazheng Xing, Hao Luo, Jiasheng Tang, Fan Wang, Yi Yang
- tasks: Video Generation
- date: 2025-07-11
https://paperswithcode.com/paper/audio-flamingo-3-advancing-audio-intelligence
|
2507.08128
| null | null |
Audio Flamingo 3: Advancing Audio Intelligence with Fully Open Large Audio Language Models
|
We present Audio Flamingo 3 (AF3), a fully open state-of-the-art (SOTA) large audio-language model that advances reasoning and understanding across speech, sound, and music. AF3 introduces: (i) AF-Whisper, a unified audio encoder trained using a novel strategy for joint representation learning across all 3 modalities of speech, sound, and music; (ii) flexible, on-demand thinking, allowing the model to do chain-of-thought-type reasoning before answering; (iii) multi-turn, multi-audio chat; (iv) long audio understanding and reasoning (including speech) up to 10 minutes; and (v) voice-to-voice interaction. To enable these capabilities, we propose several large-scale training datasets curated using novel strategies, including AudioSkills-XL, LongAudio-XL, AF-Think, and AF-Chat, and train AF3 with a novel five-stage curriculum-based training strategy. Trained on only open-source audio data, AF3 achieves new SOTA results on more than 20 (long) audio understanding and reasoning benchmarks, surpassing both open-weight and closed-source models trained on much larger datasets.
| null |
https://arxiv.org/abs/2507.08128v1
|
https://arxiv.org/pdf/2507.08128v1.pdf
| null |
[
"Arushi Goel",
"Sreyan Ghosh",
"Jaehyeon Kim",
"Sonal Kumar",
"Zhifeng Kong",
"Sang-gil Lee",
"Chao-Han Huck Yang",
"Ramani Duraiswami",
"Dinesh Manocha",
"Rafael Valle",
"Bryan Catanzaro"
] |
[
"Language Modeling",
"Language Modelling",
"Representation Learning"
] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/4kagent-agentic-any-image-to-4k-super
|
2507.07105
| null | null |
4KAgent: Agentic Any Image to 4K Super-Resolution
|
We present 4KAgent, a unified agentic super-resolution generalist system designed to universally upscale any image to 4K resolution (and even higher, if applied iteratively). Our system can transform images from extremely low resolutions with severe degradations, for example, highly distorted inputs at 256x256, into crystal-clear, photorealistic 4K outputs. 4KAgent comprises three core components: (1) Profiling, a module that customizes the 4KAgent pipeline based on bespoke use cases; (2) A Perception Agent, which leverages vision-language models alongside image quality assessment experts to analyze the input image and make a tailored restoration plan; and (3) A Restoration Agent, which executes the plan, following a recursive execution-reflection paradigm, guided by a quality-driven mixture-of-expert policy to select the optimal output for each step. Additionally, 4KAgent embeds a specialized face restoration pipeline, significantly enhancing facial details in portrait and selfie photos. We rigorously evaluate our 4KAgent across 11 distinct task categories encompassing a total of 26 diverse benchmarks, setting new state-of-the-art on a broad spectrum of imaging domains. Our evaluations cover natural images, portrait photos, AI-generated content, satellite imagery, fluorescence microscopy, and medical imaging like fundoscopy, ultrasound, and X-ray, demonstrating superior performance in terms of both perceptual (e.g., NIQE, MUSIQ) and fidelity (e.g., PSNR) metrics. By establishing a novel agentic paradigm for low-level vision tasks, we aim to catalyze broader interest and innovation within vision-centric autonomous agents across diverse research communities. We will release all the code, models, and results at: https://4kagent.github.io.
| null |
https://arxiv.org/abs/2507.07105v1
|
https://arxiv.org/pdf/2507.07105v1.pdf
| null |
[
"Yushen Zuo",
"Qi Zheng",
"Mingyang Wu",
"Xinrui Jiang",
"Renjie Li",
"Jian Wang",
"Yide Zhang",
"Gengchen Mai",
"Lihong V. Wang",
"James Zou",
"Xiaoyu Wang",
"Ming-Hsuan Yang",
"Zhengzhong Tu"
] |
[
"4k",
"Image Quality Assessment",
"Super-Resolution"
] | 2025-07-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/compassjudger-2-towards-generalist-judge
|
2507.09104
| null | null |
CompassJudger-2: Towards Generalist Judge Model via Verifiable Rewards
|
Recently, the role of LLM-as-judge in evaluating large language models has gained prominence. However, current judge models suffer from narrow specialization and limited robustness, undermining their capacity for comprehensive evaluations. In this work, we present CompassJudger-2, a novel generalist judge model that overcomes these limitations via a task-driven, multi-domain data curation strategy. Central to our approach is supervising judgment tasks with verifiable rewards, guiding intrinsic critical reasoning through rejection sampling to foster robust, generalizable judgment capabilities. We introduce a refined learning objective with margin policy gradient loss to enhance performance. Empirically, CompassJudger-2 achieves superior results across multiple judge and reward benchmarks, and our 7B model demonstrates competitive judgment accuracy with significantly larger models like DeepSeek-V3 and Qwen3-235B-A22B. Additionally, we propose JudgerBenchV2, a comprehensive benchmark evaluating cross-domain judgment accuracy and rank consistency to standardize judge model evaluation. These contributions advance robust, scalable LLM judgment and establish new performance and evaluation standards.
| null |
https://arxiv.org/abs/2507.09104v1
|
https://arxiv.org/pdf/2507.09104v1.pdf
| null |
[
"Taolin Zhang",
"Maosong Cao",
"Alexander Lam",
"Songyang Zhang",
"Kai Chen"
] |
[] | 2025-07-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/propinn-demystifying-propagation-failures-in
|
2502.00803
| null | null |
ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks
|
Physics-informed neural networks (PINNs) have earned high expectations in solving partial differential equations (PDEs), but their optimization usually faces thorny challenges due to the unique derivative-dependent loss function. By analyzing the loss distribution, previous research observed the propagation failure phenomenon of PINNs, intuitively described as the correct supervision for model outputs cannot ''propagate'' from initial states or boundaries to the interior domain. Going beyond intuitive understanding, this paper provides a formal and in-depth study of propagation failure and its root cause. Based on a detailed comparison with classical finite element methods, we ascribe the failure to the conventional single-point-processing architecture of PINNs and further prove that propagation failure is essentially caused by the lower gradient correlation of PINN models on nearby collocation points. Compared to superficial loss maps, this new perspective provides a more precise quantitative criterion to identify where and why PINN fails. The theoretical finding also inspires us to present a new PINN architecture, named ProPINN, which can effectively unite the gradients of region points for better propagation. ProPINN can reliably resolve PINN failure modes and significantly surpass advanced Transformer-based models with 46% relative promotion.
| null |
https://arxiv.org/abs/2502.00803v2
|
https://arxiv.org/pdf/2502.00803v2.pdf
| null |
[
"Haixu Wu",
"Yuezhou Ma",
"Hang Zhou",
"Huikun Weng",
"Jianmin Wang",
"Mingsheng Long"
] |
[] | 2025-02-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dc-ar-efficient-masked-autoregressive-image
|
2507.04947
| null | null |
DC-AR: Efficient Masked Autoregressive Image Generation with Deep Compression Hybrid Tokenizer
|
We introduce DC-AR, a novel masked autoregressive (AR) text-to-image generation framework that delivers superior image generation quality with exceptional computational efficiency. Due to the tokenizers' limitations, prior masked AR models have lagged behind diffusion models in terms of quality or efficiency. We overcome this limitation by introducing DC-HT - a deep compression hybrid tokenizer for AR models that achieves a 32x spatial compression ratio while maintaining high reconstruction fidelity and cross-resolution generalization ability. Building upon DC-HT, we extend MaskGIT and create a new hybrid masked autoregressive image generation framework that first produces the structural elements through discrete tokens and then applies refinements via residual tokens. DC-AR achieves state-of-the-art results with a gFID of 5.49 on MJHQ-30K and an overall score of 0.69 on GenEval, while offering 1.5-7.9x higher throughput and 2.0-3.5x lower latency compared to prior leading diffusion and autoregressive models.
| null |
https://arxiv.org/abs/2507.04947v1
|
https://arxiv.org/pdf/2507.04947v1.pdf
| null |
[
"Yecheng Wu",
"Junyu Chen",
"Zhuoyang Zhang",
"Enze Xie",
"Jincheng Yu",
"Junsong Chen",
"Jinyi Hu",
"Yao Lu",
"Song Han",
"Han Cai"
] |
[
"Computational Efficiency",
"Image Generation",
"Text to Image Generation",
"Text-to-Image Generation"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/enhancing-privacy-utility-trade-offs-to-1
|
2504.18032
| null | null |
Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models
|
Text-to-image diffusion models have demonstrated remarkable capabilities in creating images highly aligned with user prompts, yet their proclivity for memorizing training set images has sparked concerns about the originality of the generated images and privacy issues, potentially leading to legal complications for both model owners and users, particularly when the memorized images contain proprietary content. Although methods to mitigate these issues have been suggested, enhancing privacy often results in a significant decrease in the utility of the outputs, as indicated by text-alignment scores. To bridge the research gap, we introduce a novel method, PRSS, which refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring (PR) to improve privacy and incorporating semantic prompt search (SS) to enhance utility. Extensive experiments across various privacy levels demonstrate that our approach consistently improves the privacy-utility trade-off, establishing a new state-of-the-art.
| null |
https://arxiv.org/abs/2504.18032v1
|
https://arxiv.org/pdf/2504.18032v1.pdf
|
CVPR 2025 1
|
[
"Chen Chen",
"Daochang Liu",
"Mubarak Shah",
"Chang Xu"
] |
[
"Memorization"
] | 2025-04-25T00:00:00 |
http://openaccess.thecvf.com//content/CVPR2025/html/Chen_Enhancing_Privacy-Utility_Trade-offs_to_Mitigate_Memorization_in_Diffusion_Models_CVPR_2025_paper.html
|
http://openaccess.thecvf.com//content/CVPR2025/papers/Chen_Enhancing_Privacy-Utility_Trade-offs_to_Mitigate_Memorization_in_Diffusion_Models_CVPR_2025_paper.pdf
|
enhancing-privacy-utility-trade-offs-to
| null |
[] |
https://paperswithcode.com/paper/when-graph-contrastive-learning-backfires
|
2507.07436
| null | null |
When Graph Contrastive Learning Backfires: Spectral Vulnerability and Defense in Recommendation
|
Graph Contrastive Learning (GCL) has demonstrated substantial promise in enhancing the robustness and generalization of recommender systems, particularly by enabling models to leverage large-scale unlabeled data for improved representation learning. However, in this paper, we reveal an unexpected vulnerability: the integration of GCL inadvertently increases the susceptibility of a recommender to targeted promotion attacks. Through both theoretical investigation and empirical validation, we identify the root cause as the spectral smoothing effect induced by contrastive optimization, which disperses item embeddings across the representation space and unintentionally enhances the exposure of target items. Building on this insight, we introduce CLeaR, a bi-level optimization attack method that deliberately amplifies spectral smoothness, enabling a systematic investigation of the susceptibility of GCL-based recommendation models to targeted promotion attacks. Our findings highlight the urgent need for robust countermeasures; in response, we further propose SIM, a spectral irregularity mitigation framework designed to accurately detect and suppress targeted items without compromising model performance. Extensive experiments on multiple benchmark datasets demonstrate that, compared to existing targeted promotion attacks, GCL-based recommendation models exhibit greater susceptibility when evaluated with CLeaR, while SIM effectively mitigates these vulnerabilities.
| null |
https://arxiv.org/abs/2507.07436v1
|
https://arxiv.org/pdf/2507.07436v1.pdf
| null |
[
"Zongwei Wang",
"Min Gao",
"Junliang Yu",
"Shazia Sadiq",
"Hongzhi Yin",
"Ling Liu"
] |
[
"Contrastive Learning",
"Recommendation Systems",
"Representation Learning"
] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nlgcl-naturally-existing-neighbor-layers
|
2507.07522
| null | null |
NLGCL: Naturally Existing Neighbor Layers Graph Contrastive Learning for Recommendation
|
Graph Neural Networks (GNNs) are widely used in collaborative filtering to capture high-order user-item relationships. To address the data sparsity problem in recommendation systems, Graph Contrastive Learning (GCL) has emerged as a promising paradigm that maximizes mutual information between contrastive views. However, existing GCL methods rely on augmentation techniques that introduce semantically irrelevant noise and incur significant computational and storage costs, limiting effectiveness and efficiency. To overcome these challenges, we propose NLGCL, a novel contrastive learning framework that leverages naturally contrastive views between neighbor layers within GNNs. By treating each node and its neighbors in the next layer as positive pairs, and other nodes as negatives, NLGCL avoids augmentation-based noise while preserving semantic relevance. This paradigm eliminates costly view construction and storage, making it computationally efficient and practical for real-world scenarios. Extensive experiments on four public datasets demonstrate that NLGCL outperforms state-of-the-art baselines in effectiveness and efficiency.
| null |
https://arxiv.org/abs/2507.07522v1
|
https://arxiv.org/pdf/2507.07522v1.pdf
| null |
[
"Jinfeng Xu",
"Zheyu Chen",
"Shuo Yang",
"Jinze Li",
"Hewei Wang",
"Wei Wang",
"Xiping Hu",
"Edith Ngai"
] |
[
"Collaborative Filtering",
"Contrastive Learning",
"Recommendation Systems"
] | 2025-07-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/similarity-guided-diffusion-for-contrastive
|
2507.11866
| null | null |
Similarity-Guided Diffusion for Contrastive Sequential Recommendation
|
In sequential recommendation systems, data augmentation and contrastive learning techniques have recently been introduced using diffusion models to achieve robust representation learning. However, most of the existing approaches use random augmentation, which risks damaging the contextual information of the original sequence. Accordingly, we propose a Similarity-Guided Diffusion for Contrastive Sequential Recommendation. Our method leverages the similarity between item embedding vectors to generate semantically consistent noise. Moreover, we utilize high confidence scores in the denoising process to select our augmentation positions. This approach more effectively reflects contextual and structural information compared to augmentation at random positions. From a contrastive learning perspective, the proposed augmentation technique provides more discriminative positive and negative samples, simultaneously improving training efficiency and recommendation performance. Experimental results on five benchmark datasets show that SimDiffRec outperforms the existing baseline models.
| null |
https://arxiv.org/abs/2507.11866v1
|
https://arxiv.org/pdf/2507.11866v1.pdf
| null |
[
"Jinkyeong Choi",
"Yejin Noh",
"Donghyeon Park"
] |
[
"Contrastive Learning",
"Data Augmentation",
"Denoising",
"Recommendation Systems",
"Representation Learning",
"Sequential Recommendation"
] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/voxtral
|
2507.13264
| null | null |
Voxtral
|
We present Voxtral Mini and Voxtral Small, two multimodal audio chat models. Voxtral is trained to comprehend both spoken audio and text documents, achieving state-of-the-art performance across a diverse range of audio benchmarks, while preserving strong text capabilities. Voxtral Small outperforms a number of closed-source models, while being small enough to run locally. A 32K context window enables the model to handle audio files up to 40 minutes in duration and long multi-turn conversations. We also contribute three benchmarks for evaluating speech understanding models on knowledge and trivia. Both Voxtral models are released under Apache 2.0 license.
| null |
https://arxiv.org/abs/2507.13264v1
|
https://arxiv.org/pdf/2507.13264v1.pdf
| null |
[
"Alexander H. Liu",
"Andy Ehrenberg",
"Andy Lo",
"Clément Denoix",
"Corentin Barreau",
"Guillaume Lample",
"Jean-Malo Delignon",
"Khyathi Raghavi Chandu",
"Patrick von Platen",
"Pavankumar Reddy Muddireddy",
"Sanchit Gandhi",
"Soham Ghosh",
"Srijan Mishra",
"Thomas Foubert",
"Abhinav Rastogi",
"Adam Yang",
"Albert Q. Jiang",
"Alexandre Sablayrolles",
"Amélie Héliou",
"Amélie Martin",
"Anmol Agarwal",
"Antoine Roux",
"Arthur Darcet",
"Arthur Mensch",
"Baptiste Bout",
"Baptiste Rozière",
"Baudouin De Monicault",
"Chris Bamford",
"Christian Wallenwein",
"Christophe Renaudin",
"Clémence Lanfranchi",
"Darius Dabert",
"Devendra Singh Chaplot",
"Devon Mizelle",
"Diego de Las Casas",
"Elliot Chane-Sane",
"Emilien Fugier",
"Emma Bou Hanna",
"Gabrielle Berrada",
"Gauthier Delerce",
"Gauthier Guinet",
"Georgii Novikov",
"Guillaume Martin",
"Himanshu Jaju",
"Jan Ludziejewski",
"Jason Rute",
"Jean-Hadrien Chabran",
"Jessica Chudnovsky",
"Joachim Studnia",
"Joep Barmentlo",
"Jonas Amar",
"Josselin Somerville Roberts",
"Julien Denize",
"Karan Saxena",
"Karmesh Yadav",
"Kartik Khandelwal",
"Kush Jain",
"Lélio Renard Lavaud",
"Léonard Blier",
"Lingxiao Zhao",
"Louis Martin",
"Lucile Saulnier",
"Luyu Gao",
"Marie Pellat",
"Mathilde Guillaumin",
"Mathis Felardos",
"Matthieu Dinot",
"Maxime Darrin",
"Maximilian Augustin",
"Mickaël Seznec",
"Neha Gupta",
"Nikhil Raghuraman",
"Olivier Duchenne",
"Patricia Wang",
"Patryk Saffer",
"Paul Jacob",
"Paul Wambergue",
"Paula Kurylowicz",
"Philomène Chagniot",
"Pierre Stock",
"Pravesh Agrawal",
"Rémi Delacourt",
"Romain Sauvestre",
"Roman Soletskyi",
"Sagar Vaze",
"Sandeep Subramanian",
"Saurabh Garg",
"Shashwat Dalal",
"Siddharth Gandhi",
"Sumukh Aithal",
"Szymon Antoniak",
"Teven Le Scao",
"Thibault Schueller",
"Thibaut Lavril",
"Thomas Robert",
"Thomas Wang",
"Timothée Lacroix",
"Tom Bewley",
"Valeriia Nemychnikova",
"Victor Paltz",
"Virgile Richard",
"Wen-Ding Li",
"William Marshall",
"Xuanyu Zhang",
"Yihan Wan",
"Yunhao Tang"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/from-id-based-to-id-free-rethinking-id
|
2507.05715
| null | null |
From ID-based to ID-free: Rethinking ID Effectiveness in Multimodal Collaborative Filtering Recommendation
|
Most existing multimodal collaborative filtering recommendation (MCFRec) methods rely heavily on ID features and multimodal content to enhance recommendation performance. However, this paper reveals that ID features are effective but have limited benefits in multimodal collaborative filtering recommendation. Therefore, this paper systematically deconstructs the pros and cons of ID features: (i) they provide an initial embedding but lack semantic richness, (ii) they provide a unique identifier for each user and item but hinder generalization to untrained data, and (iii) they assist in aligning and fusing multimodal features but may lead to representation shift. Based on these insights, this paper proposes IDFREE, an ID-free multimodal collaborative Filtering REcommEndation baseline. IDFREE replaces ID features with multimodal features and positional encodings to generate semantically meaningful ID-free embeddings. For ID-free multimodal collaborative filtering, it further proposes an adaptive similarity graph module to construct dynamic user-user and item-item graphs based on multimodal features. Then, an augmented user-item graph encoder is proposed to construct more effective user and item encodings. Finally, IDFREE achieves inter-multimodal alignment based on contrastive learning and uses Softmax loss as the recommendation loss. Basic experiments on three public datasets demonstrate that IDFREE outperforms existing ID-based MCFRec methods, achieving an average performance gain of 72.24% across standard metrics (Recall@5, 10, 20, 50 and NDCG@5, 10, 20, 50). Exploratory and extended experiments further validate our findings on the limitations of ID features in MCFRec. The code is released at https://github.com/G-H-Li/IDFREE.
| null |
https://arxiv.org/abs/2507.05715v1
|
https://arxiv.org/pdf/2507.05715v1.pdf
| null |
[
"Guohao Li",
"Li Jing",
"Jia Wu",
"Xuefei Li",
"Kai Zhu",
"Yue He"
] |
[
"Collaborative Filtering",
"Contrastive Learning"
] | 2025-07-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/hierarchical-intent-guided-optimization-with
|
2507.04623
| null | null |
Hierarchical Intent-guided Optimization with Pluggable LLM-Driven Semantics for Session-based Recommendation
|
Session-based Recommendation (SBR) aims to predict the next item a user will likely engage with, using their interaction sequence within an anonymous session. Existing SBR models often focus only on single-session information, ignoring inter-session relationships and valuable cross-session insights. Some methods try to include inter-session data but struggle with noise and irrelevant information, reducing performance. Additionally, most models rely on item ID co-occurrence and overlook rich semantic details, limiting their ability to capture fine-grained item features. To address these challenges, we propose a novel hierarchical intent-guided optimization approach with pluggable LLM-driven semantic learning for session-based recommendations, called HIPHOP. First, we introduce a pluggable embedding module based on large language models (LLMs) to generate high-quality semantic representations, enhancing item embeddings. Second, HIPHOP utilizes graph neural networks (GNNs) to model item transition relationships and incorporates a dynamic multi-intent capturing module to address users' diverse interests within a session. Additionally, we design a hierarchical inter-session similarity learning module, guided by user intent, to capture global and local session relationships, effectively exploring users' long-term and short-term interests. To mitigate noise, an intent-guided denoising strategy is applied during inter-session learning. Finally, we enhance the model's discriminative capability by using contrastive learning to optimize session representations. Experiments on multiple datasets show that HIPHOP significantly outperforms existing methods, demonstrating its effectiveness in improving recommendation quality. Our code is available: https://github.com/hjx159/HIPHOP.
| null |
https://arxiv.org/abs/2507.04623v1
|
https://arxiv.org/pdf/2507.04623v1.pdf
| null |
[
"Jinpeng Chen",
"Jianxiang He",
"Huan Li",
"Senzhang Wang",
"Yuan Cao",
"Kaimin Wei",
"Zhenye Yang",
"Ye Ji"
] |
[
"Contrastive Learning",
"Denoising",
"Session-Based Recommendations"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/lumicrs-asymmetric-contrastive-prototype
|
2507.04722
| null | null |
LumiCRS: Asymmetric Contrastive Prototype Learning for Long-Tail Conversational Movie Recommendation
|
Conversational recommender systems (CRSs) often suffer from an extreme long-tail distribution of dialogue data, causing a strong bias toward head-frequency blockbusters that sacrifices diversity and exacerbates the cold-start problem. An empirical analysis of DCRS and statistics on the REDIAL corpus show that only 10% of head movies account for nearly half of all mentions, whereas about 70% of tail movies receive merely 26% of the attention. This imbalance gives rise to three critical challenges: head over-fitting, body representation drift, and tail sparsity. To address these issues, we propose LumiCRS, an end-to-end framework that mitigates long-tail imbalance through three mutually reinforcing layers: (i) an Adaptive Comprehensive Focal Loss (ACFL) that dynamically adjusts class weights and focusing factors to curb head over-fitting and reduce popularity bias; (ii) Prototype Learning for Long-Tail Recommendation, which selects semantic, affective, and contextual prototypes to guide clustering and stabilize body and tail representations; and (iii) a GPT-4o-driven prototype-guided dialogue augmentation module that automatically generates diverse long-tail conversational snippets to alleviate tail sparsity and distribution shift. Together, these strategies enable LumiCRS to markedly improve recommendation accuracy, diversity, and fairness: on the REDIAL and INSPIRED benchmarks, LumiCRS boosts Recall@10 and Tail-Recall@10 by 7-15% over fifteen strong baselines, while human evaluations confirm superior fluency, informativeness, and long-tail relevance. These results demonstrate the effectiveness of multi-layer collaboration in building an efficient and fair long-tail conversational recommender.
| null |
https://arxiv.org/abs/2507.04722v1
|
https://arxiv.org/pdf/2507.04722v1.pdf
| null |
[
"Jinzhi Wang",
"Bin Li",
"Qingke Peng",
"Haozhou Li",
"Zeyuan Zeng",
"Ruimeng Li",
"Biyi Zhou"
] |
[
"Diversity",
"Fairness",
"Informativeness",
"Movie Recommendation",
"Recommendation Systems"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/clcarwin/focal_loss_pytorch/blob/e11e75bad957aecf641db6998a1016204722c1bb/focalloss.py#L6",
"description": "A **Focal Loss** function addresses class imbalance during training in tasks like object detection. Focal loss applies a modulating term to the cross entropy loss in order to focus learning on hard misclassified examples. It is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases. Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. \r\n\r\nFormally, the Focal Loss adds a factor $(1 - p\\_{t})^\\gamma$ to the standard cross entropy criterion. Setting $\\gamma>0$ reduces the relative loss for well-classified examples ($p\\_{t}>.5$), putting more focus on hard, misclassified examples. Here there is tunable *focusing* parameter $\\gamma \\ge 0$. \r\n\r\n$$ {\\text{FL}(p\\_{t}) = - (1 - p\\_{t})^\\gamma \\log\\left(p\\_{t}\\right)} $$",
"full_name": "Focal Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "Focal Loss",
"source_title": "Focal Loss for Dense Object Detection",
"source_url": "http://arxiv.org/abs/1708.02002v2"
}
] |
https://paperswithcode.com/paper/gist-cross-domain-click-through-rate
|
2507.05142
| null | null |
GIST: Cross-Domain Click-Through Rate Prediction via Guided Content-Behavior Distillation
|
Cross-domain Click-Through Rate prediction aims to tackle the data sparsity and the cold start problems in online advertising systems by transferring knowledge from source domains to a target domain. Most existing methods rely on overlapping users to facilitate this transfer, often focusing on joint training or a pre-training-with-fine-tuning approach to connect the source and target domains. However, in real-world industrial settings, joint training struggles to learn optimal representations with different distributions, and pre-training with fine-tuning is not well-suited for continuously integrating new data. To address these issues, we propose GIST, a cross-domain lifelong sequence model that decouples the training processes of the source and target domains. Unlike previous methods that search lifelong sequences in the source domains using only content or behavior signals or their simple combinations, we innovatively introduce a Content-Behavior Joint Training Module (CBJT), which aligns content-behavior distributions and combines them with guided information to facilitate a more stable representation. Furthermore, we develop an Asymmetric Similarity Integration strategy (ASI) to augment knowledge transfer through similarity computation. Extensive experiments demonstrate the effectiveness of GIST, surpassing SOTA methods on offline evaluations and an online A/B test. Deployed on the Xiaohongshu (RedNote) platform, GIST effectively enhances online ads system performance at scale, serving hundreds of millions of daily active users.
| null |
https://arxiv.org/abs/2507.05142v1
|
https://arxiv.org/pdf/2507.05142v1.pdf
| null |
[
"Wei Xu",
"Haoran Li",
"Baoyuan Ou",
"Lai Xu",
"Yingjie Qin",
"Ruilong Su",
"Ruiwen Xu"
] |
[
"Click-Through Rate Prediction",
"Transfer Learning"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/bradley-terry-and-multi-objective-reward
|
2507.07375
| null | null |
Bradley-Terry and Multi-Objective Reward Modeling Are Complementary
|
Reward models trained on human preference data have demonstrated strong effectiveness in aligning Large Language Models (LLMs) with human intent under the framework of Reinforcement Learning from Human Feedback (RLHF). However, RLHF remains vulnerable to reward hacking, where the policy exploits imperfections in the reward function rather than genuinely learning the intended behavior. Although significant efforts have been made to mitigate reward hacking, they predominantly focus on and evaluate in-distribution scenarios, where the training and testing data for the reward model share the same distribution. In this paper, we empirically show that state-of-the-art methods struggle in more challenging out-of-distribution (OOD) settings. We further demonstrate that incorporating fine-grained multi-attribute scores helps address this challenge. However, the limited availability of high-quality data often leads to weak performance of multi-objective reward functions, which can negatively impact overall performance and become the bottleneck. To address this issue, we propose a unified reward modeling framework that jointly trains Bradley--Terry (BT) single-objective and multi-objective regression-based reward functions using a shared embedding space. We theoretically establish a connection between the BT loss and the regression objective and highlight their complementary benefits. Specifically, the regression task enhances the single-objective reward function's ability to mitigate reward hacking in challenging OOD settings, while BT-based training improves the scoring capability of the multi-objective reward function, enabling a 7B model to outperform a 70B baseline. Extensive experimental results demonstrate that our framework significantly improves both the robustness and the scoring performance of reward models.
| null |
https://arxiv.org/abs/2507.07375v1
|
https://arxiv.org/pdf/2507.07375v1.pdf
| null |
[
"Zhiwei Zhang",
"Hui Liu",
"Xiaomin Li",
"Zhenwei Dai",
"Jingying Zeng",
"Fali Wang",
"Minhua Lin",
"Ramraj Chandradevan",
"Zhen Li",
"Chen Luo",
"Xianfeng Tang",
"Qi He",
"Suhang Wang"
] |
[
"Attribute",
"regression"
] | 2025-07-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/exploring-sparse-adapters-for-scalable
|
2507.07140
| null | null |
Exploring Sparse Adapters for Scalable Merging of Parameter Efficient Experts
|
Merging parameter-efficient task experts has recently gained growing attention as a way to build modular architectures that can be rapidly adapted on the fly for specific downstream tasks, without requiring additional fine-tuning. Typically, LoRA serves as the foundational building block of such parameter-efficient modular architectures, leveraging low-rank weight structures to reduce the number of trainable parameters. In this paper, we study the properties of sparse adapters, which train only a subset of weights in the base neural network, as potential building blocks of modular architectures. First, we propose a simple method for training highly effective sparse adapters, which is conceptually simpler than existing methods in the literature and surprisingly outperforms both LoRA and full fine-tuning in our setting. Next, we investigate the merging properties of these sparse adapters by merging adapters for up to 20 natural language processing tasks, thus scaling beyond what is usually studied in the literature. Our findings demonstrate that sparse adapters yield superior in-distribution performance post-merging compared to LoRA or full model merging. Achieving strong held-out performance remains a challenge for all methods considered.
| null |
https://arxiv.org/abs/2507.07140v2
|
https://arxiv.org/pdf/2507.07140v2.pdf
| null |
[
"Samin Yeasar Arnob",
"Zhan Su",
"Minseon Kim",
"Oleksiy Ostapenko",
"Riyasat Ohib",
"Esra'a Saleh",
"Doina Precup",
"Lucas Caccia",
"Alessandro Sordoni"
] |
[] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
}
] |
https://paperswithcode.com/paper/selecting-and-merging-towards-adaptable-and
|
2506.22813
| null | null |
Selecting and Merging: Towards Adaptable and Scalable Named Entity Recognition with Large Language Models
|
Supervised fine-tuning (SFT) is widely used to align large language models (LLMs) with information extraction (IE) tasks, such as named entity recognition (NER). However, annotating such fine-grained labels and training domain-specific models is costly. Existing works typically train a unified model across multiple domains, but such approaches lack adaptation and scalability since not all training data benefits target domains and scaling trained models remains challenging. We propose the SaM framework, which dynamically Selects and Merges expert models at inference time. Specifically, for a target domain, we select domain-specific experts pre-trained on existing domains based on (i) domain similarity to the target domain and (ii) performance on sampled instances, respectively. The experts are then merged to create task-specific models optimized for the target domain. By dynamically merging experts beneficial to target domains, we improve generalization across various domains without extra training. Additionally, experts can be added or removed conveniently, leading to great scalability. Extensive experiments on multiple benchmarks demonstrate our framework's effectiveness, which outperforms the unified model by an average of 10%. We further provide insights into potential improvements, practical experience, and extensions of our framework.
| null |
https://arxiv.org/abs/2506.22813v1
|
https://arxiv.org/pdf/2506.22813v1.pdf
| null |
[
"Zhuojun Ding",
"Wei Wei",
"Chenghao Fan"
] |
[
"named-entity-recognition",
"Named Entity Recognition",
"Named Entity Recognition (NER)",
"NER"
] | 2025-06-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via a contrastive loss (formulated as a normalized softmax) that pushes the embeddings of matched image-text pairs together and pushes those of non-matched image-text pairs apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to-image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
}
] |
https://paperswithcode.com/paper/a-single-merging-suffices-recovering-server
|
2507.06542
| null | null |
A Single Merging Suffices: Recovering Server-based Learning Performance in Decentralized Learning
|
Decentralized learning provides a scalable alternative to traditional parameter-server-based training, yet its performance is often hindered by limited peer-to-peer communication. In this paper, we study how communication should be scheduled over time, including determining when and how frequently devices synchronize. Our empirical results show that concentrating communication budgets in the later stages of decentralized training markedly improves global generalization. Surprisingly, we uncover that fully connected communication at the final step, implemented by a single global merging, is sufficient to match the performance of server-based training. We further show that low communication in decentralized learning preserves the \textit{mergeability} of local models throughout training. Our theoretical contributions, which explain these phenomena, are the first to establish that the globally merged model of decentralized SGD can converge faster than centralized mini-batch SGD. Technically, we novelly reinterpret part of the discrepancy among local models, which was previously considered detrimental noise, as constructive components that accelerate convergence. This work challenges the common belief that decentralized learning generalizes poorly under data heterogeneity and limited communication, while offering new insights into model merging and neural network loss landscapes.
| null |
https://arxiv.org/abs/2507.06542v1
|
https://arxiv.org/pdf/2507.06542v1.pdf
| null |
[
"Tongtian Zhu",
"Tianyu Zhang",
"Mingze Wang",
"Zhanpeng Zhou",
"Can Wang"
] |
[] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/transferring-visual-explainability-of-self
|
2507.04380
| null | null |
Transferring Visual Explainability of Self-Explaining Models through Task Arithmetic
|
In scenarios requiring both prediction and explanation efficiency for image classification, self-explaining models that perform both tasks in a single inference are effective. However, their training incurs substantial labeling and computational costs. This study aims to tackle the issue by proposing a method to transfer the visual explainability of self-explaining models, learned in a source domain, to a target domain based on a task arithmetic framework. Specifically, we construct a self-explaining model by extending image classifiers based on a vision-language pretrained model. We then define an \emph{explainability vector} as the difference between model parameters trained on the source domain with and without explanation supervision. Based on the task arithmetic framework, we impart explainability to a model trained only on the prediction task in the target domain by applying the explainability vector. Experimental results on various image classification datasets demonstrate that, except for transfers between some less-related domains, visual explainability can be successfully transferred from source to target domains, improving explanation quality in the target domain without sacrificing classification accuracy. Furthermore, we show that the explainability vector learned on a large and diverse dataset like ImageNet, extended with explanation supervision, exhibits universality and robustness, improving explanation quality on nine out of ten different target datasets. We also find that the explanation quality achieved with a single model inference is comparable to that of Kernel SHAP, which requires 150 model inferences.
| null |
https://arxiv.org/abs/2507.04380v1
|
https://arxiv.org/pdf/2507.04380v1.pdf
| null |
[
"Yuya Yoshikawa",
"Ryotaro Shimizu",
"Takahiro Kawashima",
"Yuki Saito"
] |
[
"image-classification",
"Image Classification",
"Task Arithmetic"
] | 2025-07-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/temporal-information-retrieval-via-time
|
2507.06782
| null | null |
Temporal Information Retrieval via Time-Specifier Model Merging
|
The rapid expansion of digital information and knowledge across structured and unstructured sources has heightened the importance of Information Retrieval (IR). While dense retrieval methods have substantially improved semantic matching for general queries, they consistently underperform on queries with explicit temporal constraints--often those containing numerical expressions and time specifiers such as ``in 2015.'' Existing approaches to Temporal Information Retrieval (TIR) improve temporal reasoning but often suffer from catastrophic forgetting, leading to reduced performance on non-temporal queries. To address this, we propose Time-Specifier Model Merging (TSM), a novel method that enhances temporal retrieval while preserving accuracy on non-temporal queries. TSM trains specialized retrievers for individual time specifiers and merges them into a unified model, enabling precise handling of temporal constraints without compromising non-temporal retrieval. Extensive experiments on both temporal and non-temporal datasets demonstrate that TSM significantly improves performance on temporally constrained queries while maintaining strong results on non-temporal queries, consistently outperforming other baseline methods. Our code is available at https://github.com/seungyoonee/TSM .
| null |
https://arxiv.org/abs/2507.06782v1
|
https://arxiv.org/pdf/2507.06782v1.pdf
| null |
[
"SeungYoon Han",
"Taeho Hwang",
"Sukmin Cho",
"Soyeong Jeong",
"Hoyun Song",
"Huije Lee",
"Jong C. Park"
] |
[
"Information Retrieval",
"model",
"Retrieval"
] | 2025-07-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mgffd-vlm-multi-granularity-prompt-learning
|
2507.12232
| null | null |
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM
|
Recent studies have utilized visual large language models (VLMs) to answer not only "Is this face a forgery?" but also "Why is the face a forgery?" These studies introduced forgery-related attributes, such as forgery location and type, to construct deepfake VQA datasets and train VLMs, achieving high accuracy while providing human-understandable explanatory text descriptions. However, these methods still have limitations. For example, they do not fully leverage face quality-related attributes, which are often abnormal in forged faces, and they lack effective training strategies for forgery-aware VLMs. In this paper, we extend the VQA dataset to create DD-VQA+, which features a richer set of attributes and a more diverse range of samples. Furthermore, we introduce a novel forgery detection framework, MGFFD-VLM, which integrates an Attribute-Driven Hybrid LoRA Strategy to enhance the capabilities of Visual Large Language Models (VLMs). Additionally, our framework incorporates Multi-Granularity Prompt Learning and a Forgery-Aware Training Strategy. By transforming classification and forgery segmentation results into prompts, our method not only improves forgery classification but also enhances interpretability. To further boost detection performance, we design multiple forgery-related auxiliary losses. Experimental results demonstrate that our approach surpasses existing methods in both text-based forgery judgment and analysis, achieving superior accuracy.
| null |
https://arxiv.org/abs/2507.12232v1
|
https://arxiv.org/pdf/2507.12232v1.pdf
| null |
[
"Tao Chen",
"Jingyi Zhang",
"Decheng Liu",
"Chunlei Peng"
] |
[
"Attribute",
"Face Swapping",
"Prompt Learning",
"Visual Question Answering (VQA)"
] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method in which the weight mask is periodically updated at random",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/a-neural-representation-framework-with-llm
|
2507.06719
| null | null |
A Neural Representation Framework with LLM-Driven Spatial Reasoning for Open-Vocabulary 3D Visual Grounding
|
Open-vocabulary 3D visual grounding aims to localize target objects based on free-form language queries, which is crucial for embodied AI applications such as autonomous navigation, robotics, and augmented reality. Learning 3D language fields through neural representations enables accurate understanding of 3D scenes from limited viewpoints and facilitates the localization of target objects in complex environments. However, existing language field methods struggle to accurately localize instances using spatial relations in language queries, such as ``the book on the chair.'' This limitation mainly arises from inadequate reasoning about spatial relations in both language queries and 3D scenes. In this work, we propose SpatialReasoner, a novel neural representation-based framework with large language model (LLM)-driven spatial reasoning that constructs a visual properties-enhanced hierarchical feature field for open-vocabulary 3D visual grounding. To enable spatial reasoning in language queries, SpatialReasoner fine-tunes an LLM to capture spatial relations and explicitly infer instructions for the target, anchor, and spatial relation. To enable spatial reasoning in 3D scenes, SpatialReasoner incorporates visual properties (opacity and color) to construct a hierarchical feature field. This field represents language and instance features using distilled CLIP features and masks extracted via the Segment Anything Model (SAM). The field is then queried using the inferred instructions in a hierarchical manner to localize the target 3D instance based on the spatial relation in the language query. Extensive experiments show that our framework can be seamlessly integrated into different neural representations, outperforming baseline models in 3D visual grounding while empowering their spatial reasoning capability.
| null |
https://arxiv.org/abs/2507.06719v1
|
https://arxiv.org/pdf/2507.06719v1.pdf
| null |
[
"Zhenyang Liu",
"Sixiao Zheng",
"Siyu Chen",
"Cairong Zhao",
"Longfei Liang",
"xiangyang xue",
"Yanwei Fu"
] |
[
"3D visual grounding",
"Autonomous Navigation",
"Large Language Model",
"Spatial Reasoning",
"Visual Grounding"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/OpenAI/CLIP",
"description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N \\times N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)",
"full_name": "Contrastive Language-Image Pre-training",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Representations",
"parent": null
},
"name": "CLIP",
"source_title": "Learning Transferable Visual Models From Natural Language Supervision",
"source_url": "https://arxiv.org/abs/2103.00020v1"
}
] |
https://paperswithcode.com/paper/disentangling-instance-and-scene-contexts-for
|
2507.08555
| null | null |
Disentangling Instance and Scene Contexts for 3D Semantic Scene Completion
|
3D Semantic Scene Completion (SSC) has gained increasing attention due to its pivotal role in 3D perception. Recent advancements have primarily focused on refining voxel-level features to construct 3D scenes. However, treating voxels as the basic interaction units inherently limits the utilization of class-level information, which is proven critical for enhancing the granularity of completion results. To address this, we propose \textbf{D}isentangling Instance and Scene Contexts (DISC), a novel dual-stream paradigm that enhances learning for both instance and scene categories through separated optimization. Specifically, we replace voxel queries with discriminative class queries, which incorporate class-specific geometric and semantic priors. Additionally, we exploit the intrinsic properties of classes to design specialized decoding modules, facilitating targeted interactions and efficient class-level information flow. Experimental results demonstrate that DISC achieves state-of-the-art (SOTA) performance on both SemanticKITTI and SSCBench-KITTI-360 benchmarks, with mIoU scores of 17.35 and 20.55, respectively. Remarkably, DISC even outperforms multi-frame SOTA methods using only single-frame input and significantly improves instance category performance, surpassing both single-frame and multi-frame SOTA instance mIoU by 17.9\% and 11.9\%, respectively, on the SemanticKITTI hidden test. The code is available at https://github.com/Enyu-Liu/DISC.
| null |
https://arxiv.org/abs/2507.08555v1
|
https://arxiv.org/pdf/2507.08555v1.pdf
| null |
[
"Enyu Liu",
"En Yu",
"Sijia Chen",
"Wenbing Tao"
] |
[
"3D Semantic Scene Completion"
] | 2025-07-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/i-2-world-intra-inter-tokenization-for
|
2507.09144
| null | null |
$I^{2}$-World: Intra-Inter Tokenization for Efficient Dynamic 4D Scene Forecasting
|
Forecasting the evolution of 3D scenes and generating unseen scenarios via occupancy-based world models offers substantial potential for addressing corner cases in autonomous driving systems. While tokenization has revolutionized image and video generation, efficiently tokenizing complex 3D scenes remains a critical challenge for 3D world models. To address this, we propose $I^{2}$-World, an efficient framework for 4D occupancy forecasting. Our method decouples scene tokenization into intra-scene and inter-scene tokenizers. The intra-scene tokenizer employs a multi-scale residual quantization strategy to hierarchically compress 3D scenes while preserving spatial details. The inter-scene tokenizer residually aggregates temporal dependencies across timesteps. This dual design preserves the compactness of 3D tokenizers while retaining the dynamic expressiveness of 4D tokenizers. Unlike decoder-only GPT-style autoregressive models, $I^{2}$-World adopts an encoder-decoder architecture. The encoder aggregates spatial context from the current scene and predicts a transformation matrix to enable high-level control over scene generation. The decoder, conditioned on this matrix and historical tokens, ensures temporal consistency during generation. Experiments demonstrate that $I^{2}$-World achieves state-of-the-art performance, outperforming existing methods by 25.1\% in mIoU and 36.9\% in IoU for 4D occupancy forecasting while exhibiting exceptional computational efficiency: it requires merely 2.9 GB of training memory and achieves real-time inference at 37.0 FPS. Our code is available on https://github.com/lzzzzzm/II-World.
| null |
https://arxiv.org/abs/2507.09144v1
|
https://arxiv.org/pdf/2507.09144v1.pdf
| null |
[
"Zhimin Liao",
"Ping Wei",
"Ruijie Zhang",
"Shuaijia Chen",
"Haoxuan Wang",
"Ziyang Ren"
] |
[
"Autonomous Driving",
"Computational Efficiency",
"Decoder",
"Scene Generation",
"Video Generation"
] | 2025-07-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lrm-1b-towards-large-routing-model
|
2507.03300
| null | null |
LRM-1B: Towards Large Routing Model
|
Vehicle routing problems (VRPs) are central to combinatorial optimization with significant practical implications. Recent advancements in neural combinatorial optimization (NCO) have demonstrated promising results by leveraging neural networks to solve VRPs, yet model scaling within this domain remains underexplored. Inspired by the success of model scaling in large language models (LLMs), this study introduces a Large Routing Model with 1 billion parameters (LRM-1B), designed to address diverse VRP scenarios. We present a comprehensive evaluation of LRM-1B across multiple problem variants, distributions, and sizes, establishing state-of-the-art results. Our findings reveal that LRM-1B not only adapts to different VRP challenges but also showcases superior performance, outperforming existing models. Additionally, we explore the scaling behavior of neural routing models from 1M to 1B parameters. Our analysis confirms a power-law relationship between multiple model factors and performance, offering critical insights into the optimal configurations for foundation neural routing solvers.
| null |
https://arxiv.org/abs/2507.03300v1
|
https://arxiv.org/pdf/2507.03300v1.pdf
| null |
[
"Han Li",
"Fei Liu",
"Zhenkun Wang",
"Qingfu Zhang"
] |
[
"Combinatorial Optimization",
"model"
] | 2025-07-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lpoi-listwise-preference-optimization-for
|
2505.21061
| null | null |
LPOI: Listwise Preference Optimization for Vision Language Models
|
Aligning large VLMs with human preferences is a challenging task, as methods like RLHF and DPO often overfit to textual information or exacerbate hallucinations. Although augmenting negative image samples partially addresses these pitfalls, no prior work has employed listwise preference optimization for VLMs, due to the complexity and cost of constructing listwise image samples. In this work, we propose LPOI, the first object-aware listwise preference optimization developed for reducing hallucinations in VLMs. LPOI identifies and masks a critical object in the image, and then interpolates the masked region between the positive and negative images to form a sequence of incrementally more complete images. The model is trained to rank these images in ascending order of object visibility, effectively reducing hallucinations while retaining visual fidelity. LPOI requires no extra annotations beyond standard pairwise preference data, as it automatically constructs the ranked lists through object masking and interpolation. Comprehensive experiments on MMHalBench, AMBER, and Object HalBench confirm that LPOI outperforms existing preference optimization methods in reducing hallucinations and enhancing VLM performance. We make the code available at https://github.com/fatemehpesaran310/lpoi.
| null |
https://arxiv.org/abs/2505.21061v1
|
https://arxiv.org/pdf/2505.21061v1.pdf
| null |
[
"Fatemeh Pesaran Zadeh",
"Yoojin Oh",
"Gunhee Kim"
] |
[
"Object"
] | 2025-05-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Direct Preference Optimization",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Offline Reinforcement Learning Methods",
"parent": null
},
"name": "DPO",
"source_title": "Direct Preference Optimization: Your Language Model is Secretly a Reward Model",
"source_url": "https://arxiv.org/abs/2305.18290v3"
}
] |
https://paperswithcode.com/paper/emergence-of-functionally-differentiated
|
2507.12858
| null | null |
Emergence of Functionally Differentiated Structures via Mutual Information Optimization in Recurrent Neural Networks
|
Functional differentiation in the brain emerges as distinct regions specialize and is key to understanding brain function as a complex system. Previous research has modeled this process using artificial neural networks with specific constraints. Here, we propose a novel approach that induces functional differentiation in recurrent neural networks by minimizing mutual information between neural subgroups via mutual information neural estimation. We apply our method to a 2-bit working memory task and a chaotic signal separation task involving Lorenz and R\"ossler time series. Analysis of network performance, correlation patterns, and weight matrices reveals that mutual information minimization yields high task performance alongside clear functional modularity and moderate structural modularity. Importantly, our results show that functional differentiation, which is measured through correlation structures, emerges earlier than structural modularity defined by synaptic weights. This suggests that functional specialization precedes and probably drives structural reorganization within developing neural networks. Our findings provide new insights into how information-theoretic principles may govern the emergence of specialized functions and modular structures during artificial and biological brain development.
|
Analysis of network performance, correlation patterns, and weight matrices reveals that mutual information minimization yields high task performance alongside clear functional modularity and moderate structural modularity.
|
https://arxiv.org/abs/2507.12858v1
|
https://arxiv.org/pdf/2507.12858v1.pdf
| null |
[
"Yuki Tomoda",
"Ichiro Tsuda",
"Yutaka Yamaguti"
] |
[
"Time Series Analysis"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sgloc-semantic-localization-system-for-camera
|
2507.12027
| null | null |
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation
|
We propose SGLoc, a novel localization system that directly regresses camera poses from 3D Gaussian Splatting (3DGS) representation by leveraging semantic information. Our method utilizes the semantic relationship between 2D image and 3D scene representation to estimate the 6DoF pose without prior pose information. In this system, we introduce a multi-level pose regression strategy that progressively estimates and refines the pose of query image from the global 3DGS map, without requiring initial pose priors. Moreover, we introduce a semantic-based global retrieval algorithm that establishes correspondences between 2D (image) and 3D (3DGS map). By matching the extracted scene semantic descriptors of 2D query image and 3DGS semantic representation, we align the image with the local region of the global 3DGS map, thereby obtaining a coarse pose estimation. Subsequently, we refine the coarse pose by iteratively optimizing the difference between the query image and the rendered image from 3DGS. Our SGLoc demonstrates superior performance over baselines on 12scenes and 7scenes datasets, showing excellent capabilities in global localization without initial pose prior. Code will be available at https://github.com/IRMVLab/SGLoc.
| null |
https://arxiv.org/abs/2507.12027v1
|
https://arxiv.org/pdf/2507.12027v1.pdf
| null |
[
"Beining Xu",
"Siting Zhu",
"Hesheng Wang"
] |
[
"3DGS",
"Camera Pose Estimation",
"Pose Estimation"
] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
}
] |
https://paperswithcode.com/paper/dac-a-dynamic-attention-aware-approach-for
|
2507.11942
| null | null |
DAC: A Dynamic Attention-aware Approach for Task-Agnostic Prompt Compression
|
Task-agnostic prompt compression leverages the redundancy in natural language to reduce computational overhead and enhance information density within prompts, especially in long-context scenarios. Existing methods predominantly rely on information entropy as the metric to compress lexical units, aiming to achieve minimal information loss. However, these approaches overlook two critical aspects: (i) the importance of attention-critical tokens at the algorithmic level, and (ii) shifts in information entropy during the compression process. Motivated by these challenges, we propose a dynamic attention-aware approach for task-agnostic prompt compression (DAC). This approach effectively integrates entropy and attention information, dynamically sensing entropy shifts during compression to achieve fine-grained prompt compression. Extensive experiments across various domains, including LongBench, GSM8K, and BBH, show that DAC consistently yields robust and substantial improvements across a diverse range of tasks and LLMs, offering compelling evidence of its efficacy.
| null |
https://arxiv.org/abs/2507.11942v1
|
https://arxiv.org/pdf/2507.11942v1.pdf
| null |
[
"Yi Zhao",
"Zuchao Li",
"Hai Zhao",
"Baoyuan Qi",
"Guoming Liu"
] |
[
"GSM8K"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/think-clearly-improving-reasoning-via
|
2507.08806
| null | null |
Think Clearly: Improving Reasoning via Redundant Token Pruning
|
Recent large language models have shown promising capabilities in long-form reasoning, following structured chains of thought before arriving at a final answer. However, we observe that these reasoning paths tend to include substantial redundancy; analyzing attention patterns reveals that attention scores are widely scattered, particularly incorrect answers exhibit greater attention sparsity. In this paper, we demonstrate that deliberately removing this redundancy in the reasoning process significantly improves performance through clear thinking, i.e., removing distraction. Specifically, we systematically identify reasoning redundancy by measuring token-level attention scores to a special end-of-thinking token, which is appended to an explicit instruction inserted to conclude each intermediate reasoning step. Furthermore, we propose structure-aware pruning that prioritizes removing tokens in low-contributing reasoning chunks over individual tokens. After evicting redundant tokens, we remove the injected end-of-thinking instruction, then resume the reasoning generation. We demonstrate that our method significantly improves overall accuracy across reasoning-intensive benchmarks without any training involved. In particular, our method shows strong performance on challenging mathematical competition benchmarks such as AIME and AMC, where reasoning redundancy is more prevalent.
| null |
https://arxiv.org/abs/2507.08806v1
|
https://arxiv.org/pdf/2507.08806v1.pdf
| null |
[
"Daewon Choi",
"JiMin Lee",
"Jihoon Tack",
"Woomin Song",
"Saket Dingliwal",
"Sai Muralidhar Jayanthi",
"Bhavana Ganesh",
"Jinwoo Shin",
"Aram Galstyan",
"Sravan Babu Bodapati"
] |
[] | 2025-06-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
}
] |
https://paperswithcode.com/paper/ref-long-benchmarking-the-long-context
|
2507.09506
| null | null |
Ref-Long: Benchmarking the Long-context Referencing Capability of Long-context Language Models
|
Long-context language models (LCLMs) have exhibited impressive capabilities in long-context understanding tasks. Among these, long-context referencing -- a crucial task that requires LCLMs to attribute items of interest to specific parts of long-context data -- remains underexplored. To bridge this gap, this paper proposes Referencing Evaluation for Long-context Language Models (Ref-Long), a novel benchmark designed to assess the long-context referencing capability of LCLMs. Specifically, Ref-Long requires LCLMs to identify the indexes of documents that reference a specific key, emphasizing contextual relationships between the key and the documents over simple retrieval. Based on the task design, we construct three subsets ranging from synthetic to realistic scenarios to form the Ref-Long benchmark. Experimental results of 13 LCLMs reveal significant shortcomings in long-context referencing, even among advanced models like GPT-4o. To further investigate these challenges, we conduct comprehensive analyses, including human evaluations, task format adjustments, fine-tuning experiments, and error analyses, leading to several key insights. Our data and code can be found in https://github.com/wujunjie1998/Ref-Long.
| null |
https://arxiv.org/abs/2507.09506v1
|
https://arxiv.org/pdf/2507.09506v1.pdf
| null |
[
"Junjie Wu",
"Gefei Gu",
"Yanan Zheng",
"Dit-yan Yeung",
"Arman Cohan"
] |
[
"Attribute",
"Benchmarking",
"Long-Context Understanding"
] | 2025-07-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/krul-efficient-state-restoration-for-multi
|
2507.08045
| null | null |
Krul: Efficient State Restoration for Multi-turn Conversations with Dynamic Cross-layer KV Sharing
|
Efficient state restoration in multi-turn conversations with large language models (LLMs) remains a critical challenge, primarily due to the overhead of recomputing or loading full key-value (KV) caches for all historical tokens. To address this, existing approaches compress KV caches across adjacent layers with highly similar attention patterns. However, these methods often apply a fixed compression scheme across all conversations, selecting the same layer pairs for compression without considering conversation-specific attention dynamics. This static strategy overlooks variability in attention pattern similarity across different conversations, which can lead to noticeable accuracy degradation. We present Krul, a multi-turn LLM inference system that enables accurate and efficient KV cache restoration. Krul dynamically selects compression strategies based on attention similarity across layer pairs and uses a recomputation-loading pipeline to restore the KV cache. It introduces three key innovations: 1) a preemptive compression strategy selector to preserve critical context for future conversation turns and selects a customized strategy for the conversation; 2) a token-wise heterogeneous attention similarity estimator to mitigate the attention similarity computation and storage overhead during model generation; 3) a bubble-free restoration scheduler to reduce potential bubbles brought by the imbalance of recomputing and loading stream due to compressed KV caches. Empirical evaluations on real-world tasks demonstrate that Krul achieves a 1.5x-2.68x reduction in time-to-first-token (TTFT) and a 1.33x-2.35x reduction in KV cache storage compared to state-of-the-art methods without compromising generation quality.
| null |
https://arxiv.org/abs/2507.08045v1
|
https://arxiv.org/pdf/2507.08045v1.pdf
| null |
[
"Junyi Wen",
"Junyuan Liang",
"Zicong Hong",
"Wuhui Chen",
"Zibin Zheng"
] |
[] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/compactor-calibrated-query-agnostic-kv-cache
|
2507.08143
| null | null |
Compactor: Calibrated Query-Agnostic KV Cache Compression with Approximate Leverage Scores
|
Modern Large Language Models (LLMs) are increasingly trained to support very large context windows. Unfortunately the ability to use long contexts in generation is complicated by the large memory requirement of the KV cache, which scales linearly with the context length. This memory footprint is often the dominant resource bottleneck in real-world deployments, limiting throughput and increasing serving cost. One way to address this is by compressing the KV cache, which can be done either with knowledge of the question being asked (query-aware) or without knowledge of the query (query-agnostic). We present Compactor, a parameter-free, query-agnostic KV compression strategy that uses approximate leverage scores to determine token importance. We show that Compactor can achieve the same performance as competing methods while retaining 1/2 the tokens in both synthetic and real-world context tasks, with minimal computational overhead. We further introduce a procedure for context-calibrated compression, which allows one to infer the maximum compression ratio a given context can support. Using context-calibrated compression, we show that Compactor achieves full KV performance on Longbench while reducing the KV memory burden by 63%, on average. To demonstrate the efficacy and generalizability of our approach, we apply Compactor to 27 synthetic and real-world tasks from RULER and Longbench, with models from both the Qwen 2.5 and Llama 3.1 families.
| null |
https://arxiv.org/abs/2507.08143v1
|
https://arxiv.org/pdf/2507.08143v1.pdf
| null |
[
"Vivek Chari",
"Benjamin Van Durme"
] |
[] | 2025-07-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**LLaMA** is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture with various improvements that were subsequently proposed. The main difference with the original architecture are listed below.\r\n\r\n- RMSNorm normalizing function is used to improve the training stability, by normalizing the input of each transformer sub-layer, instead of normalizing the output.\r\n- The ReLU non-linearity is replaced by the SwiGLU activation function to improve performance.\r\n- Absolute positional embeddings are removed and instead rotary positional embeddings (RoPE) are added at each layer of the network.",
"full_name": "LLaMA",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "LLaMA",
"source_title": "LLaMA: Open and Efficient Foundation Language Models",
"source_url": "https://arxiv.org/abs/2302.13971v1"
}
] |
https://paperswithcode.com/paper/gemini-2-5-pushing-the-frontier-with-advanced
|
2507.06261
| null | null |
Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities
|
In this report, we introduce the Gemini 2.X model family: Gemini 2.5 Pro and Gemini 2.5 Flash, as well as our earlier Gemini 2.0 Flash and Flash-Lite models. Gemini 2.5 Pro is our most capable model yet, achieving SoTA performance on frontier coding and reasoning benchmarks. In addition to its incredible coding and reasoning skills, Gemini 2.5 Pro is a thinking model that excels at multimodal understanding and it is now able to process up to 3 hours of video content. Its unique combination of long context, multimodal and reasoning capabilities can be combined to unlock new agentic workflows. Gemini 2.5 Flash provides excellent reasoning abilities at a fraction of the compute and latency requirements and Gemini 2.0 Flash and Flash-Lite provide high performance at low latency and cost. Taken together, the Gemini 2.X model generation spans the full Pareto frontier of model capability vs cost, allowing users to explore the boundaries of what is possible with complex agentic problem solving.
| null |
https://arxiv.org/abs/2507.06261v3
|
https://arxiv.org/pdf/2507.06261v3.pdf
| null |
[
"Gheorghe Comanici",
"Eric Bieber",
"Mike Schaekermann",
"Ice Pasupat",
"Noveen Sachdeva",
"Inderjit Dhillon",
"Marcel Blistein",
"Ori Ram",
"Dan Zhang",
"Evan Rosen",
"Luke Marris",
"Sam Petulla",
"Colin Gaffney",
"Asaf Aharoni",
"Nathan Lintz",
"Tiago Cardal Pais",
"Henrik Jacobsson",
"Idan Szpektor",
"Nan-Jiang Jiang",
"Krishna Haridasan",
"Ahmed Omran",
"Nikunj Saunshi",
"Dara Bahri",
"Gaurav Mishra",
"Eric Chu",
"Toby Boyd",
"Brad Hekman",
"Aaron Parisi",
"Chaoyi Zhang",
"Kornraphop Kawintiranon",
"Tania Bedrax-Weiss",
"Oliver Wang",
"Ya Xu",
"Ollie Purkiss",
"Uri Mendlovic",
"Ilaï Deutel",
"Nam Nguyen",
"Adam Langley",
"Flip Korn",
"Lucia Rossazza",
"Alexandre Ramé",
"Sagar Waghmare",
"Helen Miller",
"Vaishakh Keshava",
"Ying Jian",
"Xiaofan Zhang",
"Raluca Ada Popa",
"Kedar Dhamdhere",
"Blaž Bratanič",
"Kyuyeun Kim",
"Terry Koo",
"Ferran Alet",
"Yi-Ting Chen",
"Arsha Nagrani",
"Hannah Muckenhirn",
"Zhiyuan Zhang",
"Corbin Quick",
"Filip Pavetić",
"Duc Dung Nguyen",
"Joao Carreira",
"Michael Elabd",
"Haroon Qureshi",
"Fabian Mentzer",
"Yao-Yuan Yang",
"Danielle Eisenbud",
"Anmol Gulati",
"Ellie Talius",
"Eric Ni",
"Sahra Ghalebikesabi",
"Edouard Yvinec",
"Alaa Saade",
"Thatcher Ulrich",
"Lorenzo Blanco",
"Dan A. Calian",
"Muhuan Huang",
"Aäron van den Oord",
"Naman Goyal",
"Terry Chen",
"Praynaa Rawlani",
"Christian Schallhart",
"Swachhand Lokhande",
"Xianghong Luo",
"Jyn Shan",
"Ceslee Montgomery",
"Victoria Krakovna",
"Federico Piccinini",
"Omer Barak",
"Jingyu Cui",
"Yiling Jia",
"Mikhail Dektiarev",
"Alexey Kolganov",
"Shiyu Huang",
"Zhe Chen",
"Xingyu Wang",
"Jessica Austin",
"Peter de Boursac",
"Evgeny Sluzhaev",
"Frank Ding",
"Huijian Li",
"Surya Bhupatiraju",
"Mohit Agarwal",
"Sławek Kwasiborski",
"Paramjit Sandhu",
"Patrick Siegler",
"Ahmet Iscen",
"Eyal Ben-David",
"Shiraz Butt",
"Miltos Allamanis",
"Seth Benjamin",
"Robert Busa-Fekete",
"Felix Hernandez-Campos",
"Sasha Goldshtein",
"Matt Dibb",
"Weiyang Zhang",
"Annie Marsden",
"Carey Radebaugh",
"Stephen Roller",
"Abhishek Nayyar",
"Jacob Austin",
"Tayfun Terzi",
"Bhargav Kanagal Shamanna",
"Pete Shaw",
"Aayush Singh",
"Florian Luisier",
"Artur Mendonça",
"Vaibhav Aggarwal",
"Larisa Markeeva",
"Claudio Fantacci",
"Sergey Brin",
"HyunJeong Choe",
"Guanyu Wang",
"Hartwig Adam",
"Avigail Dabush",
"Tatsuya Kiyono",
"Eyal Marcus",
"Jeremy Cole",
"Theophane Weber",
"Hongrae Lee",
"Ronny Huang",
"Alex Muzio",
"Leandro Kieliger",
"Maigo Le",
"Courtney Biles",
"Long Le",
"Archit Sharma",
"Chengrun Yang",
"Avery Lamp",
"Dave Dopson",
"Nate Hurley",
"Katrina",
"Zhihao Shan",
"Shuang Song",
"Jiewen Tan",
"Alexandre Senges",
"George Zhang",
"Chong You",
"Yennie Jun",
"David Raposo",
"Susanna Ricco",
"Xuan Yang",
"WeiJie Chen",
"Prakhar Gupta",
"Arthur Szlam",
"Kevin Villela",
"Chun-Sung Ferng",
"Daniel Kasenberg",
"Chen Liang",
"Rui Zhu",
"Arunachalam Narayanaswamy",
"Florence Perot",
"Paul Pucciarelli",
"Anna Shekhawat",
"Alexey Stern",
"Rishikesh Ingale",
"Stefani Karp",
"Sanaz Bahargam",
"Adrian Goedeckemeyer",
"Jie Han",
"Sicheng Li",
"Andrea Tacchetti",
"Dian Yu",
"Abhishek Chakladar",
"Zhiying Zhang",
"Mona El Mahdy",
"Xu Gao",
"Dale Johnson",
"Samrat Phatale",
"AJ Piergiovanni",
"Hyeontaek Lim",
"Clement Farabet",
"Carl Lebsack",
"Theo Guidroz",
"John Blitzer",
"Nico Duduta",
"David Madras",
"Steve Li",
"Daniel von Dincklage",
"Xin Li",
"Mahdis Mahdieh",
"George Tucker",
"Ganesh Jawahar",
"Owen Xiao",
"Danny Tarlow",
"Robert Geirhos",
"Noam Velan",
"Daniel Vlasic",
"Kalesha Bullard",
"SK Park",
"Nishesh Gupta",
"Kellie Webster",
"Ayal Hitron",
"Jieming Mao",
"Julian Eisenschlos",
"Laurel Prince",
"Nina D'Souza",
"Kelvin Zheng",
"Sara Nasso",
"Gabriela Botea",
"Carl Doersch",
"Caglar Unlu",
"Chris Alberti",
"Alexey Svyatkovskiy",
"Ankita Goel",
"Krzysztof Choromanski",
"Pan-Pan Jiang",
"Richard Nguyen",
"Four Flynn",
"Daria Ćurko",
"Peter Chen",
"Nicholas Roth",
"Kieran Milan",
"Caleb Habtegebriel",
"Shashi Narayan",
"Michael Moffitt",
"Jake Marcus",
"Thomas Anthony",
"Brendan Mcmahan",
"Gowoon Cheon",
"Ruibo Liu",
"Megan Barnes",
"Lukasz Lew",
"Rebeca Santamaria-Fernandez",
"Mayank Upadhyay",
"Arjun Akula",
"Arnar Mar Hrafnkelsson",
"Alvaro Caceres",
"Andrew Bunner",
"Michal Sokolik",
"Subha Puttagunta",
"Lawrence Moore",
"Berivan Isik",
"Jay Hartford",
"Lawrence Chan",
"Pradeep Shenoy",
"Dan Holtmann-Rice",
"Jane Park",
"Fabio Viola",
"Alex Salcianu",
"Sujeevan Rajayogam",
"Ian Stewart-Binks",
"Zelin Wu",
"Richard Everett",
"Xi Xiong",
"Pierre-Antoine Manzagol",
"Gary Leung",
"Carl Saroufim",
"Bo Pang",
"Dawid Wegner",
"George Papamakarios",
"Jennimaria Palomaki",
"Helena Pankov",
"Guangda Lai",
"Guilherme Tubone",
"Shubin Zhao",
"Theofilos Strinopoulos",
"Seth Neel",
"Mingqiu Wang",
"Joe Kelley",
"Li Li",
"Pingmei Xu",
"Anitha Vijayakumar",
"Andrea D'olimpio",
"Omer Levy",
"Massimo Nicosia",
"Grigory Rozhdestvenskiy",
"Ni Lao",
"Sirui Xie",
"Yash Katariya",
"Jon Simon",
"Sanjiv Kumar",
"Florian Hartmann",
"Michael Kilgore",
"Jinhyuk Lee",
"Aroma Mahendru",
"Roman Ring",
"Tom Hennigan",
"Fiona Lang",
"Colin Cherry",
"David Steiner",
"Dawsen Hwang",
"Ray Smith",
"Pidong Wang",
"Jeremy Chen",
"Ming-Hsuan Yang",
"Sam Kwei",
"Philippe Schlattner",
"Donnie Kim",
"Ganesh Poomal Girirajan",
"Nikola Momchev",
"Ayushi Agarwal",
"Xingyi Zhou",
"Ilkin Safarli",
"Zachary Garrett",
"AJ Pierigiovanni",
"Sarthak Jauhari",
"Alif Raditya Rochman",
"Shikhar Vashishth",
"Quan Yuan",
"Christof Angermueller",
"Jon Blanton",
"Xinying Song",
"Nitesh Bharadwaj Gundavarapu",
"Thi Avrahami",
"Maxine Deines",
"Subhrajit Roy",
"Manish Gupta",
"Christopher Semturs",
"Shobha Vasudevan",
"Aditya Srikanth Veerubhotla",
"Shriya Sharma",
"Josh Jacob",
"Zhen Yang",
"Andreas Terzis",
"Dan Karliner",
"Auriel Wright",
"Tania Rojas-Esponda",
"Ashley Brown",
"Abhijit Guha Roy",
"Pawan Dogra",
"Andrei Kapishnikov",
"Peter Young",
"Wendy Kan",
"Vinodh Kumar Rajendran",
"Maria Ivanova",
"Salil Deshmukh",
"Chia-Hua Ho",
"Mike Kwong",
"Stav Ginzburg",
"Annie Louis",
"KP Sawhney",
"Slav Petrov",
"Jing Xie",
"Yunfei Bai",
"Georgi Stoyanov",
"Alex Fabrikant",
"Rajesh Jayaram",
"Yuqi Li",
"Joe Heyward",
"Justin Gilmer",
"Yaqing Wang",
"Radu Soricut",
"Luyang Liu",
"Qingnan Duan",
"Jamie Hayes",
"Maura O'Brien",
"Gaurav Singh Tomar",
"Sivan Eiger",
"Bahar Fatemi",
"Jeffrey Hui",
"Catarina Barros",
"Adaeze Chukwuka",
"Alena Butryna",
"Saksham Thakur",
"Austin Huang",
"Zhufeng Pan",
"Haotian Tang",
"Serkan Cabi",
"Tulsee Doshi",
"Michiel Bakker",
"Sumit Bagri",
"Ruy Ley-Wild",
"Adam Lelkes",
"Jennie Lees",
"Patrick Kane",
"David Greene",
"Shimu Wu",
"Jörg Bornschein",
"Gabriela Surita",
"Sarah Hodkinson",
"Fangtao Li",
"Chris Hidey",
"Sébastien Pereira",
"Sean Ammirati",
"Phillip Lippe",
"Adam Kraft",
"Pu Han",
"Sebastian Gerlach",
"Zifeng Wang",
"Liviu Panait",
"Feng Han",
"Brian Farris",
"Yingying Bi",
"Hannah DeBalsi",
"Miaosen Wang",
"Gladys Tyen",
"James Cohan",
"Susan Zhang",
"Jarred Barber",
"Da-Woon Chung",
"Jaeyoun Kim",
"Markus Kunesch",
"Steven Pecht",
"Nami Akazawa",
"Abe Friesen",
"James Lyon",
"Ali Eslami",
"Junru Wu",
"Jie Tan",
"Yue Song",
"Ravi Kumar",
"Chris Welty",
"Ilia Akolzin",
"Gena Gibson",
"Sean Augenstein",
"Arjun Pillai",
"Nancy Yuen",
"Du Phan",
"Xin Wang",
"Iain Barr",
"Heiga Zen",
"Nan Hua",
"Casper Liu",
"Jilei Wang",
"Tanuj Bhatia",
"Hao Xu",
"Oded Elyada",
"Pushmeet Kohli",
"Mirek Olšák",
"Ke Chen",
"Azalia Mirhoseini",
"Noam Shazeer",
"Shoshana Jakobovits",
"Maggie Tran",
"Nolan Ramsden",
"Tarun Bharti",
"Fred Alcober",
"Yunjie Li",
"Shilpa Shetty",
"Jing Chen",
"Dmitry Kalashnikov",
"Megha Nawhal",
"Sercan Arik",
"Hanwen Chen",
"Michiel Blokzijl",
"Shubham Gupta",
"James Rubin",
"Rigel Swavely",
"Sophie Bridgers",
"Ian Gemp",
"Chen Su",
"Arun Suggala",
"Juliette Pluto",
"Mary Cassin",
"Alain Vaucher",
"Kaiyang Ji",
"Jiahao Cai",
"Andrew Audibert",
"Animesh Sinha",
"David Tian",
"Efrat Farkash",
"Amy Hua",
"Jilin Chen",
"Duc-Hieu Tran",
"Edward Loper",
"Nicole Brichtova",
"Lara McConnaughey",
"Ballie Sandhu",
"Robert Leland",
"Doug DeCarlo",
"Andrew Over",
"James Huang",
"Xing Wu",
"Connie Fan",
"Eric Li",
"Yun Lei",
"Deepak Sharma",
"Cosmin Paduraru",
"Luo Yu",
"Matko Bošnjak",
"Phuong Dao",
"Min Choi",
"Sneha Kudugunta",
"Jakub Adamek",
"Carlos Guía",
"Ali Khodaei",
"Jie Feng",
"Wenjun Zeng",
"David Welling",
"Sandeep Tata",
"Christina Butterfield",
"Andrey Vlasov",
"Seliem El-Sayed",
"Swaroop Mishra",
"Tara Sainath",
"Shentao Yang",
"RJ Skerry-Ryan",
"Jeremy Shar",
"Robert Berry",
"Arunkumar Rajendran",
"Arun Kandoor",
"Andrea Burns",
"Deepali Jain",
"Tom Stone",
"Wonpyo Park",
"Shibo Wang",
"Albin Cassirer",
"Guohui Wang",
"Hayato Kobayashi",
"Sergey Rogulenko",
"Vineetha Govindaraj",
"Mikołaj Rybiński",
"Nadav Olmert",
"Colin Evans",
"Po-Sen Huang",
"Kelvin Xu",
"Premal Shah",
"Terry Thurk",
"Caitlin Sikora",
"Mu Cai",
"Jin Xie",
"Elahe Dabir",
"Saloni Shah",
"Norbert Kalb",
"Carrie Zhang",
"Shruthi Prabhakara",
"Amit Sabne",
"Artiom Myaskovsky",
"Vikas Raunak",
"Blanca Huergo",
"Behnam Neyshabur",
"Jon Clark",
"Ye Zhang",
"Shankar Krishnan",
"Eden Cohen",
"Dinesh Tewari",
"James Lottes",
"Yumeya Yamamori",
"Hui Li",
"Mohamed Elhawaty",
"Ada Maksutaj Oflazer",
"Adrià Recasens",
"Sheryl Luo",
"Duy Nguyen",
"Taylor Bos",
"Kalyan Andra",
"Ana Salazar",
"Ed Chi",
"Jeongwoo Ko",
"Matt Ginsberg",
"Anders Andreassen",
"Anian Ruoss",
"Todor Davchev",
"Elnaz Davoodi",
"Chenxi Liu",
"Min Kim",
"Santiago Ontanon",
"Chi Ming To",
"Dawei Jia",
"Rosemary Ke",
"Jing Wang",
"Anna Korsun",
"Moran Ambar",
"Ilya Kornakov",
"Irene Giannoumis",
"Toni Creswell",
"Denny Zhou",
"Yi Su",
"Ishaan Watts",
"Aleksandr Zaks",
"Evgenii Eltyshev",
"Ziqiang Feng",
"Sidharth Mudgal",
"Alex Kaskasoli",
"Juliette Love",
"Kingshuk Dasgupta",
"Sam Shleifer",
"Richard Green",
"Sungyong Seo",
"Chansoo Lee",
"Dale Webster",
"Prakash Shroff",
"Ganna Raboshchuk",
"Isabel Leal",
"James Manyika",
"Sofia Erell",
"Daniel Murphy",
"Zhisheng Xiao",
"Anton Bulyenov",
"Julian Walker",
"Mark Collier",
"Matej Kastelic",
"Nelson George",
"Sushant Prakash",
"Sailesh Sidhwani",
"Alexey Frolov",
"Steven Hansen",
"Petko Georgiev",
"Tiberiu Sosea",
"Chris Apps",
"Aishwarya Kamath",
"David Reid",
"Emma Cooney",
"Charlotte Magister",
"Oriana Riva",
"Alec Go",
"Pu-Chin Chen",
"Sebastian Krause",
"Nir Levine",
"Marco Fornoni",
"Ilya Figotin",
"Nick Roy",
"Parsa Mahmoudieh",
"Vladimir Magay",
"Mukundan Madhavan",
"Jin Miao",
"Jianmo Ni",
"Yasuhisa Fujii",
"Ian Chou",
"George Scrivener",
"Zak Tsai",
"Siobhan Mcloughlin",
"Jeremy Selier",
"Sandra Lefdal",
"Jeffrey Zhao",
"Abhijit Karmarkar",
"Kushal Chauhan",
"Shivanker Goel",
"Zhaoyi Zhang",
"Vihan Jain",
"Parisa Haghani",
"Mostafa Dehghani",
"Jacob Scott",
"Erin Farnese",
"Anastasija Ilić",
"Steven Baker",
"Julia Pawar",
"Li Zhong",
"Josh Camp",
"Yoel Zeldes",
"Shravya Shetty",
"Anand Iyer",
"Vít Listík",
"Jiaxian Guo",
"Luming Tang",
"Mark Geller",
"Simon Bucher",
"Yifan Ding",
"Hongzhi Shi",
"Carrie Muir",
"Dominik Grewe",
"Ramy Eskander",
"Octavio Ponce",
"Boqing Gong",
"Derek Gasaway",
"Samira Khan",
"Umang Gupta",
"Angelos Filos",
"Weicheng Kuo",
"Klemen Kloboves",
"Jennifer Beattie",
"Christian Wright",
"Leon Li",
"Alicia Jin",
"Sandeep Mariserla",
"Miteyan Patel",
"Jens Heitkaemper",
"Dilip Krishnan",
"Vivek Sharma",
"David Bieber",
"Christian Frank",
"John Lambert",
"Paul Caron",
"Martin Polacek",
"Mai Giménez",
"Himadri Choudhury",
"Xing Yu",
"Sasan Tavakkol",
"Arun Ahuja",
"Franz Och",
"Rodolphe Jenatton",
"Wojtek Skut",
"Bryan Richter",
"David Gaddy",
"Andy Ly",
"Misha Bilenko",
"Megh Umekar",
"Ethan Liang",
"Martin Sevenich",
"Mandar Joshi",
"Hassan Mansoor",
"Rebecca Lin",
"Sumit Sanghai",
"Abhimanyu Singh",
"Xiaowei Li",
"Sudheendra Vijayanarasimhan",
"Zaheer Abbas",
"Yonatan Bitton",
"Hansa Srinivasan",
"Manish Reddy Vuyyuru",
"Alexander Frömmgen",
"Yanhua Sun",
"Ralph Leith",
"Alfonso Castaño",
"DJ Strouse",
"Le Yan",
"Austin Kyker",
"Satish Kambala",
"Mary Jasarevic",
"Thibault Sellam",
"Chao Jia",
"Alexander Pritzel",
"Raghavender R",
"Huizhong Chen",
"Natalie Clay",
"Sudeep Gandhe",
"Sean Kirmani",
"Sayna Ebrahimi",
"Hannah Kirkwood",
"Jonathan Mallinson",
"Chao Wang",
"Adnan Ozturel",
"Kuo Lin",
"Shyam Upadhyay",
"Vincent Cohen-Addad",
"Sean Purser-haskell",
"Yichong Xu",
"Ebrahim Songhori",
"Babi Seal",
"Alberto Magni",
"Almog Gueta",
"Tingting Zou",
"Guru Guruganesh",
"Thais Kagohara",
"Hung Nguyen",
"Khalid Salama",
"Alejandro Cruzado Ruiz",
"Justin Frye",
"Zhenkai Zhu",
"Matthias Lochbrunner",
"Simon Osindero",
"Wentao Yuan",
"Lisa Lee",
"Aman Prasad",
"Lam Nguyen Thiet",
"Daniele Calandriello",
"Victor Stone",
"Qixuan Feng",
"Han Ke",
"Maria Voitovich",
"Geta Sampemane",
"Lewis Chiang",
"Ling Wu",
"Alexander Bykovsky",
"Matt Young",
"Luke Vilnis",
"Ishita Dasgupta",
"Aditya Chawla",
"Qin Cao",
"Bowen Liang",
"Daniel Toyama",
"Szabolcs Payrits",
"Anca Stefanoiu",
"Dimitrios Vytiniotis",
"Ankesh Anand",
"Tianxiao Shen",
"Blagoj Mitrevski",
"Michael Tschannen",
"Sreenivas Gollapudi",
"Aishwarya P S",
"José Leal",
"Zhe Shen",
"Han Fu",
"Wei Wang",
"Arvind Kannan",
"Doron Kukliansky",
"Sergey Yaroshenko",
"Svetlana Grant",
"Umesh Telang",
"David Wood",
"Alexandra Chronopoulou",
"Alexandru Ţifrea",
"Tao Zhou",
"Tony Nguyễn",
"Muge Ersoy",
"Anima Singh",
"Meiyan Xie",
"Emanuel Taropa",
"Woohyun Han",
"Eirikur Agustsson",
"Andrei Sozanschi",
"Hui Peng",
"Alex Chen",
"Yoel Drori",
"Efren Robles",
"Yang Gao",
"Xerxes Dotiwalla",
"Ying Chen",
"Anudhyan Boral",
"Alexei Bendebury",
"John Nham",
"Chris Tar",
"Luis Castro",
"Jiepu Jiang",
"Canoee Liu",
"Felix Halim",
"Jinoo Baek",
"Andy Wan",
"Jeremiah Liu",
"Yuan Cao",
"Shengyang Dai",
"Trilok Acharya",
"Ruoxi Sun",
"Fuzhao Xue",
"Saket Joshi",
"Morgane Lustman",
"Yongqin Xian",
"Rishabh Joshi",
"Deep Karkhanis",
"Nora Kassner",
"Jamie Hall",
"Xiangzhuo Ding",
"Gan Song",
"Gang Li",
"Chen Zhu",
"Yana Kulizhskaya",
"Bin Ni",
"Alexey Vlaskin",
"Solomon Demmessie",
"Lucio Dery",
"Salah Zaiem",
"Yanping Huang",
"Cindy Fan",
"Felix Gimeno",
"Ananth Balashankar",
"Koji Kojima",
"Hagai Taitelbaum",
"Maya Meng",
"Dero Gharibian",
"Sahil Singla",
"Wei Chen",
"Ambrose Slone",
"Guanjie Chen",
"Sujee Rajayogam",
"Max Schumacher",
"Suyog Kotecha",
"Rory Blevins",
"Qifei Wang",
"Mor Hazan Taege",
"Alex Morris",
"Xin Liu",
"Fayaz Jamil",
"Richard Zhang",
"Pratik Joshi",
"Ben Ingram",
"Tyler Liechty",
"Ahmed Eleryan",
"Scott Baird",
"Alex Grills",
"Gagan Bansal",
"Shan Han",
"Kiran Yalasangi",
"Shawn Xu",
"Majd Al Merey",
"Isabel Gao",
"Felix Weissenberger",
"Igor Karpov",
"Robert Riachi",
"Ankit Anand",
"Gautam Prasad",
"Kay Lamerigts",
"Reid Hayes",
"Jamie Rogers",
"Mandy Guo",
"Ashish Shenoy",
"Qiong Hu",
"Kyle He",
"Yuchen Liu",
"Polina Zablotskaia",
"Sagar Gubbi",
"Yifan Chang",
"Jay Pavagadhi",
"Kristian Kjems",
"Archita Vadali",
"Diego Machado",
"Yeqing Li",
"Renshen Wang",
"Dipankar Ghosh",
"Aahil Mehta",
"Dana Alon",
"George Polovets",
"Alessio Tonioni",
"Nate Kushman",
"Joel D'sa",
"Lin Zhuo",
"Allen Wu",
"Rohin Shah",
"John Youssef",
"Jiayu Ye",
"Justin Snyder",
"Karel Lenc",
"Senaka Buthpitiya",
"Matthew Tung",
"Jichuan Chang",
"Tao Chen",
"David Saxton",
"Jenny Lee",
"Lydia Lihui Zhang",
"James Qin",
"Prabakar Radhakrishnan",
"Maxwell Chen",
"Piotr Ambroszczyk",
"Metin Toksoz-Exley",
"Yan Zhong",
"Nitzan Katz",
"Brendan O'Donoghue",
"Tamara von Glehn",
"Adi Gerzi Rosenthal",
"Aga Świetlik",
"Xiaokai Zhao",
"Nick Fernando",
"Jinliang Wei",
"Jieru Mei",
"Sergei Vassilvitskii",
"Diego Cedillo",
"Pranjal Awasthi",
"Hui Zheng",
"Koray Kavukcuoglu",
"Itay Laish",
"Joseph Pagadora",
"Marc Brockschmidt",
"Christopher A. Choquette-Choo",
"Arunkumar Byravan",
"Yifeng Lu",
"Xu Chen",
"Mia Chen",
"Kenton Lee",
"Rama Pasumarthi",
"Sijal Bhatnagar",
"Aditya Shah",
"Qiyin Wu",
"Zhuoyuan Chen",
"Zack Nado",
"Bartek Perz",
"Zixuan Jiang",
"David Kao",
"Ganesh Mallya",
"Nino Vieillard",
"Lantao Mei",
"Sertan Girgin",
"Mandy Jordan",
"Yeongil Ko",
"Alekh Agarwal",
"Yaxin Liu",
"Yasemin Altun",
"Raoul de Liedekerke",
"Anastasios Kementsietsidis",
"Daiyi Peng",
"Dangyi Liu",
"Utku Evci",
"Peter Humphreys",
"Austin Tarango",
"Xiang Deng",
"Yoad Lewenberg",
"Kevin Aydin",
"Chengda Wu",
"Bhavishya Mittal",
"Tsendsuren Munkhdalai",
"Kleopatra Chatziprimou",
"Rodrigo Benenson",
"Uri First",
"Xiao Ma",
"Jinning Li",
"Armand Joulin",
"Hamish Tomlinson",
"Tingnan Zhang",
"Milad Nasr",
"Zhi Hong",
"Michaël Sander",
"Lisa Anne Hendricks",
"Anuj Sharma",
"Andrew Bolt",
"Eszter Vértes",
"Jiri Simsa",
"Tomer Levinboim",
"Olcan Sercinoglu",
"Divyansh Shukla",
"Austin Wu",
"Craig Swanson",
"Danny Vainstein",
"Fan Bu",
"Bo Wang",
"Ryan Julian",
"Charles Yoon",
"Sergei Lebedev",
"Antonious Girgis",
"Bernd Bandemer",
"David Du",
"Todd Wang",
"Xi Chen",
"Ying Xiao",
"Peggy Lu",
"Natalie Ha",
"Vlad Ionescu",
"Simon Rowe",
"Josip Matak",
"Federico Lebron",
"Andreas Steiner",
"Lalit Jain",
"Manaal Faruqui",
"Nicolas Lacasse",
"Georgie Evans",
"Neesha Subramaniam",
"Dean Reich",
"Giulia Vezzani",
"Aditya Pandey",
"Joe Stanton",
"Tianhao Zhou",
"Liam McCafferty",
"Henry Griffiths",
"Verena Rieser",
"Soheil Hassas Yeganeh",
"Eleftheria Briakou",
"Lu Huang",
"Zichuan Wei",
"Liangchen Luo",
"Erik Jue",
"Gabby Wang",
"Victor Cotruta",
"Myriam Khan",
"Jongbin Park",
"Qiuchen Guo",
"Peiran Li",
"Rong Rong",
"Diego Antognini",
"Anastasia Petrushkina",
"Chetan Tekur",
"Eli Collins",
"Parul Bhatia",
"Chester Kwak",
"Wenhu Chen",
"Arvind Neelakantan",
"Immanuel Odisho",
"Sheng Peng",
"Vincent Nallatamby",
"Vaibhav Tulsyan",
"Fabian Pedregosa",
"Peng Xu",
"Raymond Lin",
"Yulong Wang",
"Emma Wang",
"Sholto Douglas",
"Reut Tsarfaty",
"Elena Gribovskaya",
"Renga Aravamudhan",
"Manu Agarwal",
"Mara Finkelstein",
"Qiao Zhang",
"Elizabeth Cole",
"Phil Crone",
"Sarmishta Velury",
"Anil Das",
"Chris Sauer",
"Luyao Xu",
"Danfeng Qin",
"Chenjie Gu",
"Dror Marcus",
"CJ Zheng",
"Wouter Van Gansbeke",
"Sobhan Miryoosefi",
"Haitian Sun",
"Yaguang Li",
"Charlie Chen",
"Jae Yoo",
"Pavel Dubov",
"Alex Tomala",
"Adams Yu",
"Paweł Wesołowski",
"Alok Gunjan",
"Eddie Cao",
"Jiaming Luo",
"Nikhil Sethi",
"Arkadiusz Socala",
"Laura Graesser",
"Tomas Kocisky",
"Arturo BC",
"Minmin Chen",
"Edward Lee",
"Sophie Wang",
"Weize Kong",
"Qiantong Xu",
"Nilesh Tripuraneni",
"Yiming Li",
"Xinxin Yu",
"Allen Porter",
"Paul Voigtlaender",
"Biao Zhang",
"Arpi Vezer",
"Sarah York",
"Qing Wei",
"Geoffrey Cideron",
"Mark Kurzeja",
"Seungyeon Kim",
"Benny Li",
"Angéline Pouget",
"Hyo Lee",
"Kaspar Daugaard",
"Yang Li",
"Dave Uthus",
"Aditya Siddhant",
"Paul Cavallaro",
"Sriram Ganapathy",
"Maulik Shah",
"Rolf Jagerman",
"Jeff Stanway",
"Piermaria Mendolicchio",
"Li Xiao",
"Kayi Lee",
"Tara Thompson",
"Shubham Milind Phal",
"Jason Chase",
"Sun Jae Lee",
"Adrian N Reyes",
"Disha Shrivastava",
"Zhen Qin",
"Roykrong Sukkerd",
"Seth Odoom",
"Lior Madmoni",
"John Aslanides",
"Jonathan Herzig",
"Elena Pochernina",
"Sheng Zhang",
"Parker Barnes",
"Daisuke Ikeda",
"Qiujia Li",
"Shuo-Yiin Chang",
"Shakir Mohamed",
"Jim Sproch",
"Richard Powell",
"Bidisha Samanta",
"Domagoj Ćevid",
"Anton Kovsharov",
"Shrestha Basu Mallick",
"Srinivas Tadepalli",
"Anne Zheng",
"Kareem Ayoub",
"Andreas Noever",
"Christian Reisswig",
"Zhuo Xu",
"Junhyuk Oh",
"Martin Matysiak",
"Tim Blyth",
"Shereen Ashraf",
"Julien Amelot",
"Boone Severson",
"Michele Bevilacqua",
"Motoki Sano",
"Ethan Dyer",
"Ofir Roval",
"Anu Sinha",
"Yin Zhong",
"Sagi Perel",
"Tea Sabolić",
"Johannes Mauerer",
"Willi Gierke",
"Mauro Verzetti",
"Rodrigo Cabrera",
"Alvin Abdagic",
"Steven Hemingray",
"Austin Stone",
"Jong Lee",
"Farooq Ahmad",
"Karthik Raman",
"Lior Shani",
"Jonathan Lai",
"Orhan Firat",
"Nathan Waters",
"Eric Ge",
"Mo Shomrat",
"Himanshu Gupta",
"Rajeev Aggarwal",
"Tom Hudson",
"Bill Jia",
"Simon Baumgartner",
"Palak Jain",
"Joe Kovac",
"Junehyuk Jung",
"Ante Žužul",
"Will Truong",
"Morteza Zadimoghaddam",
"Songyou Peng",
"Marco Liang",
"Rachel Sterneck",
"Balaji Lakshminarayanan",
"Machel Reid",
"Oliver Woodman",
"Tong Zhou",
"Jianling Wang",
"Vincent Coriou",
"Arjun Narayanan",
"Jay Hoover",
"Yenai Ma",
"Apoorv Jindal",
"Clayton Sanford",
"Doug Reid",
"Swaroop Ramaswamy",
"Alex Kurakin",
"Roland Zimmermann",
"Yana Lunts",
"Dragos Dena",
"Zalán Borsos",
"Vered Cohen",
"Shujian Zhang",
"Will Grathwohl",
"Robert Dadashi",
"Morgan Redshaw",
"Joshua Kessinger",
"Julian Odell",
"Silvano Bonacina",
"Zihang Dai",
"Grace Chen",
"Ayush Dubey",
"Pablo Sprechmann",
"Mantas Pajarskas",
"Wenxuan Zhou",
"Niharika Ahuja",
"Tara Thomas",
"Martin Nikoltchev",
"Matija Kecman",
"Bharath Mankalale",
"Andrey Ryabtsev",
"Jennifer She",
"Christian Walder",
"Jiaming Shen",
"Lu Li",
"Carolina Parada",
"Sheena Panthaplackel",
"Okwan Kwon",
"Matt Lawlor",
"Utsav Prabhu",
"Yannick Schroecker",
"Marc'Aurelio Ranzato",
"Pete Blois",
"Iurii Kemaev",
"Ting Yu",
"Dmitry Lepikhin",
"Hao Xiong",
"Sahand Sharifzadeh",
"Oleaser Johnson",
"Jeremiah Willcock",
"Rui Yao",
"Greg Farquhar",
"Sujoy Basu",
"Hidetoshi Shimokawa",
"Nina Anderson",
"Haiguang Li",
"Khiem Pham",
"Yizhong Liang",
"Sebastian Borgeaud",
"Alexandre Moufarek",
"Hideto Kazawa",
"Blair Kutzman",
"Marcin Sieniek",
"Sara Smoot",
"Ruth Wang",
"Natalie Axelsson",
"Nova Fallen",
"Prasha Sundaram",
"Yuexiang Zhai",
"Varun Godbole",
"Petros Maniatis",
"Alek Wang",
"Ilia Shumailov",
"Santhosh Thangaraj",
"Remi Crocker",
"Nikita Gupta",
"Gang Wu",
"Phil Chen",
"Gellért Weisz",
"Celine Smith",
"Mojtaba Seyedhosseini",
"Boya Fang",
"Xiyang Luo",
"Roey Yogev",
"Zeynep Cankara",
"Andrew Hard",
"Helen Ran",
"Rahul Sukthankar",
"George Necula",
"Gaël Liu",
"Honglong Cai",
"Praseem Banzal",
"Daniel Keysers",
"Sanjay Ghemawat",
"Connie Tao",
"Emma Dunleavy",
"Aditi Chaudhary",
"Wei Li",
"Maciej Mikuła",
"Chen-Yu Lee",
"Tiziana Refice",
"Krishna Somandepalli",
"Alexandre Fréchette",
"Dan Bahir",
"John Karro",
"Keith Rush",
"Sarah Perrin",
"Bill Rosgen",
"Xiaomeng Yang",
"Clara Huiyi Hu",
"Mahmoud Alnahlawi",
"Justin Mao-Jones",
"Roopal Garg",
"Hoang Nguyen",
"Bat-Orgil Batsaikhan",
"Iñaki Iturrate",
"Anselm Levskaya",
"Avi Singh",
"Ashyana Kachra",
"Tony Lu",
"Denis Petek",
"Zheng Xu",
"Mark Graham",
"Lukas Zilka",
"Yael Karov",
"Marija Kostelac",
"Fangyu Liu",
"Yaohui Guo",
"Weiyue Wang",
"Bernd Bohnet",
"Emily Pitler",
"Tony Bruguier",
"Keisuke Kinoshita",
"Chrysovalantis Anastasiou",
"Nilpa Jha",
"Ting Liu",
"Jerome Connor",
"Phil Wallis",
"Philip Pham",
"Eric Bailey",
"Shixin Li",
"Heng-Tze Cheng",
"Sally Ma",
"Haiqiong Li",
"Akanksha Maurya",
"Kate Olszewska",
"Manfred Warmuth",
"Christy Koh",
"Dominik Paulus",
"Siddhartha Reddy Jonnalagadda",
"Enrique Piqueras",
"Ali Elqursh",
"Geoff Brown",
"Hadar Shemtov",
"Loren Maggiore",
"Fei Xia",
"Ryan Foley",
"Beka Westberg",
"George van den Driessche",
"Livio Baldini Soares",
"Arjun Kar",
"Michael Quinn",
"Siqi Zuo",
"Jialin Wu",
"Kyle Kastner",
"Anna Bortsova",
"Aijun Bai",
"Ales Mikhalap",
"Luowei Zhou",
"Jennifer Brennan",
"Vinay Ramasesh",
"Honglei Zhuang",
"John Maggs",
"Johan Schalkwyk",
"Yuntao Xu",
"Hui Huang",
"Andrew Howard",
"Sasha Brown",
"Linting Xue",
"Gloria Shen",
"Brian Albert",
"Neha Jha",
"Daniel Zheng",
"Varvara Krayvanova",
"Spurthi Amba Hombaiah",
"Olivier Lacombe",
"Gautam Vasudevan",
"Dan Graur",
"Tian Xie",
"Meet Gandhi",
"Bangju Wang",
"Dustin Zelle",
"Harman Singh",
"Dahun Kim",
"Sébastien Cevey",
"Victor Ungureanu",
"Natasha Noy",
"Fei Liu",
"Annie Xie",
"Fangxiaoyu Feng",
"Katerina Tsihlas",
"Daniel Formoso",
"Neera Vats",
"Quentin Wellens",
"Yinan Wang",
"Niket Kumar Bhumihar",
"Samrat Ghosh",
"Matt Hoffman",
"Tom Lieber",
"Oran Lang",
"Kush Bhatia",
"Tom Paine",
"Aroonalok Pyne",
"Ronny Votel",
"Madeleine Clare Elish",
"Benoit Schillings",
"Alex Panagopoulos",
"Haichuan Yang",
"Adam Raveret",
"Zohar Yahav",
"Shuang Liu",
"Dalia El Badawy",
"Nishant Agrawal",
"Mohammed Badawi",
"Mahdi Mirzazadeh",
"Carla Bromberg",
"Fan Ye",
"Chang Liu",
"Tatiana Sholokhova",
"George-Cristian Muraru",
"Gargi Balasubramaniam",
"Jonathan Malmaud",
"Alen Carin",
"Danilo Martins",
"Irina Jurenka",
"Pankil Botadra",
"Dave Lacey",
"Richa Singh",
"Mariano Schain",
"Dan Zheng",
"Isabelle Guyon",
"Victor Lavrenko",
"Seungji Lee",
"Xiang Zhou",
"Demis Hassabis",
"Jeshwanth Challagundla",
"Derek Cheng",
"Nikhil Mehta",
"Matthew Mauger",
"Michela Paganini",
"Pushkar Mishra",
"Kate Lee",
"Zhang Li",
"Lexi Baugher",
"Ondrej Skopek",
"Max Chang",
"Amir Zait",
"Gaurav Menghani",
"Lizzetth Bellot",
"Guangxing Han",
"Jean-Michel Sarr",
"Sharat Chikkerur",
"Himanshu Sahni",
"Rohan Anil",
"Arun Narayanan",
"Chandu Thekkath",
"Daniele Pighin",
"Hana Strejček",
"Marko Velic",
"Fred Bertsch",
"Manuel Tragut",
"Keran Rong",
"Alicia Parrish",
"Kai Bailey",
"Jiho Park",
"Isabela Albuquerque",
"Abhishek Bapna",
"Rajesh Venkataraman",
"Alec Kosik",
"Johannes Griesser",
"Zhiwei Deng",
"Alek Andreev",
"Qingyun Dou",
"Kevin Hui",
"Fanny Wei",
"Xiaobin Yu",
"Lei Shu",
"Avia Aharon",
"David Barker",
"Badih Ghazi",
"Sebastian Flennerhag",
"Chris Breaux",
"Yuchuan Liu",
"Matthew Bilotti",
"Josh Woodward",
"Uri Alon",
"Stephanie Winkler",
"Tzu-Kuo Huang",
"Kostas Andriopoulos",
"João Gabriel Oliveira",
"Penporn Koanantakool",
"Berkin Akin",
"Michael Wunder",
"Cicero Nogueira dos Santos",
"Mohammad Hossein Bateni",
"Lin Yang",
"Dan Horgan",
"Beer Changpinyo",
"Keyvan Amiri",
"Min Ma",
"Dayeong Lee",
"Lihao Liang",
"Anirudh Baddepudi",
"Tejasi Latkar",
"Raia Hadsell",
"Jun Xu",
"Hairong Mu",
"Michael Han",
"Aedan Pope",
"Snchit Grover",
"Frank Kim",
"Ankit Bhagatwala",
"Guan Sun",
"Yamini Bansal",
"Amir Globerson",
"Alireza Nazari",
"Samira Daruki",
"Hagen Soltau",
"Jane Labanowski",
"Laurent El Shafey",
"Matt Harvey",
"Yanif Ahmad",
"Elan Rosenfeld",
"William Kong",
"Etienne Pot",
"Yi-Xuan Tan",
"Aurora Wei",
"Victoria Langston",
"Marcel Prasetya",
"Petar Veličković",
"Richard Killam",
"Robin Strudel",
"Darren Ni",
"Zhenhai Zhu",
"Aaron Archer",
"Kavya Kopparapu",
"Lynn Nguyen",
"Emilio Parisotto",
"Hussain Masoom",
"Sravanti Addepalli",
"Jordan Grimstad",
"Hexiang Hu",
"Joss Moore",
"Avinatan Hassidim",
"Le Hou",
"Mukund Raghavachari",
"Jared Lichtarge",
"Adam R. Brown",
"Hilal Dib",
"Natalia Ponomareva",
"Justin Fu",
"Yujing Zhang",
"Altaf Rahman",
"Joana Iljazi",
"Edouard Leurent",
"Gabriel Dulac-Arnold",
"Cosmo Du",
"Chulayuth Asawaroengchai",
"Larry Jin",
"Ela Gruzewska",
"Ziwei Ji",
"Benigno Uria",
"Daniel De Freitas",
"Paul Barham",
"Lauren Beltrone",
"Víctor Campos",
"Jun Yan",
"Neel Kovelamudi",
"Arthur Nguyen",
"Elinor Davies",
"Zhichun Wu",
"Zoltan Egyed",
"Kristina Toutanova",
"Nithya Attaluri",
"Hongliang Fei",
"Peter Stys",
"Siddhartha Brahma",
"Martin Izzard",
"Siva Velusamy",
"Scott Lundberg",
"Vincent Zhuang",
"Kevin Sequeira",
"Adam Santoro",
"Ehsan Amid",
"Ophir Aharoni",
"Shuai Ye",
"Mukund Sundararajan",
"Lijun Yu",
"Yu-Cheng Ling",
"Stephen Spencer",
"Hugo Song",
"Josip Djolonga",
"Christo Kirov",
"Sonal Gupta",
"Alessandro Bissacco",
"Clemens Meyer",
"Mukul Bhutani",
"Andrew Dai",
"Weiyi Wang",
"SiQi Liu",
"Ashwin Sreevatsa",
"Qijun Tan",
"Maria Wang",
"Lucy Kim",
"Yicheng Wang",
"Alex Irpan",
"Yang Xiao",
"Stanislav Fort",
"Yifan He",
"Alex Gurney",
"Bryan Gale",
"Yue Ma",
"Monica Roy",
"Viorica Patraucean",
"Taylan Bilal",
"Golnaz Ghiasi",
"Anahita Hosseini",
"Melvin Johnson",
"Zhuowan Li",
"Yi Tay",
"Benjamin Beyret",
"Katie Millican",
"Josef Broder",
"Mayank Lunayach",
"Danny Swisher",
"Eugen Vušak",
"David Parkinson",
"MH Tessler",
"Adi Mayrav Gilady",
"Richard Song",
"Allan Dafoe",
"Yves Raimond",
"Masa Yamaguchi",
"Itay Karo",
"Elizabeth Nielsen",
"Kevin Kilgour",
"Mike Dusenberry",
"Rajiv Mathews",
"Jiho Choi",
"Siyuan Qiao",
"Harsh Mehta",
"Sahitya Potluri",
"Chris Knutsen",
"Jialu Liu",
"Tat Tan",
"Kuntal Sengupta",
"Keerthana Gopalakrishnan",
"Abodunrinwa Toki",
"Mencher Chiang",
"Mike Burrows",
"Grace Vesom",
"Zafarali Ahmed",
"Ilia Labzovsky",
"Siddharth Vashishtha",
"Preeti Singh",
"Ankur Sharma",
"Ada Ma",
"Jinyu Xie",
"Pranav Talluri",
"Hannah Forbes-Pollard",
"Aarush Selvan",
"Joel Wee",
"Loic Matthey",
"Tom Funkhouser",
"Parthasarathy Gopavarapu",
"Lev Proleev",
"Cheng Li",
"Matt Thomas",
"Kashyap Kolipaka",
"Zhipeng Jia",
"Ashwin Kakarla",
"Srinivas Sunkara",
"Joan Puigcerver",
"Suraj Satishkumar Sheth",
"Emily Graves",
"Chen Wang",
"Sadh MNM Khan",
"Kai Kang",
"Shyamal Buch",
"Fred Zhang",
"Omkar Savant",
"David Soergel",
"Kevin Lee",
"Linda Friso",
"Xuanyi Dong",
"Rahul Arya",
"Shreyas Chandrakaladharan",
"Connor Schenck",
"Greg Billock",
"Tejas Iyer",
"Anton Bakalov",
"Leslie Baker",
"Alex Ruiz",
"Angad Chandorkar",
"Trieu Trinh",
"Matt Miecnikowski",
"Yanqi Zhou",
"Yangsibo Huang",
"Jiazhong Nie",
"Ali Shah",
"Ashish Thapliyal",
"Sam Haves",
"Lun Wang",
"Uri Shaham",
"Patrick Morris-Suzuki",
"Soroush Radpour",
"Leonard Berrada",
"Thomas Strohmann",
"Chaochao Yan",
"Jingwei Shen",
"Sonam Goenka",
"Tris Warkentin",
"Petar Dević",
"Dan Belov",
"Albert Webson",
"Madhavi Yenugula",
"Puranjay Datta",
"Jerry Chang",
"Nimesh Ghelani",
"Aviral Kumar",
"Vincent Perot",
"Jessica Lo",
"Yang Song",
"Herman Schmit",
"Jianmin Chen",
"Vasilisa Bashlovkina",
"Xiaoyue Pan",
"Diana Mincu",
"Paul Roit",
"Isabel Edkins",
"Andy Davis",
"Yujia Li",
"Ben Horn",
"Xinjian Li",
"Pradeep Kumar S",
"Eric Doi",
"Wanzheng Zhu",
"Sri Gayatri Sundara Padmanabhan",
"Siddharth Verma",
"Jasmine Liu",
"Heng Chen",
"Mihajlo Velimirović",
"Malcolm Reynolds",
"Priyanka Agrawal",
"Nick Sukhanov",
"Abhinit Modi",
"Siddharth Goyal",
"John Palowitch",
"Nima Khajehnouri",
"Wing Lowe",
"David Klinghoffer",
"Sharon Silver",
"Vinh Tran",
"Candice Schumann",
"Francesco Piccinno",
"Xi Liu",
"Mario Lučić",
"Xiaochen Yang",
"Sandeep Kumar",
"Ajay Kannan",
"Ragha Kotikalapudi",
"Mudit Bansal",
"Fabian Fuchs",
"Mohammad Javad Hosseini",
"Abdelrahman Abdelhamed",
"Dawn Bloxwich",
"Tianhe Yu",
"Ruoxin Sang",
"Gregory Thornton",
"Karan Gill",
"Yuchi Liu",
"Virat Shejwalkar",
"Jason Lin",
"Zhipeng Yan",
"Kehang Han",
"Thomas Buschmann",
"Michael Pliskin",
"Zhi Xing",
"Susheel Tatineni",
"Junlin Zhang",
"Sissie Hsiao",
"Gavin Buttimore",
"Marcus Wu",
"Zefei Li",
"Geza Kovacs",
"Legg Yeung",
"Tao Huang",
"Aaron Cohen",
"Bethanie Brownfield",
"Averi Nowak",
"Mikel Rodriguez",
"Tianze Shi",
"Hado van Hasselt",
"Kevin Cen",
"Deepanway Ghoshal",
"Kushal Majmundar",
"Weiren Yu",
"Warren Chen",
"Danila Sinopalnikov",
"Hao Zhang",
"Vlado Galić",
"Di Lu",
"Zeyu Zheng",
"Maggie Song",
"Gary Wang",
"Gui Citovsky",
"Swapnil Gawde",
"Isaac Galatzer-Levy",
"David Silver",
"Ivana Balazevic",
"Dipanjan Das",
"Kingshuk Majumder",
"Yale Cong",
"Praneet Dutta",
"Dustin Tran",
"Hui Wan",
"Junwei Yuan",
"Daniel Eppens",
"Alanna Walton",
"Been Kim",
"Harry Ragan",
"James Cobon-Kerr",
"Lu Liu",
"Weijun Wang",
"Bryce Petrini",
"Jack Rae",
"Rakesh Shivanna",
"Yan Xiong",
"Chace Lee",
"Pauline Coquinot",
"Yiming Gu",
"Lisa Patel",
"Blake Hechtman",
"Aviel Boag",
"Orion Jankowski",
"Alex Wertheim",
"Alex Lee",
"Paul Covington",
"Hila Noga",
"Sam Sobell",
"Shanthal Vasanth",
"William Bono",
"Chirag Nagpal",
"Wei Fan",
"Xavier Garcia",
"Kedar Soparkar",
"Aybuke Turker",
"Nathan Howard",
"Sachit Menon",
"Yuankai Chen",
"Vikas Verma",
"Vladimir Pchelin",
"Harish Rajamani",
"Valentin Dalibard",
"Ana Ramalho",
"Yang Guo",
"Kartikeya Badola",
"Seojin Bang",
"Nathalie Rauschmayr",
"Julia Proskurnia",
"Sudeep Dasari",
"Xinyun Chen",
"Mikhail Sushkov",
"Anja Hauth",
"Pauline Sho",
"Abhinav Singh",
"Bilva Chandra",
"Allie Culp",
"Max Dylla",
"Olivier Bachem",
"James Besley",
"Heri Zhao",
"Timothy Lillicrap",
"Wei Wei",
"Wael Al Jishi",
"Ning Niu",
"Alban Rrustemi",
"Raphaël Lopez Kaufman",
"Ryan Poplin",
"Jewel Zhao",
"Minh Truong",
"Shikhar Bharadwaj",
"Ester Hlavnova",
"Eli Stickgold",
"Cordelia Schmid",
"Georgi Stephanov",
"Zhaoqi Leng",
"Frederick Liu",
"Léonard Hussenot",
"Shenil Dodhia",
"Juliana Vicente Franco",
"Lesley Katzen",
"Abhanshu Sharma",
"Sarah Cogan",
"Zuguang Yang",
"Aniket Ray",
"Sergi Caelles",
"Shen Yan",
"Ravin Kumar",
"Daniel Gillick",
"Renee Wong",
"Joshua Ainslie",
"Jonathan Hoech",
"Séb Arnold",
"Dan Abolafia",
"Anca Dragan",
"Ben Hora",
"Grace Hu",
"Alexey Guseynov",
"Yang Lu",
"Chas Leichner",
"Jinmeng Rao",
"Abhimanyu Goyal",
"Nagabhushan Baddi",
"Daniel Hernandez Diaz",
"Tim McConnell",
"Max Bain",
"Jake Abernethy",
"Qiqi Yan",
"Rylan Schaeffer",
"Paul Vicol",
"Will Thompson",
"Montse Gonzalez Arenas",
"Mathias Bellaiche",
"Pablo Barrio",
"Stefan Zinke",
"Riccardo Patana",
"Pulkit Mehta",
"JK Kearns",
"Avraham Ruderman",
"Scott Pollom",
"David D'Ambrosio",
"Cath Hope",
"Yang Yu",
"Andrea Gesmundo",
"Kuang-Huei Lee",
"Aviv Rosenberg",
"Yiqian Zhou",
"Yaoyiran Li",
"Drew Garmon",
"Yonghui Wu",
"Safeen Huda",
"Gil Fidel",
"Martin Baeuml",
"Jian Li",
"Phoebe Kirk",
"Rhys May",
"Tao Tu",
"Sara Mc Carthy",
"Toshiyuki Fukuzawa",
"Miranda Aperghis",
"Chih-Kuan Yeh",
"Toshihiro Yoshino",
"Bo Li",
"Austin Myers",
"Kaisheng Yao",
"Ben Limonchik",
"Changwan Ryu",
"Rohun Saxena",
"Alex Goldin",
"Ruizhe Zhao",
"Rocky Rhodes",
"Tao Zhu",
"Divya Tyam",
"Heidi Howard",
"Nathan Byrd",
"Hongxu Ma",
"Yan Wu",
"Ryan Mullins",
"Qingze Wang",
"Aida Amini",
"Sebastien Baur",
"Yiran Mao",
"Subhashini Venugopalan",
"Will Song",
"Wen Ding",
"Paul Collins",
"Sashank Reddi",
"Megan Shum",
"Andrei Rusu",
"Luisa Zintgraf",
"Kelvin Chan",
"Sheela Goenka",
"Mathieu Blondel",
"Michael Collins",
"Renke Pan",
"Marissa Giustina",
"Nikolai Chinaev",
"Christian Schuler",
"Ce Zheng",
"Jonas Valfridsson",
"Alyssa Loo",
"Alex Yakubovich",
"Jamie Smith",
"Tao Jiang",
"Rich Munoz",
"Gabriel Barcik",
"Rishabh Bansal",
"Mingyao Yang",
"Yilun Du",
"Pablo Duque",
"Mary Phuong",
"Alexandra Belias",
"Kunal Lad",
"Zeyu Liu",
"Tal Schuster",
"Karthik Duddu",
"Jieru Hu",
"Paige Kunkle",
"Matthew Watson",
"Jackson Tolins",
"Josh Smith",
"Denis Teplyashin",
"Garrett Bingham",
"Marvin Ritter",
"Marco Andreetto",
"Divya Pitta",
"Mohak Patel",
"Shashank Viswanadha",
"Trevor Strohman",
"Catalin Ionescu",
"Jincheng Luo",
"Yogesh Kalley",
"Jeremy Wiesner",
"Dan Deutsch",
"Derek Lockhart",
"Peter Choy",
"Rumen Dangovski",
"Chawin Sitawarin",
"Cat Graves",
"Tanya Lando",
"Joost van Amersfoort",
"Ndidi Elue",
"Zhouyuan Huo",
"Pooya Moradi",
"Jean Tarbouriech",
"Henryk Michalewski",
"Wenting Ye",
"Eunyoung Kim",
"Alex Druinsky",
"Florent Altché",
"Xinyi Chen",
"Artur Dwornik",
"Da-Cheng Juan",
"Rivka Moroshko",
"Horia Toma",
"Jarrod Kahn",
"Hai Qian",
"Maximilian Sieb",
"Irene Cai",
"Roman Goldenberg",
"Praneeth Netrapalli",
"Sindhu Raghuram",
"Yuan Gong",
"Lijie Fan",
"Evan Palmer",
"Yossi Matias",
"Valentin Gabeur",
"Shreya Pathak",
"Tom Ouyang",
"Don Metzler",
"Geoff Bacon",
"Srinivasan Venkatachary",
"Sridhar Thiagarajan",
"Alex Cullum",
"Eran Ofek",
"Vytenis Sakenas",
"Mohamed Hammad",
"Cesar Magalhaes",
"Mayank Daswani",
"Oscar Chang",
"Ashok Popat",
"Ruichao Li",
"Komal Jalan",
"Yanhan Hou",
"Josh Lipschultz",
"Antoine He",
"Wenhao Jia",
"Pier Giuseppe Sessa",
"Prateek Kolhar",
"William Wong",
"Sumeet Singh",
"Lukas Haas",
"Jay Whang",
"Hanna Klimczak-Plucińska",
"Georges Rotival",
"Grace Chung",
"Yiqing Hua",
"Anfal Siddiqui",
"Nicolas Serrano",
"Dongkai Chen",
"Billy Porter",
"Libin Bai",
"Keshav Shivam",
"Sho Arora",
"Partha Talukdar",
"Tom Cobley",
"Sangnie Bhardwaj",
"Evgeny Gladchenko",
"Simon Green",
"Kelvin Guu",
"Felix Fischer",
"Xiao Wu",
"Eric Wang",
"Achintya Singhal",
"Tatiana Matejovicova",
"James Martens",
"Hongji Li",
"Roma Patel",
"Elizabeth Kemp",
"Jiaqi Pan",
"Lily Wang",
"Blake JianHang Chen",
"Jean-Baptiste Alayrac",
"Navneet Potti",
"Erika Gemzer",
"Eugene Ie",
"Kay McKinney",
"Takaaki Saeki",
"Edward Chou",
"Pascal Lamblin",
"SQ Mah",
"Zach Fisher",
"Martin Chadwick",
"Jon Stritar",
"Obaid Sarvana",
"Andrew Hogue",
"Artem Shtefan",
"Hadi Hashemi",
"Yang Xu",
"Jindong Gu",
"Sharad Vikram",
"Chung-Ching Chang",
"Sabela Ramos",
"Logan Kilpatrick",
"Weijuan Xi",
"Jenny Brennan",
"Yinghao Sun",
"Abhishek Jindal",
"Ionel Gog",
"Dawn Chen",
"Felix Wu",
"Jason Lee",
"Sudhindra Kopalle",
"Srinadh Bhojanapalli",
"Oriol Vinyals",
"Natan Potikha",
"Burcu Karagol Ayan",
"Yuan Yuan",
"Michael Riley",
"Piotr Stanczyk",
"Sergey Kishchenko",
"Bing Wang",
"Dan Garrette",
"Antoine Yang",
"Vlad Feinberg",
"CJ Carey",
"Javad Azizi",
"Viral Shah",
"Erica Moreira",
"Chongyang Shi",
"Josh Feldman",
"Elizabeth Salesky",
"Thomas Lampe",
"Aneesh Pappu",
"Duhyeon Kim",
"Jonas Adler",
"Avi Caciularu",
"Brian Walker",
"Yunhan Xu",
"Yochai Blau",
"Dylan Scandinaro",
"Terry Huang",
"Sam El-Husseini",
"Abhishek Sinha",
"Lijie Ren",
"Taylor Tobin",
"Patrik Sundberg",
"Tim Sohn",
"Vikas Yadav",
"Mimi Ly",
"Emily Xue",
"Jing Xiong",
"Afzal Shama Soudagar",
"Sneha Mondal",
"Nikhil Khadke",
"Qingchun Ren",
"Ben Vargas",
"Stan Bileschi",
"Sarah Chakera",
"Cindy Wang",
"Boyu Wang",
"Yoni Halpern",
"Joe Jiang",
"Vikas Sindhwani",
"Petre Petrov",
"Pranavaraj Ponnuramu",
"Sanket Vaibhav Mehta",
"Yu Watanabe",
"Betty Chan",
"Matheus Wisniewski",
"Trang Pham",
"Jingwei Zhang",
"Conglong Li",
"Dario de Cesare",
"Art Khurshudov",
"Alex Vasiloff",
"Melissa Tan",
"Zoe Ashwood",
"Bobak Shahriari",
"Maryam Majzoubi",
"Garrett Tanzer",
"Olga Kozlova",
"Robin Alazard",
"James Lee-Thorp",
"Nguyet Minh Phu",
"Isaac Tian",
"Junwhan Ahn",
"Andy Crawford",
"Lauren Lax",
"Yuan Shangguan",
"Iftekhar Naim",
"David Ross",
"Oleksandr Ferludin",
"Tongfei Guo",
"Andrea Banino",
"Hubert Soyer",
"Xiaoen Ju",
"Dominika Rogozińska",
"Ishaan Malhi",
"Marcella Valentine",
"Daniel Balle",
"Apoorv Kulshreshtha",
"Maciej Kula",
"Yiwen Song",
"Sophia Austin",
"John Schultz",
"Roy Hirsch",
"Arthur Douillard",
"Apoorv Reddy",
"Michael Fink",
"Summer Yue",
"Khyatti Gupta",
"Adam Zhang",
"Norman Rink",
"Daniel McDuff",
"Lei Meng",
"András György",
"Yasaman Razeghi",
"Ricky Liang",
"Kazuki Osawa",
"Aviel Atias",
"Matan Eyal",
"Tyrone Hill",
"Nikolai Grigorev",
"Zhengdong Wang",
"Nitish Kulkarni",
"Rachel Soh",
"Ivan Lobov",
"Zachary Charles",
"Sid Lall",
"Kazuma Hashimoto",
"Ido Kessler",
"Victor Gomes",
"Zelda Mariet",
"Danny Driess",
"Alessandro Agostini",
"Canfer Akbulut",
"Jingcao Hu",
"Marissa Ikonomidis",
"Emily Caveness",
"Kartik Audhkhasi",
"Saurabh Agrawal",
"Ioana Bica",
"Evan Senter",
"Jayaram Mudigonda",
"Kelly Chen",
"Jingchen Ye",
"Xuanhui Wang",
"James Svensson",
"Philipp Fränken",
"Josh Newlan",
"Li Lao",
"Eva Schnider",
"Sami Alabed",
"Joseph Kready",
"Jesse Emond",
"Afief Halumi",
"Tim Zaman",
"Chengxi Ye",
"Naina Raisinghani",
"Vilobh Meshram",
"Bo Chang",
"Ankit Singh Rawat",
"Axel Stjerngren",
"Sergey Levi",
"Rui Wang",
"Xiangzhu Long",
"Mitchelle Rasquinha",
"Steven Hand",
"Aditi Mavalankar",
"Lauren Agubuzu",
"Sudeshna Roy",
"Junquan Chen",
"Jarek Wilkiewicz",
"Hao Zhou",
"Michal Jastrzebski",
"Qiong Hu",
"Agustin Dal Lago",
"Ramya Sree Boppana",
"Wei-Jen Ko",
"Jennifer Prendki",
"Yao Su",
"Zhi Li",
"Eliza Rutherford",
"Girish Ramchandra Rao",
"Ramona Comanescu",
"Adrià Puigdomènech",
"Qihang Chen",
"Dessie Petrova",
"Christine Chan",
"Vedrana Milutinovic",
"Felipe Tiengo Ferreira",
"Chin-Yi Cheng",
"Ming Zhang",
"Tapomay Dey",
"Sherry Yang",
"Ramesh Sampath",
"Quoc Le",
"Howard Zhou",
"Chu-Cheng Lin",
"Hoi Lam",
"Christine Kaeser-Chen",
"Kai Hui",
"Dean Hirsch",
"Tom Eccles",
"Basil Mustafa",
"Shruti Rijhwani",
"Morgane Rivière",
"Yuanzhong Xu",
"Junjie Wang",
"Xinyang Geng",
"Xiance Si",
"Arjun Khare",
"Cheolmin Kim",
"Vahab Mirrokni",
"Kamyu Lee",
"Khuslen Baatarsukh",
"Nathaniel Braun",
"Lisa Wang",
"Pallavi LV",
"Richard Tanburn",
"Yuvein Zhu",
"Fangda Li",
"Setareh Ariafar",
"Dan Goldberg",
"Ken Burke",
"Daniil Mirylenka",
"Meiqi Guo",
"Olaf Ronneberger",
"Hadas Natalie Vogel",
"Liqun Cheng",
"Nishita Shetty",
"Johnson Jia",
"Thomas Jimma",
"Corey Fry",
"Ted Xiao",
"Martin Sundermeyer",
"Ryan Burnell",
"Yannis Assael",
"Mario Pinto",
"JD Chen",
"Rohit Sathyanarayana",
"Donghyun Cho",
"Jing Lu",
"Rishabh Agarwal",
"Sugato Basu",
"Lucas Gonzalez",
"Dhruv Shah",
"Meng Wei",
"Dre Mahaarachchi",
"Rohan Agrawal",
"Tero Rissa",
"Yani Donchev",
"Ramiro Leal-Cavazos",
"Adrian Hutter",
"Markus Mircea",
"Alon Jacovi",
"Faruk Ahmed",
"Jiageng Zhang",
"Shuguang Hu",
"Bo-Juen Chen",
"Jonni Kanerva",
"Guillaume Desjardins",
"Andrew Lee",
"Nikos Parotsidis",
"Asier Mujika",
"Tobias Weyand",
"Jasper Snoek",
"Jo Chick",
"Kai Chen",
"Paul Chang",
"Ethan Mahintorabi",
"Zi Wang",
"Tolly Powell",
"Orgad Keller",
"Abhirut Gupta",
"Claire Sha",
"Kanav Garg",
"Nicolas Heess",
"Ágoston Weisz",
"Cassidy Hardin",
"Bartek Wydrowski",
"Ben Coleman",
"Karina Zainullina",
"Pankaj Joshi",
"Alessandro Epasto",
"Terry Spitz",
"Binbin Xiong",
"Kai Zhao",
"Arseniy Klimovskiy",
"Ivy Zheng",
"Johan Ferret",
"Itay Yona",
"Waleed Khawaja",
"Jean-Baptiste Lespiau",
"Maxim Krikun",
"Siamak Shakeri",
"Timothee Cour",
"Bonnie Li",
"Igor Krivokon",
"Dan Suh",
"Alex Hofer",
"Jad Al Abdallah",
"Nikita Putikhin",
"Oscar Akerlund",
"Silvio Lattanzi",
"Anurag Kumar",
"Shane Settle",
"Himanshu Srivastava",
"Folawiyo Campbell-Ajala",
"Edouard Rosseel",
"Mihai Dorin Istin",
"Nishanth Dikkala",
"Anand Rao",
"Nick Young",
"Kate Lin",
"Dhruva Bhaswar",
"Yiming Wang",
"Jaume Sanchez Elias",
"Kritika Muralidharan",
"James Keeling",
"Dayou Du",
"Siddharth Gopal",
"Gregory Dibb",
"Charles Blundell",
"Manolis Delakis",
"Jacky Liang",
"Marco Tulio Ribeiro",
"Georgi Karadzhov",
"Guillermo Garrido",
"Ankur Bapna",
"Jiawei Cao",
"Adam Sadovsky",
"Pouya Tafti",
"Arthur Guez",
"Coline Devin",
"Yixian Di",
"Jinwei Xing",
"Chuqiao Xu",
"Hanzhao Lin",
"Chun-Te Chu",
"Sameera Ponda",
"Wesley Helmholz",
"Fan Yang",
"Yue Gao",
"Sara Javanmardi",
"Wael Farhan",
"Alex Ramirez",
"Ricardo Figueira",
"Khe Chai Sim",
"Yuval Bahat",
"Ashwin Vaswani",
"Liangzhe Yuan",
"Gufeng Zhang",
"Leland Rechis",
"Hanjun Dai",
"Tayo Oguntebi",
"Alexandra Cordell",
"Eugénie Rives",
"Kaan Tekelioglu",
"Naveen Kumar",
"Bing Zhang",
"Aurick Zhou",
"Nikolay Savinov",
"Andrew Leach",
"Alex Tudor",
"Sanjay Ganapathy",
"Yanyan Zheng",
"Mirko Rossini",
"Vera Axelrod",
"Arnaud Autef",
"Yukun Zhu",
"Zheng Zheng",
"Mingda Zhang",
"Baochen Sun",
"Jie Ren",
"Nenad Tomasev",
"Nithish Kannen",
"Amer Sinha",
"Charles Chen",
"Louis O'Bryan",
"Alex Pak",
"Aditya Kusupati",
"Weel Yang",
"Deepak Ramachandran",
"Patrick Griffin",
"Seokhwan Kim",
"Philipp Neubeck",
"Craig Schiff",
"Tammo Spalink",
"Mingyang Ling",
"Arun Nair",
"Ga-Young Joung",
"Linda Deng",
"Avishkar Bhoopchand",
"Lora Aroyo",
"Tom Duerig",
"Jordan Griffith",
"Gabe Barth-Maron",
"Jake Ades",
"Alex Haig",
"Ankur Taly",
"Yunting Song",
"Paul Michel",
"Dave Orr",
"Dean Weesner",
"Corentin Tallec",
"Carrie Grimes Bostock",
"Paul Niemczyk",
"Andy Twigg",
"Mudit Verma",
"Rohith Vallu",
"Henry Wang",
"Marco Gelmi",
"Kiranbir Sodhia",
"Aleksandr Chuklin",
"Omer Goldman",
"Jasmine George",
"Liang Bai",
"Kelvin Zhang",
"Petar Sirkovic",
"Efrat Nehoran",
"Golan Pundak",
"Jiaqi Mu",
"Alice Chen",
"Alex Greve",
"Paulo Zacchello",
"David Amos",
"Heming Ge",
"Eric Noland",
"Colton Bishop",
"Jeffrey Dudek",
"Youhei Namiki",
"Elena Buchatskaya",
"Jing Li",
"Dorsa Sadigh",
"Masha Samsikova",
"Dan Malkin",
"Damien Vincent",
"Robert David",
"Rob Willoughby",
"Phoenix Meadowlark",
"Shawn Gao",
"Yan Li",
"Raj Apte",
"Amit Jhindal",
"Stein Xudong Lin",
"Alex Polozov",
"Zhicheng Wang",
"Tomas Mery",
"Anirudh GP",
"Varun Yerram",
"Sage Stevens",
"Tianqi Liu",
"Noah Fiedel",
"Charles Sutton",
"Matthew Johnson",
"Xiaodan Song",
"Kate Baumli",
"Nir Shabat",
"Muqthar Mohammad",
"Hao Liu",
"Marco Selvi",
"Yichao Zhou",
"Mehdi Hafezi Manshadi",
"Chu-ling Ko",
"Anthony Chen",
"Michael Bendersky",
"Jorge Gonzalez Mendez",
"Nisarg Kothari",
"Amir Zandieh",
"Yiling Huang",
"Daniel Andor",
"Ellie Pavlick",
"Idan Brusilovsky",
"Jitendra Harlalka",
"Sally Goldman",
"Andrew Lampinen",
"Guowang Li",
"Asahi Ushio",
"Somit Gupta",
"Lei Zhang",
"Chuyuan Kelly Fu",
"Madhavi Sewak",
"Timo Denk",
"Jed Borovik",
"Brendan Jou",
"Avital Zipori",
"Prateek Jain",
"Junwen Bai",
"Thang Luong",
"Jonathan Tompson",
"Alice Li",
"Li Liu",
"George Powell",
"Jiajun Shen",
"Alex Feng",
"Grishma Chole",
"Da Yu",
"Yinlam Chow",
"Tongxin Yin",
"Eric Malmi",
"Kefan Xiao",
"Yash Pande",
"Shachi Paul",
"Niccolò Dal Santo",
"Adil Dostmohamed",
"Sergio Guadarrama",
"Aaron Phillips",
"Thanumalayan Sankaranarayana Pillai",
"Gal Yona",
"Amin Ghafouri",
"Preethi Lahoti",
"Benjamin Lee",
"Dhruv Madeka",
"Eren Sezener",
"Simon Tokumine",
"Adrian Collister",
"Nicola De Cao",
"Richard Shin",
"Uday Kalra",
"Parker Beak",
"Emily Nottage",
"Ryo Nakashima",
"Ivan Jurin",
"Vikash Sehwag",
"Meenu Gaba",
"Junhao Zeng",
"Kevin R. McKee",
"Fernando Pereira",
"Tamar Yakar",
"Amayika Panda",
"Arka Dhar",
"Peilin Zhong",
"Daniel Sohn",
"Mark Brand",
"Lars Lowe Sjoesund",
"Viral Carpenter",
"Sharon Lin",
"Shantanu Thakoor",
"Marcus Wainwright",
"Ashwin Chaugule",
"Pranesh Srinivasan",
"Muye Zhu",
"Bernett Orlando",
"Jack Weber",
"Ayzaan Wahid",
"Gilles Baechler",
"Apurv Suman",
"Jovana Mitrović",
"Gabe Taubman",
"Honglin Yu",
"Helen King",
"Josh Dillon",
"Cathy Yip",
"Dhriti Varma",
"Tomas Izo",
"Levent Bolelli",
"Borja de Balle Pigem",
"Julia Di Trapani",
"Fotis Iliopoulos",
"Adam Paszke",
"Nishant Ranka",
"Joe Zou",
"Francesco Pongetti",
"Jed McGiffin",
"Alex Siegman",
"Rich Galt",
"Ross Hemsley",
"Goran Žužić",
"Victor Carbune",
"Tao Li",
"Myle Ott",
"Félix de Chaumont Quitry",
"David Vilar Torres",
"Yuri Chervonyi",
"Tomy Tsai",
"Prem Eruvbetine",
"Samuel Yang",
"Matthew Denton",
"Jake Walker",
"Slavica Andačić",
"Idan Heimlich Shtacher",
"Vittal Premachandran",
"Harshal Tushar Lehri",
"Cip Baetu",
"Damion Yates",
"Lampros Lamprou",
"Mariko Iinuma",
"Ioana Mihailescu",
"Ben Albrecht",
"Shachi Dave",
"Susie Sargsyan",
"Bryan Perozzi",
"Lucas Manning",
"Chiyuan Zhang",
"Denis Vnukov",
"Igor Mordatch",
"Raia Hadsell",
"Wolfgang Macherey",
"Ryan Kappedal",
"Jim Stephan",
"Aditya Tripathi",
"Klaus Macherey",
"Jun Qian",
"Abhishek Bhowmick",
"Shekoofeh Azizi",
"Rémi Leblond",
"Shiva Mohan Reddy Garlapati",
"Timothy Knight",
"Matthew Wiethoff",
"Wei-Chih Hung",
"Anelia Angelova",
"Georgios Evangelopoulos",
"Pawel Janus",
"Dimitris Paparas",
"Matthew Rahtz",
"Ken Caluwaerts",
"Vivek Sampathkumar",
"Daniel Jarrett",
"Shadi Noghabi",
"Antoine Miech",
"Chak Yeung",
"Geoff Clark",
"Henry Prior",
"Fei Zheng",
"Jean Pouget-Abadie",
"Indro Bhattacharya",
"Kalpesh Krishna",
"Will Bishop",
"Zhe Yuan",
"Yunxiao Deng",
"Ashutosh Sathe",
"Kacper Krasowiak",
"Ciprian Chelba",
"Cho-Jui Hsieh",
"Kiran Vodrahalli",
"Buhuang Liu",
"Thomas Köppe",
"Amr Khalifa",
"Lubo Litchev",
"Pichi Charoenpanit",
"Reed Roberts",
"Sachin Yadav",
"Yasumasa Onoe",
"Desi Ivanov",
"Megha Mohabey",
"Vighnesh Birodkar",
"Nemanja Rakićević",
"Pierre Sermanet",
"Vaibhav Mehta",
"Krishan Subudhi",
"Travis Choma",
"Will Ng",
"Luheng He",
"Kathie Wang",
"Tasos Kementsietsidis",
"Shane Gu",
"Mansi Gupta",
"Andrew Nystrom",
"Mehran Kazemi",
"Timothy Chung",
"Nacho Cano",
"Nikhil Dhawan",
"YuFei Wang",
"Jiawei Xia",
"Trevor Yacovone",
"Eric Jia",
"Mingqing Chen",
"Simeon Ivanov",
"Ashrith Sheshan",
"Sid Dalmia",
"Paweł Stradomski",
"Pengcheng Yin",
"Salem Haykal",
"Congchao Wang",
"Dennis Duan",
"Neslihan Bulut",
"Greg Kochanski",
"Liam MacDermed",
"Namrata Godbole",
"Shitao Weng",
"Jingjing Chen",
"Rachana Fellinger",
"Ramin Mehran",
"Daniel Suo",
"Hisham Husain",
"Tong He",
"Kaushal Patel",
"Joshua Howland",
"Randall Parker",
"Kelvin Nguyen",
"Sharath Maddineni",
"Chris Rawles",
"Mina Khan",
"Shlomi Cohen-Ganor",
"Amol Mandhane",
"Xinyi Wu",
"Chenkai Kuang",
"Iulia Comşa",
"Ramya Ganeshan",
"Hanie Sedghi",
"Adam Bloniarz",
"Nuo Wang Pierse",
"Anton Briukhov",
"Petr Mitrichev",
"Anita Gergely",
"Serena Zhan",
"Allan Zhou",
"Nikita Saxena",
"Eva Lu",
"Josef Dean",
"Ashish Gupta",
"Nicolas Perez-Nieves",
"Renjie Wu",
"Cory McLean",
"Wei Liang",
"Disha Jindal",
"Anton Tsitsulin",
"Wenhao Yu",
"Kaiz Alarakyia",
"Tom Schaul",
"Piyush Patil",
"Peter Sung",
"Elijah Peake",
"Hongkun Yu",
"Feryal Behbahani",
"JD Co-Reyes",
"Alan Ansell",
"Sean Sun",
"Clara Barbu",
"Jonathan Lee",
"Seb Noury",
"James Allingham",
"Bilal Piot",
"Mohit Sharma",
"Christopher Yew",
"Ivan Korotkov",
"Bibo Xu",
"Demetra Brady",
"Goran Petrovic",
"Shibl Mourad",
"Claire Cui",
"Aditya Gupta",
"Parker Schuh",
"Saarthak Khanna",
"Anna Goldie",
"Abhinav Arora",
"Vadim Zubov",
"Amy Stuart",
"Mark Epstein",
"Yun Zhu",
"Jianqiao Liu",
"Yury Stuken",
"Ziyue Wang",
"Karolis Misiunas",
"Dee Guo",
"Ashleah Gill",
"Ale Hartman",
"Zaid Nabulsi",
"Aurko Roy",
"Aleksandra Faust",
"Jason Riesa",
"Ben Withbroe",
"Mengchao Wang",
"Marco Tagliasacchi",
"Andreea Marzoca",
"James Noraky",
"Serge Toropov",
"Malika Mehrotra",
"Bahram Raad",
"Sanja Deur",
"Steve Xu",
"Marianne Monteiro",
"Zhongru Wu",
"Yi Luan",
"Sam Ritter",
"Nick Li",
"Håvard Garnes",
"Yanzhang He",
"Martin Zlocha",
"Jifan Zhu",
"Matteo Hessel",
"Will Wu",
"Spandana Raj Babbula",
"Chizu Kawamoto",
"Yuanzhen Li",
"Mehadi Hassen",
"Yan Wang",
"Brian Wieder",
"James Freedman",
"Yin Zhang",
"Xinyi Bai",
"Tianli Yu",
"David Reitter",
"XiangHai Sheng",
"Mateo Wirth",
"Aditya Kini",
"Dima Damen",
"Mingcen Gao",
"Rachel Hornung",
"Michael Voznesensky",
"Brian Roark",
"Adhi Kuncoro",
"Yuxiang Zhou",
"Rushin Shah",
"Anthony Brohan",
"Kuangyuan Chen",
"James Wendt",
"David Rim",
"Paul Kishan Rubenstein",
"Jonathan Halcrow",
"Michelle Liu",
"Ty Geri",
"YunHsuan Sung",
"Jane Shapiro",
"Shaan Bijwadia",
"Chris Duvarney",
"Christina Sorokin",
"Paul Natsev",
"Reeve Ingle",
"Pramod Gupta",
"Young Maeng",
"Ndaba Ndebele",
"Kexin Zhu",
"Valentin Anklin",
"Katherine Lee",
"YuAn Liu",
"Yaroslav Akulov",
"Shaleen Gupta",
"Guolong Su",
"Flavien Prost",
"Tianlin Liu",
"Vitaly Kovalev",
"Pol Moreno",
"Martin Scholz",
"Sam Redmond",
"Zongwei Zhou",
"Alex Castro-Ros",
"André Susano Pinto",
"Dia Kharrat",
"Michal Yarom",
"Rachel Saputro",
"Jannis Bulian",
"Ben Caine",
"Ji Liu",
"Abbas Abdolmaleki",
"Shariq Iqbal",
"Tautvydas Misiunas",
"Mikhail Sirotenko",
"Shefali Garg",
"Guy Bensky",
"Huan Gui",
"Xuezhi Wang",
"Raphael Koster",
"Mike Bernico",
"Da Huang",
"Romal Thoppilan",
"Trevor Cohn",
"Ben Golan",
"Wenlei Zhou",
"Andrew Rosenberg",
"Markus Freitag",
"Tynan Gangwani",
"Vincent Tsang",
"Anand Shukla",
"Xiaoqi Ren",
"Minh Giang",
"Chi Zou",
"Andre Elisseeff",
"Charline Le Lan",
"Dheeru Dua",
"Shuba Lall",
"Pranav Shyam",
"Frankie Garcia",
"Sarah Nguyen",
"Michael Guzman",
"AJ Maschinot",
"Marcello Maggioni",
"Ming-Wei Chang",
"Karol Gregor",
"Lotte Weerts",
"Kumaran Venkatesan",
"Bogdan Damoc",
"Leon Liu",
"Jan Wassenberg",
"Lewis Ho",
"Becca Roelofs",
"Majid Hadian",
"François-Xavier Aubet",
"Yu Liang",
"Sami Lachgar",
"Danny Karmon",
"Yong Cheng",
"Amelio Vázquez-Reina",
"Angie Chen",
"Zhuyun Dai",
"Andy Brock",
"Shubham Agrawal",
"Chenxi Pang",
"Peter Garst",
"Mariella Sanchez-Vargas",
"Ivor Rendulic",
"Aditya Ayyar",
"Andrija Ražnatović",
"Olivia Ma",
"Roopali Vij",
"Neha Sharma",
"Ashwin Balakrishna",
"Bingyuan Liu",
"Ian Mackinnon",
"Sorin Baltateanu",
"Petra Poklukar",
"Gabriel Ibagon",
"Colin Ji",
"Hongyang Jiao",
"Isaac Noble",
"Wojciech Stokowiec",
"Zhihao LI",
"Jeff Dean",
"David Lindner",
"Mark Omernick",
"Kristen Chiafullo",
"Mason Dimarco",
"Vitor Rodrigues",
"Vittorio Selo",
"Garrett Honke",
"Xintian Wu",
"Wei He",
"Adam Hillier",
"Anhad Mohananey",
"Vihari Piratla",
"Chang Ye",
"Chase Malik",
"Sebastian Riedel",
"Samuel Albanie",
"Zi Yang",
"Kenny Vassigh",
"Maria Bauza",
"Sheng Li",
"Yiqing Tao",
"Nevan Wichers",
"Andrii Maksai",
"Abe Ittycheriah",
"Ross Mcilroy",
"Bryan Seybold",
"Noah Goodman",
"Romina Datta",
"Steven M. Hernandez",
"Tian Shi",
"Yony Kochinski",
"Anna Bulanova",
"Ken Franko",
"Mikita Sazanovich",
"Nicholas FitzGerald",
"Praneeth Kacham",
"Shubha Srinivas Raghvendra",
"Vincent Hellendoorn",
"Alexander Grushetsky",
"Julian Salazar",
"Angeliki Lazaridou",
"Jason Chang",
"Jan-Thorsten Peter",
"Sushant Kafle",
"Yann Dauphin",
"Abhishek Rao",
"Filippo Graziano",
"Izhak Shafran",
"Yuguo Liao",
"Tianli Ding",
"Geng Yan",
"Grace Chu",
"Zhao Fu",
"Vincent Roulet",
"Gabriel Rasskin",
"Duncan Williams",
"Shahar Drath",
"Alex Mossin",
"Raphael Hoffmann",
"Jordi Orbay",
"Francesco Bertolini",
"Hila Sheftel",
"Justin Chiu",
"Siyang Xue",
"Yuheng Kuang",
"Ferjad Naeem",
"Swaroop Nath",
"Nana Nti",
"Phil Culliton",
"Kashyap Krishnakumar",
"Michael Isard",
"Pei Sun",
"Ayan Chakrabarti",
"Nathan Clement",
"Regev Cohen",
"Arissa Wongpanich",
"GS Oh",
"Ashwin Murthy",
"Hao Zheng",
"Jessica Hamrick",
"Oskar Bunyan",
"Suhas Ganesh",
"Nitish Gupta",
"Roy Frostig",
"John Wieting",
"Yury Malkov",
"Pierre Marcenac",
"Zhixin Lai",
"Xiaodan Tang",
"Mohammad Saleh",
"Fedir Zubach",
"Chinmay Kulkarni",
"Huanjie Zhou",
"Vicky Zayats",
"Nan Ding",
"Anshuman Tripathi",
"Arijit Pramanik",
"Patrik Zochbauer",
"Harish Ganapathy",
"Vedant Misra",
"Zach Behrman",
"Hugo Vallet",
"Mingyang Zhang",
"Mukund Sridhar",
"Ye Jin",
"Mohammad Babaeizadeh",
"Siim Põder",
"Megha Goel",
"Divya Jain",
"Tajwar Nasir",
"Shubham Mittal",
"Tim Dozat",
"Diego Ardila",
"Aliaksei Severyn",
"Fabio Pardo",
"Sammy Jerome",
"Siyang Qin",
"Louis Rouillard",
"Amir Yazdanbakhsh",
"Zizhao Zhang",
"Shivani Agrawal",
"Kaushik Shivakumar",
"Caden Lu",
"Praveen Kallakuri",
"Rachita Chhaparia",
"Kanishka Rao",
"Charles Kwong",
"Asya Fadeeva",
"Shitij Nigam",
"Yan Virin",
"Yuan Zhang",
"Balaji Venkatraman",
"Beliz Gunel",
"Marc Wilson",
"Huiyu Wang",
"Abhinav Gupta",
"Xiaowei Xu",
"Adrien Ali Taïga",
"Kareem Mohamed",
"Doug Fritz",
"Daniel Rodriguez",
"Zoubin Ghahramani",
"Harry Askham",
"Lior Belenki",
"James Zhao",
"Rahul Gupta",
"Krzysztof Jastrzębski",
"Takahiro Kosakai",
"Kaan Katircioglu",
"Jon Schneider",
"Rina Panigrahy",
"Konstantinos Bousmalis",
"Peter Grabowski",
"Prajit Ramachandran",
"Chaitra Hegde",
"Mihaela Rosca",
"Angelo Scorza Scarpati",
"Kyriakos Axiotis",
"Ying Xu",
"Zach Gleicher",
"Assaf Hurwitz Michaely",
"Mandar Sharma",
"Sanil Jain",
"Christoph Hirnschall",
"Tal Marian",
"Xuhui Jia",
"Kevin Mather",
"Kilol Gupta",
"Linhai Qiu",
"Nigamaa Nayakanti",
"Lucian Ionita",
"Steven Zheng",
"Lucia Loher",
"Kurt Shuster",
"Igor Petrovski",
"Roshan Sharma",
"Rahma Chaabouni",
"Angel Yeh",
"James An",
"Arushi Gupta",
"Steven Schwarcz",
"Seher Ellis",
"Sam Conway-Rahman",
"Javier Snaider",
"Alex Zhai",
"James Atwood",
"Daniel Golovin",
"Liqian Peng",
"Te I",
"Vivian Xia",
"Salvatore Scellato",
"Mahan Malihi",
"Arthur Bražinskas",
"Vlad-Doru Ion",
"Younghoon Jun",
"James Swirhun",
"Soroosh Mariooryad",
"Jiao Sun",
"Steve Chien",
"Rey Coaguila",
"Ariel Brand",
"Yi Gao",
"Tom Kwiatkowski",
"Roee Aharoni",
"Cheng-Chun Lee",
"Mislav Žanić",
"Yichi Zhang",
"Dan Ethier",
"Vitaly Nikolaev",
"Pranav Nair",
"Yoav Ben Shalom",
"Hen Fitoussi",
"Jai Gupta",
"Hongbin Liu",
"Dee Cattle",
"Tolga Bolukbasi",
"Ben Murdoch",
"Fantine Huot",
"Yin Li",
"Chris Hahn",
"Urvashi Khandelwal",
"Frederik Benzing",
"Arthur Conmy",
"Andrey Simanovsky",
"Françoise Beaufays",
"Eugene Weinstein",
"Tongzhou Chen",
"Luke Leonhard",
"Bhuvana Ramabhadran"
] |
[] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-systematic-analysis-of-hybrid-linear
|
2507.06457
| null | null |
A Systematic Analysis of Hybrid Linear Attention
|
Transformers face quadratic complexity and memory issues with long sequences, prompting the adoption of linear attention mechanisms using fixed-size hidden states. However, linear models often suffer from limited recall performance, leading to hybrid architectures that combine linear and full attention layers. Despite extensive hybrid architecture research, the choice of linear attention component has not been deeply explored. We systematically evaluate various linear attention models across generations, from vector recurrences to advanced gating mechanisms, both standalone and hybridized. To enable this comprehensive analysis, we trained and open-sourced 72 models: 36 at 340M parameters (20B tokens) and 36 at 1.3B parameters (100B tokens), covering six linear attention variants across five hybridization ratios. Benchmarking on standard language modeling and recall tasks reveals that superior standalone linear models do not necessarily excel in hybrids. While language modeling remains stable across linear-to-full attention ratios, recall significantly improves with increased full attention layers, particularly below a 3:1 ratio. Our study highlights selective gating, hierarchical recurrence, and controlled forgetting as critical for effective hybrid models. We recommend architectures such as HGRN-2 or GatedDeltaNet with a linear-to-full ratio between 3:1 and 6:1 to achieve Transformer-level recall efficiently. Our models are open-sourced at https://huggingface.co/collections/m-a-p/hybrid-linear-attention-research-686c488a63d609d2f20e2b1e.
| null |
https://arxiv.org/abs/2507.06457v1
|
https://arxiv.org/pdf/2507.06457v1.pdf
| null |
[
"Dustin Wang",
"Rui-Jie Zhu",
"Steven Abreu",
"Yong Shan",
"Taylor Kergan",
"Yuqi Pan",
"Yuhong Chou",
"Zheng Li",
"Ge Zhang",
"Wenhao Huang",
"Jason Eshraghian"
] |
[
"Benchmarking",
"Language Modeling",
"Language Modelling"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adaptive-termination-for-multi-round-parallel
|
2507.06829
| null | null |
Adaptive Termination for Multi-round Parallel Reasoning: An Universal Semantic Entropy-Guided Framework
|
Recent advances in large language models (LLMs) have accelerated progress toward artificial general intelligence, with inference-time scaling emerging as a key technique. Contemporary approaches leverage either sequential reasoning (iteratively extending chains of thought) or parallel reasoning (generating multiple solutions simultaneously) to scale inference. However, both paradigms face fundamental limitations: sequential scaling typically relies on arbitrary token budgets for termination, leading to inefficiency or premature cutoff; while parallel scaling often lacks coordination among parallel branches and requires intrusive fine-tuning to perform effectively. In light of these challenges, we aim to design a flexible test-time collaborative inference framework that exploits the complementary strengths of both sequential and parallel reasoning paradigms. Towards this goal, the core challenge lies in developing an efficient and accurate intrinsic quality metric to assess model responses during collaborative inference, enabling dynamic control and early termination of the reasoning trace. To address this challenge, we introduce semantic entropy (SE), which quantifies the semantic diversity of parallel model responses and serves as a robust indicator of reasoning quality due to its strong negative correlation with accuracy...
| null |
https://arxiv.org/abs/2507.06829v1
|
https://arxiv.org/pdf/2507.06829v1.pdf
| null |
[
"Zenan Xu",
"Zexuan Qiu",
"Guanhua Huang",
"Kun Li",
"Siheng Li",
"Chenchen Zhang",
"Kejiao Li",
"Qi Yi",
"Yuhao Jiang",
"Bo Zhou",
"Fengzong Lian",
"Zhanhui Kang"
] |
[
"Collaborative Inference"
] | 2025-07-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/spindlekv-a-novel-kv-cache-reduction-method
|
2507.06517
| null | null |
SpindleKV: A Novel KV Cache Reduction Method Balancing Both Shallow and Deep Layers
|
Large Language Models (LLMs) have achieved impressive accomplishments in recent years. However, the increasing memory consumption of the KV cache has posed a significant challenge to the inference system. Eviction methods have revealed the inherent redundancy within the KV cache, demonstrating its potential for reduction, particularly in deeper layers. However, KV cache reduction for shallower layers has been found to be insufficient. We observe that the KV cache exhibits a high degree of similarity. Based on this observation, we propose a novel KV cache reduction method, SpindleKV, which balances both shallow and deep layers. For deep layers, we employ an attention weight based eviction method, while for shallow layers, we apply a codebook based replacement approach which is learnt by similarity and merging policy. Moreover, SpindleKV addresses the Grouped-Query Attention (GQA) dilemma faced by other attention based eviction methods. Experiments on two common benchmarks with three different LLMs showed that SpindleKV obtained a better KV cache reduction effect compared to baseline methods, while preserving similar or even better model performance.
| null |
https://arxiv.org/abs/2507.06517v1
|
https://arxiv.org/pdf/2507.06517v1.pdf
| null |
[
"Zicong Tang",
"Shi Luohe",
"Zuchao Li",
"Baoyuan Qi",
"Guoming Liu",
"Lefei Zhang",
"Ping Wang"
] |
[] | 2025-07-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mquant-unleashing-the-inference-potential-of
|
2502.00425
| null | null |
MQuant: Unleashing the Inference Potential of Multimodal Large Language Models via Full Static Quantization
|
Multimodal large language models (MLLMs) have garnered widespread attention due to their ability to understand multimodal input. However, their large parameter sizes and substantial computational demands severely hinder their practical deployment and application. While quantization is an effective way to reduce model size and inference latency, its application to MLLMs remains underexplored. In this paper, we propose MQuant, a post-training quantization (PTQ) framework designed to tackle the unique challenges of multimodal large language models (MLLMs). Conventional quantization often struggles with MLLMs because of (a) high inference latency from large visual token counts, (b) distributional disparities between visual and textual tokens, and (c) extreme outliers introduced by Hadamard-based transformations. To address these issues, MQuant introduces: Modality-Specific Static Quantization (MSQ), assigning distinct static scales for visual vs. textual tokens; Attention-Invariant Flexible Switching (AIFS), reordering tokens to preserve causal attention while eliminating expensive token-wise scale computations; Rotation Magnitude Suppression (RMS), mitigating weight outliers arising from online Hadamard rotations. On five mainstream MLLMs (including Qwen-VL, MiniCPM-V, CogVLM2), MQuant under W4A8 achieves near-floating-point accuracy (<1% degradation) while reducing inference latency by up to 30%, significantly outperforming existing PTQ baselines. Our MQuant effectively bridges the gap for efficient and accurate MLLM inference in resource-constrained devices. Code will be released.
| null |
https://arxiv.org/abs/2502.00425v1
|
https://arxiv.org/pdf/2502.00425v1.pdf
| null |
[
"Jiangyong Yu",
"Sifan Zhou",
"Dawei Yang",
"Shuo Wang",
"Shuoyu Li",
"Xing Hu",
"Chen Xu",
"Zukang Xu",
"Changyong Shu",
"Zhihang Yuan"
] |
[
"Quantization"
] | 2025-02-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hvi-cidnet-beyond-extreme-darkness-for-low
|
2507.06814
| null | null |
HVI-CIDNet+: Beyond Extreme Darkness for Low-Light Image Enhancement
|
Low-Light Image Enhancement (LLIE) aims to restore vivid content and details from corrupted low-light images. However, existing standard RGB (sRGB) color space-based LLIE methods often produce color bias and brightness artifacts due to the inherent high color sensitivity. While Hue, Saturation, and Value (HSV) color space can decouple brightness and color, it introduces significant red and black noise artifacts. To address this problem, we propose a new color space for LLIE, namely Horizontal/Vertical-Intensity (HVI), defined by the HV color map and learnable intensity. The HV color map enforces small distances for the red coordinates to remove red noise artifacts, while the learnable intensity compresses the low-light regions to remove black noise artifacts. Additionally, we introduce the Color and Intensity Decoupling Network+ (HVI-CIDNet+), built upon the HVI color space, to restore damaged content and mitigate color distortion in extremely dark regions. Specifically, HVI-CIDNet+ leverages abundant contextual and degraded knowledge extracted from low-light images using pre-trained vision-language models, integrated via a novel Prior-guided Attention Block (PAB). Within the PAB, latent semantic priors can promote content restoration, while degraded representations guide precise color correction, both particularly in extremely dark regions through the meticulously designed cross-attention fusion mechanism. Furthermore, we construct a Region Refinement Block that employs convolution for information-rich regions and self-attention for information-scarce regions, ensuring accurate brightness adjustments. Comprehensive results from benchmark experiments demonstrate that the proposed HVI-CIDNet+ outperforms the state-of-the-art methods on 10 datasets.
| null |
https://arxiv.org/abs/2507.06814v1
|
https://arxiv.org/pdf/2507.06814v1.pdf
| null |
[
"Qingsen Yan",
"Kangbiao Shi",
"Yixu Feng",
"Tao Hu",
"Peng Wu",
"Guansong Pang",
"Yanning Zhang"
] |
[
"Image Enhancement",
"Low-Light Image Enhancement"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/opendpdv2-a-unified-learning-and-optimization
|
2507.06849
| null | null |
OpenDPDv2: A Unified Learning and Optimization Framework for Neural Network Digital Predistortion
|
Neural network (NN)-based Digital Predistortion (DPD) stands out in improving signal quality in wideband radio frequency (RF) power amplifiers (PAs) employing complex modulation. However, NN DPDs usually rely on a large number of parameters for effective linearization and can significantly contribute to the energy consumption of the digital back-end in RF systems. This paper presents OpenDPDv2, a unified framework for PA modeling, DPD learning, and model optimization to reduce power consumption while maintaining high linearization performance. The optimization techniques feature a novel DPD algorithm, TRes-DeltaGRU, alongside two energy-efficient methods. The top-performing 32-bit floating-point (FP32) TRes-DeltaGRU-DPD model achieves an Adjacent Channel Power Ratio (ACPR) of -59.4 dBc and Error Vector Magnitude (EVM) of -42.1 dBc. By exploiting fixed-point quantization and dynamic temporal sparsity of input signals and hidden neurons, the inference energy of our model can be reduced by 4.5X while still maintaining -50.3 dBc ACPR and -35.2 dB EVM with 56% temporal sparsity. This was evaluated using a TM3.1a 200 MHz bandwidth 256-QAM OFDM signal applied to a 3.5 GHz GaN Doherty RF PA. OpenDPDv2 code, datasets, and documentation are publicly accessible at: https://github.com/lab-emi/OpenDPD.
|
However, NN DPDs usually rely on a large number of parameters for effective linearization and can significantly contribute to the energy consumption of the digital back-end in RF systems.
|
https://arxiv.org/abs/2507.06849v1
|
https://arxiv.org/pdf/2507.06849v1.pdf
| null |
[
"Yizhuo Wu",
"Ang Li",
"Chang Gao"
] |
[
"Model Optimization",
"Quantization"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Extreme Value Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "EVM",
"source_title": "The Extreme Value Machine",
"source_url": "http://arxiv.org/abs/1506.06112v4"
}
] |
https://paperswithcode.com/paper/usvtrack-usv-based-4d-radar-camera-tracking
|
2506.18737
| null | null |
USVTrack: USV-Based 4D Radar-Camera Tracking Dataset for Autonomous Driving in Inland Waterways
|
Object tracking in inland waterways plays a crucial role in safe and cost-effective applications, including waterborne transportation, sightseeing tours, environmental monitoring and surface rescue. Our Unmanned Surface Vehicle (USV), equipped with a 4D radar, a monocular camera, a GPS, and an IMU, delivers robust tracking capabilities in complex waterborne environments. By leveraging these sensors, our USV collected comprehensive object tracking data, which we present as USVTrack, the first 4D radar-camera tracking dataset tailored for autonomous driving in new generation waterborne transportation systems. Our USVTrack dataset presents rich scenarios, featuring diverse waterways, varying times of day, and multiple weather and lighting conditions. Moreover, we present a simple but effective radar-camera matching method, termed RCM, which can be plugged into popular two-stage association trackers. Experimental results utilizing RCM demonstrate the effectiveness of the radar-camera matching in improving object tracking accuracy and reliability for autonomous driving in waterborne environments. The USVTrack dataset is public on https://usvtrack.github.io.
| null |
https://arxiv.org/abs/2506.18737v1
|
https://arxiv.org/pdf/2506.18737v1.pdf
| null |
[
"Shanliang Yao",
"Runwei Guan",
"Yi Ni",
"Sen Xu",
"Yong Yue",
"Xiaohui Zhu",
"Ryan Wen Liu"
] |
[
"Autonomous Driving",
"Object",
"Object Tracking"
] | 2025-06-23T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Greedy Policy Search** (GPS) is a simple algorithm that learns a policy for test-time data augmentation based on the predictive performance on a validation set. GPS starts with an empty policy and builds it in an iterative fashion. Each step selects a sub-policy that provides the largest improvement in calibrated log-likelihood of ensemble predictions and adds it to the current policy.",
"full_name": "Greedy Policy Search",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Data Augmentation** refers to a class of methods that augment an image dataset to increase the effective size of the training set, or as a form of regularization to help the network learn more effective representations.",
"name": "Image Data Augmentation",
"parent": null
},
"name": "GPS",
"source_title": "Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation",
"source_url": "https://arxiv.org/abs/2002.09103v2"
}
] |
https://paperswithcode.com/paper/ai-research-agents-for-machine-learning
|
2507.02554
| null | null |
AI Research Agents for Machine Learning: Search, Exploration, and Generalization in MLE-bench
|
AI research agents are demonstrating great potential to accelerate scientific progress by automating the design, implementation, and training of machine learning models. We focus on methods for improving agents' performance on MLE-bench, a challenging benchmark where agents compete in Kaggle competitions to solve real-world machine learning problems. We formalize AI research agents as search policies that navigate a space of candidate solutions, iteratively modifying them using operators. By designing and systematically varying different operator sets and search policies (Greedy, MCTS, Evolutionary), we show that their interplay is critical for achieving high performance. Our best pairing of search strategy and operator set achieves a state-of-the-art result on MLE-bench lite, increasing the success rate of achieving a Kaggle medal from 39.6% to 47.7%. Our investigation underscores the importance of jointly considering the search strategy, operator design, and evaluation methodology in advancing automated machine learning.
| null |
https://arxiv.org/abs/2507.02554v1
|
https://arxiv.org/pdf/2507.02554v1.pdf
| null |
[
"Edan Toledo",
"Karen Hambardzumyan",
"Martin Josifoski",
"Rishi Hazra",
"Nicolas Baldwin",
"Alexis Audran-Reiss",
"Michael Kuchnik",
"Despoina Magka",
"Minqi Jiang",
"Alisia Maria Lupidi",
"Andrei Lupu",
"Roberta Raileanu",
"Kelvin Niu",
"Tatiana Shavrina",
"Jean-Christophe Gagnon-Audet",
"Michael Shvartsman",
"Shagun Sodhani",
"Alexander H. Miller",
"Abhishek Charnalia",
"Derek Dunfield",
"Carole-Jean Wu",
"Pontus Stenetorp",
"Nicola Cancedda",
"Jakob Nicolaus Foerster",
"Yoram Bachrach"
] |
[
"Navigate"
] | 2025-07-03T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method in which the weight mask is periodically updated at random",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/rethinking-verification-for-llm-code
|
2507.06920
| null | null |
Rethinking Verification for LLM Code Generation: From Generation to Testing
|
Large language models (LLMs) have recently achieved notable success in code-generation benchmarks such as HumanEval and LiveCodeBench. However, a detailed examination reveals that these evaluation suites often comprise only a limited number of homogeneous test cases, resulting in subtle faults going undetected. This not only artificially inflates measured performance but also compromises accurate reward estimation in reinforcement learning frameworks utilizing verifiable rewards (RLVR). To address these critical shortcomings, we systematically investigate the test-case generation (TCG) task by proposing multi-dimensional metrics designed to rigorously quantify test-suite thoroughness. Furthermore, we introduce a human-LLM collaborative method (SAGA), leveraging human programming expertise with LLM reasoning capability, aimed at significantly enhancing both the coverage and the quality of generated test cases. In addition, we develop a TCGBench to facilitate the study of the TCG task. Experiments show that SAGA achieves a detection rate of 90.62% and a verifier accuracy of 32.58% on TCGBench. The Verifier Accuracy (Verifier Acc) of the code generation evaluation benchmark synthesized by SAGA is 10.78% higher than that of LiveCodeBench-v6. These results demonstrate the effectiveness of our proposed method. We hope this work contributes to building a scalable foundation for reliable LLM code evaluation, further advancing RLVR in code generation, and paving the way for automated adversarial test synthesis and adaptive benchmark integration.
| null |
https://arxiv.org/abs/2507.06920v2
|
https://arxiv.org/pdf/2507.06920v2.pdf
| null |
[
"Zihan Ma",
"Taolin Zhang",
"Maosong Cao",
"Junnan Liu",
"Wenwei Zhang",
"Minnan Luo",
"Songyang Zhang",
"Kai Chen"
] |
[
"Code Generation",
"HumanEval"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "SAGA is a method in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem.",
"full_name": "SAGA",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Optimization",
"parent": null
},
"name": "SAGA",
"source_title": "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives",
"source_url": "http://arxiv.org/abs/1407.0202v3"
}
] |
https://paperswithcode.com/paper/disappearing-ink-obfuscation-breaks-n-gram
|
2507.05512
| null | null |
Disappearing Ink: Obfuscation Breaks N-gram Code Watermarks in Theory and Practice
|
Distinguishing AI-generated code from human-written code is becoming crucial for tasks such as authorship attribution, content tracking, and misuse detection. Based on this, N-gram-based watermarking schemes have emerged as prominent, which inject secret watermarks to be detected during the generation. However, their robustness in code content remains insufficiently evaluated. Most claims rely solely on defenses against simple code transformations or code optimizations as a simulation of attack, creating a questionable sense of robustness. In contrast, more sophisticated schemes already exist in the software engineering world, e.g., code obfuscation, which significantly alters code while preserving functionality. Although obfuscation is commonly used to protect intellectual property or evade software scanners, the robustness of code watermarking techniques against such transformations remains largely unexplored. In this work, we formally model code obfuscation and prove the impossibility of N-gram-based watermarking's robustness with only one intuitive and experimentally verified assumption, distribution consistency, satisfied. Given the original false positive rate of the watermarking detection, the rate at which the detector fails on the watermarked code after obfuscation will increase to 1 - fpr. The experiments have been performed on three SOTA watermarking schemes, two LLMs, two programming languages, four code benchmarks, and four obfuscators. Among them, all watermarking detectors show coin-flipping detection abilities on obfuscated code (AUROC tightly surrounds 0.5). Among all models, watermarking schemes, and datasets, both programming languages have obfuscators that can achieve attack effects with no detection AUROC higher than 0.6 after the attack. Based on the theoretical and practical observations, we also propose a potential path toward robust code watermarking.
| null |
https://arxiv.org/abs/2507.05512v1
|
https://arxiv.org/pdf/2507.05512v1.pdf
| null |
[
"Gehao Zhang",
"Eugene Bagdasarian",
"Juan Zhai",
"Shiqing Ma"
] |
[
"Authorship Attribution"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/estimating-correctness-without-oracles-in-llm
|
2507.00057
| null | null |
Estimating Correctness Without Oracles in LLM-Based Code Generation
|
Generating code from natural language specifications is one of the most successful applications of Large Language Models (LLMs). Yet, they hallucinate: LLMs produce outputs that may be grammatically correct but are factually incorrect. Without an existing, correct implementation (i.e., an oracle), can we quantify how likely the generated program is correct? In this paper, we propose a measure of incorrectness, called incoherence, that can be estimated efficiently in the absence of an oracle and provides a lower bound on the error, i.e., the probability that the LLM-generated program for that specification is incorrect. Our experiments demonstrate extraordinary effectiveness. For the average code generation task, our incoherence-based methodology can automatically identify about two-thirds of incorrect programs without reports of false positives. In fact, an oracle-based evaluation of LLMs can be reliably replaced by an incoherence-based evaluation. In particular, we find a very strong agreement between the ranking of LLMs by the number of programs deemed correct via an oracle (pass@1) and the ranking of LLMs by the number of programs deemed correct via our incoherence.
| null |
https://arxiv.org/abs/2507.00057v1
|
https://arxiv.org/pdf/2507.00057v1.pdf
| null |
[
"Thomas Valentin",
"Ardi Madadi",
"Gaetano Sapia",
"Marcel Böhme"
] |
[
"Code Generation"
] | 2025-06-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/artifactsbench-bridging-the-visual
|
2507.04952
| null | null |
ArtifactsBench: Bridging the Visual-Interactive Gap in LLM Code Generation Evaluation
|
The generative capabilities of Large Language Models (LLMs) are rapidly expanding from static code to dynamic, interactive visual artifacts. This progress is bottlenecked by a critical evaluation gap: established benchmarks focus on algorithmic correctness and are blind to the visual fidelity and interactive integrity that define modern user experiences. To bridge this gap, we introduce ArtifactsBench, a new benchmark and paradigm for the automated, multimodal evaluation of visual code generation. Our framework programmatically renders each generated artifact and captures its dynamic behavior through temporal screenshots. This visual evidence, alongside the source code, is then assessed by a Multimodal LLM (MLLM)-as-Judge, which is rigorously guided by a fine-grained, per-task checklist to ensure holistic and reproducible scoring. We construct a new benchmark of 1,825 diverse tasks and evaluate over 30 leading LLMs. Our automated evaluation achieves a striking 94.4% ranking consistency with WebDev Arena, the gold-standard for human preference in web development, and over 90% pairwise agreement with human experts. This establishes ArtifactsBench as the first framework to reliably automate the assessment of human-perceived quality at scale. Our analysis provides a high-resolution map of the current SOTA, revealing that generalist models often outperform domain-specific ones. We open-source ArtifactsBench, including the benchmark, evaluation harness, and baseline results at https://artifactsbenchmark.github.io/, to provide the community with a scalable and accurate tool to accelerate the development of user-centric generative models.
| null |
https://arxiv.org/abs/2507.04952v1
|
https://arxiv.org/pdf/2507.04952v1.pdf
| null |
[
"Chenchen Zhang",
"Yuhang Li",
"Can Xu",
"Jiaheng Liu",
"Ao Liu",
"Shihui Hu",
"Dengpeng Wu",
"Guanhua Huang",
"Kejiao Li",
"Qi Yi",
"Ruibin Xiong",
"Haotian Zhu",
"Yuanxing Zhang",
"Yuhao Jiang",
"Yue Zhang",
"Zenan Xu",
"Bohui Zhai",
"Guoxiang He",
"Hebin Li",
"Jie Zhao",
"Le Zhang",
"Lingyun Tan",
"Pengyu Guo",
"Xianshu Pang",
"Yang Ruan",
"Zhifeng Zhang",
"Zhonghu Wang",
"Ziyan Xu",
"Zuopu Yin",
"Wiggin Zhou",
"Chayse Zhou",
"Fengzong Lian"
] |
[
"Code Generation"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/corecodebench-a-configurable-multi-scenario
|
2507.05281
| null | null |
CoreCodeBench: A Configurable Multi-Scenario Repository-Level Benchmark
|
As Large Language Models (LLMs) demonstrate increasingly sophisticated code processing capabilities, evaluating their performance on engineering-level code remains challenging. Existing repository-level benchmarks primarily focus on single scenarios, such as code generation or bug fixing, without adequately capturing the diversity and complexity of real-world software or project engineering workflows. Furthermore, these benchmarks suffer from limited controllability in question positioning and reliability issues in their generated test cases. To address these limitations, we present CorePipe, a fully automated pipeline that converts repositories into comprehensive test cases, and introduce CoreCodeBench, a configurable multi-scenario repository-level benchmark. To simulate real engineering scenarios, CorePipe generates three types of atomic questions (Development, BugFix, and Test-Driven Development) specifically targeting core code segments. These atomic questions are further combined into three types of composite questions, with difficulty levels flexibly adjusted through hyperparameter tuning. CoreCodeBench provides a comprehensive and extensive repository-level benchmark to investigate the applicability of LLMs in real-world engineering projects. Experiments with 16 LLMs across diverse scenarios reveal varying capabilities and offer multi-dimensional insights into LLM performance in engineering contexts. The code for CorePipe is available at https://github.com/AGI-Eval-Official/CoreCodeBench, and the data for CoreCodeBench can be accessed at https://huggingface.co/collections/tubehhh/corecodebench-68256d2faabf4b1610a08caa.
| null |
https://arxiv.org/abs/2507.05281v1
|
https://arxiv.org/pdf/2507.05281v1.pdf
| null |
[
"Lingyue Fu",
"Hao Guan",
"Bolun Zhang",
"Haowei Yuan",
"Yaoming Zhu",
"Jun Xu",
"ZongYu Wang",
"Lin Qiu",
"Xunliang Cai",
"Xuezhi Cao",
"Weiwen Liu",
"Weinan Zhang",
"Yong Yu"
] |
[
"Bug fixing",
"Code Generation",
"test driven development"
] | 2025-07-04T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/lira-inferring-segmentation-in-large-multi
|
2507.06272
| null | null |
LIRA: Inferring Segmentation in Large Multi-modal Models with Local Interleaved Region Assistance
|
While large multi-modal models (LMMs) demonstrate promising capabilities in segmentation and comprehension, they still struggle with two limitations: inaccurate segmentation and hallucinated comprehension. These challenges stem primarily from constraints in weak visual comprehension and a lack of fine-grained perception. To alleviate these limitations, we propose LIRA, a framework that capitalizes on the complementary relationship between visual comprehension and segmentation via two key components: (1) Semantic-Enhanced Feature Extractor (SEFE) improves object attribute inference by fusing semantic and pixel-level features, leading to more accurate segmentation; (2) Interleaved Local Visual Coupling (ILVC) autoregressively generates local descriptions after extracting local features based on segmentation masks, offering fine-grained supervision to mitigate hallucinations. Furthermore, we find that the precision of object segmentation is positively correlated with the latent related semantics of the <seg> token. To quantify this relationship and the model's potential semantic inferring ability, we introduce the Attributes Evaluation (AttrEval) dataset. Our experiments show that LIRA achieves state-of-the-art performance in both segmentation and comprehension tasks. Code will be available at https://github.com/echo840/LIRA.
| null |
https://arxiv.org/abs/2507.06272v2
|
https://arxiv.org/pdf/2507.06272v2.pdf
| null |
[
"Zhang Li",
"Biao Yang",
"Qiang Liu",
"Zhiyin Ma",
"Shuo Zhang",
"Liang Yin",
"Linger Deng",
"Yabo Sun",
"Yuliang Liu",
"Xiang Bai"
] |
[
"Attribute",
"Segmentation",
"Semantic Segmentation"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hnoseg-xs-extremely-small-hartley-neural
|
2507.08205
| null | null |
HNOSeg-XS: Extremely Small Hartley Neural Operator for Efficient and Resolution-Robust 3D Image Segmentation
|
In medical image segmentation, convolutional neural networks (CNNs) and transformers are dominant. For CNNs, given the local receptive fields of convolutional layers, long-range spatial correlations are captured through consecutive convolutions and pooling. However, as the computational cost and memory footprint can be prohibitively large, 3D models can only afford fewer layers than 2D models with reduced receptive fields and abstract levels. For transformers, although long-range correlations can be captured by multi-head attention, its quadratic complexity with respect to input size is computationally demanding. Therefore, either model may require input size reduction to allow more filters and layers for better segmentation. Nevertheless, given their discrete nature, models trained with patch-wise training or image downsampling may produce suboptimal results when applied on higher resolutions. To address this issue, here we propose the resolution-robust HNOSeg-XS architecture. We model image segmentation by learnable partial differential equations through the Fourier neural operator which has the zero-shot super-resolution property. By replacing the Fourier transform by the Hartley transform and reformulating the problem in the frequency domain, we created the HNOSeg-XS model, which is resolution robust, fast, memory efficient, and extremely parameter efficient. When tested on the BraTS'23, KiTS'23, and MVSeg'23 datasets with a Tesla V100 GPU, HNOSeg-XS showed its superior resolution robustness with fewer than 34.7k model parameters. It also achieved the overall best inference time (< 0.24 s) and memory efficiency (< 1.8 GiB) compared to the tested CNN and transformer models.
|
For transformers, although long-range correlations can be captured by multi-head attention, its quadratic complexity with respect to input size is computationally demanding.
|
https://arxiv.org/abs/2507.08205v1
|
https://arxiv.org/pdf/2507.08205v1.pdf
| null |
[
"Ken C. L. Wong",
"Hongzhi Wang",
"Tanveer Syeda-Mahmood"
] |
[
"GPU",
"Image Segmentation",
"Medical Image Segmentation",
"Semantic Segmentation",
"Super-Resolution"
] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-ud-newscrawl-treebank-reflections-and
|
2505.20428
| null | null |
The UD-NewsCrawl Treebank: Reflections and Challenges from a Large-scale Tagalog Syntactic Annotation Project
|
This paper presents UD-NewsCrawl, the largest Tagalog treebank to date, containing 15.6k trees manually annotated according to the Universal Dependencies framework. We detail our treebank development process, including data collection, pre-processing, manual annotation, and quality assurance procedures. We provide baseline evaluations using multiple transformer-based models to assess the performance of state-of-the-art dependency parsers on Tagalog. We also highlight challenges in the syntactic analysis of Tagalog given its distinctive grammatical properties, and discuss its implications for the annotation of this treebank. We anticipate that UD-NewsCrawl and our baseline model implementations will serve as valuable resources for advancing computational linguistics research in underrepresented languages like Tagalog.
|
This paper presents UD-NewsCrawl, the largest Tagalog treebank to date, containing 15.6k trees manually annotated according to the Universal Dependencies framework.
|
https://arxiv.org/abs/2505.20428v1
|
https://arxiv.org/pdf/2505.20428v1.pdf
| null |
[
"Angelina A. Aquino",
"Lester James V. Miranda",
"Elsie Marie T. Or"
] |
[] | 2025-05-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gr-llms-recent-advances-in-generative
|
2507.06507
| null | null |
GR-LLMs: Recent Advances in Generative Recommendation Based on Large Language Models
|
In the past year, Generative Recommendations (GRs) have undergone substantial advancements, especially in leveraging the powerful sequence modeling and reasoning capabilities of Large Language Models (LLMs) to enhance overall recommendation performance. LLM-based GRs are forming a new paradigm that is distinctly different from discriminative recommendations, showing strong potential to replace traditional recommendation systems heavily dependent on complex hand-crafted features. In this paper, we provide a comprehensive survey aimed at facilitating further research of LLM-based GRs. Initially, we outline the general preliminaries and application cases of LLM-based GRs. Subsequently, we introduce the main considerations when LLM-based GRs are applied in real industrial scenarios. Finally, we explore promising directions for LLM-based GRs. We hope that this survey contributes to the ongoing advancement of the GR domain.
| null |
https://arxiv.org/abs/2507.06507v2
|
https://arxiv.org/pdf/2507.06507v2.pdf
| null |
[
"Zhen Yang",
"Haitao Lin",
"Jiawei Xue",
"Ziji Zhang"
] |
[
"Recommendation Systems",
"Survey"
] | 2025-07-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/boosting-parameter-efficiency-in-llm-based
|
2507.07064
| null | null |
Boosting Parameter Efficiency in LLM-Based Recommendation through Sophisticated Pruning
|
LLM-based recommender systems have made significant progress; however, the deployment cost associated with the large parameter volume of LLMs still hinders their real-world applications. This work explores parameter pruning to improve parameter efficiency while maintaining recommendation quality, thereby enabling easier deployment. Unlike existing approaches that focus primarily on inter-layer redundancy, we uncover intra-layer redundancy within components such as self-attention and MLP modules. Building on this analysis, we propose a more fine-grained pruning approach that integrates both intra-layer and layer-wise pruning. Specifically, we introduce a three-stage pruning strategy that progressively prunes parameters at different levels and parts of the model, moving from intra-layer to layer-wise pruning, or from width to depth. Each stage also includes a performance restoration step using distillation techniques, helping to strike a balance between performance and parameter efficiency. Empirical results demonstrate the effectiveness of our approach: across three datasets, our models achieve an average of 88% of the original model's performance while pruning more than 95% of the non-embedding parameters. This underscores the potential of our method to significantly reduce resource requirements without greatly compromising recommendation quality. Our code will be available at: https://github.com/zheng-sl/PruneRec
| null |
https://arxiv.org/abs/2507.07064v1
|
https://arxiv.org/pdf/2507.07064v1.pdf
| null |
[
"Shanle Zheng",
"Keqin Bao",
"Jizhi Zhang",
"Yang Zhang",
"Fuli Feng",
"Xiangnan He"
] |
[
"Recommendation Systems"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/a-language-driven-framework-for-improving
|
2507.07251
| null | null |
A Language-Driven Framework for Improving Personalized Recommendations: Merging LLMs with Traditional Algorithms
|
Traditional recommendation algorithms are not designed to provide personalized recommendations based on user preferences provided through text, e.g., "I enjoy light-hearted comedies with a lot of humor". Large Language Models (LLMs) have emerged as one of the most promising tools for natural language processing in recent years. This research proposes a novel framework that mimics how a close friend would recommend items based on their knowledge of an individual's tastes. We leverage LLMs to enhance movie recommendation systems by refining traditional algorithm outputs and integrating them with language-based user preference inputs. We employ Singular Value Decomposition (SVD) or SVD++ algorithms to generate initial movie recommendations, implemented using the Surprise Python library and trained on the MovieLens-Latest-Small dataset. We compare the performance of the base algorithms with our LLM-enhanced versions using leave-one-out validation hit rates and cumulative hit rates. Additionally, to compare the performance of our framework against the current state-of-the-art recommendation systems, we use rating and ranking metrics with an item-based stratified 0.75 train, 0.25 test split. Our framework can generate preference profiles automatically based on users' favorite movies or allow manual preference specification for more personalized results. Using an automated approach, our framework overwhelmingly surpassed SVD and SVD++ on every evaluation metric used (e.g., improvements of up to ~6x in cumulative hit rate, ~3.7x in NDCG, etc.), albeit at the cost of a slight increase in computational overhead.
| null |
https://arxiv.org/abs/2507.07251v1
|
https://arxiv.org/pdf/2507.07251v1.pdf
| null |
[
"Aaron Goldstein",
"Ayan Dutta"
] |
[
"Movie Recommendation",
"Recommendation Systems"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
}
] |
https://paperswithcode.com/paper/llm-driven-dual-level-multi-interest-modeling
|
2507.10917
| null | null |
LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation
|
Recently, much effort has been devoted to modeling users' multi-interests based on their behaviors or auxiliary signals. However, existing methods often rely on heuristic assumptions, e.g., co-occurring items indicate the same interest of users, failing to capture user multi-interests aligning with real-world scenarios. While large language models (LLMs) show significant potential for multi-interest analysis due to their extensive knowledge and powerful reasoning capabilities, two key challenges remain. First, the granularity of LLM-driven multi-interests is agnostic, possibly leading to overly fine or coarse interest grouping. Second, individual user analysis provides limited insights due to the data sparsity issue. In this paper, we propose an LLM-driven dual-level multi-interest modeling framework for more effective recommendation. At the user-individual level, we exploit LLMs to flexibly allocate items engaged by users into different semantic clusters, indicating their diverse and distinct interests. To alleviate the agnostic generation of LLMs, we adaptively assign these semantic clusters to users' collaborative multi-interests learned from global user-item interactions, allowing the granularity to be automatically adjusted according to the user's behaviors using an alignment module. To alleviate the limited insights derived from individual users' behaviors, at the user-crowd level, we propose aggregating user cliques into synthesized users with rich behaviors for more comprehensive LLM-driven multi-interest analysis. We formulate a max covering problem to ensure the compactness and representativeness of synthesized users' behaviors, and then conduct contrastive learning based on their LLM-driven multi-interests to disentangle item representations among different interests. Experiments on real-world datasets show the superiority of our approach against state-of-the-art methods.
| null |
https://arxiv.org/abs/2507.10917v2
|
https://arxiv.org/pdf/2507.10917v2.pdf
| null |
[
"Ziyan Wang",
"Yingpeng Du",
"Zhu Sun",
"Jieyi Bi",
"Haoyan Chua",
"Tianjun Wei",
"Jie Zhang"
] |
[
"Contrastive Learning"
] | 2025-07-15T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/modality-independent-teachers-meet-weakly-1
|
2305.17343
| null |
p8gTWkFIvx
|
Modality-Independent Teachers Meet Weakly-Supervised Audio-Visual Event Parser
|
Audio-visual learning has been a major pillar of multi-modal machine learning, where the community mostly focused on its modality-aligned setting, i.e., the audio and visual modality are both assumed to signal the prediction target. With the Look, Listen, and Parse dataset (LLP), we investigate the under-explored unaligned setting, where the goal is to recognize audio and visual events in a video with only weak labels observed. Such weak video-level labels only tell what events happen without knowing the modality they are perceived (audio, visual, or both). To enhance learning in this challenging setting, we incorporate large-scale contrastively pre-trained models as the modality teachers. A simple, effective, and generic method, termed Visual-Audio Label Elaboration (VALOR), is innovated to harvest modality labels for the training events. Empirical studies show that the harvested labels significantly improve an attentional baseline by 8.0 in average F-score (Type@AV). Surprisingly, we found that modality-independent teachers outperform their modality-fused counterparts since they are noise-proof from the other potentially unaligned modality. Moreover, our best model achieves the new state-of-the-art on all metrics of LLP by a substantial margin (+5.4 F-score for Type@AV). VALOR is further generalized to Audio-Visual Event Localization and achieves the new state-of-the-art as well. Code is available at: https://github.com/Franklin905/VALOR.
|
Audio-visual learning has been a major pillar of multi-modal machine learning, where the community mostly focused on its modality-aligned setting, i.e., the audio and visual modality are both assumed to signal the prediction target.
|
https://arxiv.org/abs/2305.17343v2
|
https://arxiv.org/pdf/2305.17343v2.pdf
|
NeurIPS 2023 11
|
[
"Yung-Hsuan Lai",
"Yen-Chun Chen",
"Yu-Chiang Frank Wang"
] |
[
"audio-visual event localization",
"audio-visual learning"
] | 2023-05-27T00:00:00 |
https://openreview.net/forum?id=p8gTWkFIvx
|
https://openreview.net/pdf?id=p8gTWkFIvx
|
modality-independent-teachers-meet-weakly
| null |
[] |
https://paperswithcode.com/paper/jarvisart-liberating-human-artistic
|
2506.17612
| null | null |
JarvisArt: Liberating Human Artistic Creativity via an Intelligent Photo Retouching Agent
|
Photo retouching has become integral to contemporary visual storytelling, enabling users to capture aesthetics and express creativity. While professional tools such as Adobe Lightroom offer powerful capabilities, they demand substantial expertise and manual effort. In contrast, existing AI-based solutions provide automation but often suffer from limited adjustability and poor generalization, failing to meet diverse and personalized editing needs. To bridge this gap, we introduce JarvisArt, a multi-modal large language model (MLLM)-driven agent that understands user intent, mimics the reasoning process of professional artists, and intelligently coordinates over 200 retouching tools within Lightroom. JarvisArt undergoes a two-stage training process: an initial Chain-of-Thought supervised fine-tuning to establish basic reasoning and tool-use skills, followed by Group Relative Policy Optimization for Retouching (GRPO-R) to further enhance its decision-making and tool proficiency. We also propose the Agent-to-Lightroom Protocol to facilitate seamless integration with Lightroom. To evaluate performance, we develop MMArt-Bench, a novel benchmark constructed from real-world user edits. JarvisArt demonstrates user-friendly interaction, superior generalization, and fine-grained control over both global and local adjustments, paving a new avenue for intelligent photo retouching. Notably, it outperforms GPT-4o with a 60% improvement in average pixel-level metrics on MMArt-Bench for content fidelity, while maintaining comparable instruction-following capabilities. Project Page: https://jarvisart.vercel.app/.
| null |
https://arxiv.org/abs/2506.17612v1
|
https://arxiv.org/pdf/2506.17612v1.pdf
| null |
[
"Yunlong Lin",
"Zixu Lin",
"Kunjie Lin",
"Jinbin Bai",
"Panwang Pan",
"Chenxin Li",
"Haoyu Chen",
"Zhongdao Wang",
"Xinghao Ding",
"Wenbo Li",
"Shuicheng Yan"
] |
[
"Instruction Following",
"Large Language Model",
"Photo Retouching",
"Visual Storytelling"
] | 2025-06-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/are-vision-foundation-models-ready-for-out-of
|
2507.11569
| null | null |
Are Vision Foundation Models Ready for Out-of-the-Box Medical Image Registration?
|
Foundation models, pre-trained on large image datasets and capable of capturing rich feature representations, have recently shown potential for zero-shot image registration. However, their performance has mostly been tested in the context of rigid or less complex structures, such as the brain or abdominal organs, and it remains unclear whether these models can handle more challenging, deformable anatomy. Breast MRI registration is particularly difficult due to significant anatomical variation between patients, deformation caused by patient positioning, and the presence of thin and complex internal structure of fibroglandular tissue, where accurate alignment is crucial. Whether foundation model-based registration algorithms can address this level of complexity remains an open question. In this study, we provide a comprehensive evaluation of foundation model-based registration algorithms for breast MRI. We assess five pre-trained encoders, including DINO-v2, SAM, MedSAM, SSLSAM, and MedCLIP, across four key breast registration tasks that capture variations in different years and dates, sequences, modalities, and patient disease status (lesion versus no lesion). Our results show that foundation model-based algorithms such as SAM outperform traditional registration baselines for overall breast alignment, especially under large domain shifts, but struggle with capturing fine details of fibroglandular tissue. Interestingly, additional pre-training or fine-tuning on medical or breast-specific images in MedSAM and SSLSAM, does not improve registration performance and may even decrease it in some cases. Further work is needed to understand how domain-specific training influences registration and to explore targeted strategies that improve both global alignment and fine structure accuracy. We also publicly release our code at \href{https://github.com/mazurowski-lab/Foundation-based-reg}{Github}.
| null |
https://arxiv.org/abs/2507.11569v1
|
https://arxiv.org/pdf/2507.11569v1.pdf
| null |
[
"Hanxue Gu",
"Yaqian Chen",
"Nicholas Konz",
"Qihang Li",
"Maciej A. Mazurowski"
] |
[
"Anatomy",
"Image Registration",
"Medical Image Registration"
] | 2025-07-15T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
}
] |
https://paperswithcode.com/paper/regcl-continual-adaptation-of-segment
|
2507.12297
| null | null |
RegCL: Continual Adaptation of Segment Anything Model via Model Merging
|
To address the performance limitations of the Segment Anything Model (SAM) in specific domains, existing works primarily adopt adapter-based one-step adaptation paradigms. However, some of these methods are developed for specific domains, and applying them to other domains may lead to performance degradation. This issue of catastrophic forgetting severely limits the model's scalability. To address this issue, this paper proposes RegCL, a novel non-replay continual learning (CL) framework designed for efficient multi-domain knowledge integration through model merging. Specifically, RegCL incorporates the model merging algorithm into the continual learning paradigm by merging the parameters of SAM's adaptation modules (e.g., LoRA modules) trained on different domains. The merging process is guided by weight optimization, which minimizes prediction discrepancies between the merged model and each of the domain-specific models. RegCL effectively consolidates multi-domain knowledge while maintaining parameter efficiency, i.e., the model size remains constant regardless of the number of tasks, and no historical data storage is required. Experimental results demonstrate that RegCL achieves favorable continual learning performance across multiple downstream datasets, validating its effectiveness in dynamic scenarios.
| null |
https://arxiv.org/abs/2507.12297v1
|
https://arxiv.org/pdf/2507.12297v1.pdf
| null |
[
"Yuan-Chen Shu",
"Zhiwei Lin",
"Yongtao Wang"
] |
[
"Continual Learning",
"model"
] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "ADaptive gradient method with the OPTimal convergence rate",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "ADOPT",
"source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate",
"source_url": "https://arxiv.org/abs/2411.02853v3"
}
] |
https://paperswithcode.com/paper/efficient-calisthenics-skills-classification
|
2507.12292
| null | null |
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation
|
Calisthenics skill classification is the computer vision task of inferring the skill performed by an athlete from images, enabling automatic performance assessment and personalized analytics. Traditional methods for calisthenics skill recognition are based on pose estimation methods to determine the position of skeletal data from images, which is later fed to a classification algorithm to infer the performed skill. Despite the progress in human pose estimation algorithms, they still involve high computational costs, long inference times, and complex setups, which limit the applicability of such approaches in real-time applications or mobile devices. This work proposes a direct approach to calisthenics skill recognition, which leverages depth estimation and athlete patch retrieval to avoid the computationally expensive human pose estimation module. Using Depth Anything V2 for depth estimation and YOLOv10 for athlete localization, we segment the subject from the background rather than relying on traditional pose estimation techniques. This strategy increases efficiency, reduces inference time, and improves classification accuracy. Our approach significantly outperforms skeleton-based methods, achieving 38.3x faster inference with RGB image patches and improved classification accuracy with depth patches (0.837 vs. 0.815). Beyond these performance gains, the modular design of our pipeline allows for flexible replacement of components, enabling future enhancements and adaptation to real-world applications.
| null |
https://arxiv.org/abs/2507.12292v1
|
https://arxiv.org/pdf/2507.12292v1.pdf
| null |
[
"Antonio Finocchiaro",
"Giovanni Maria Farinella",
"Antonino Furnari"
] |
[
"Classification",
"Depth Estimation",
"Pose Estimation"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/samst-a-transformer-framework-based-on-sam
|
2507.11994
| null | null |
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation
|
Public remote sensing datasets often face limitations in universality due to resolution variability and inconsistent land cover category definitions. To harness the vast pool of unlabeled remote sensing data, we propose SAMST, a semi-supervised semantic segmentation method. SAMST leverages the strengths of the Segment Anything Model (SAM) in zero-shot generalization and boundary detection. SAMST iteratively refines pseudo-labels through two main components: supervised model self-training using both labeled and pseudo-labeled data, and a SAM-based Pseudo-label Refiner. The Pseudo-label Refiner comprises three modules: a Threshold Filter Module for preprocessing, a Prompt Generation Module for extracting connected regions and generating prompts for SAM, and a Label Refinement Module for final label stitching. By integrating the generalization power of large models with the training efficiency of small models, SAMST improves pseudo-label accuracy, thereby enhancing overall model performance. Experiments on the Potsdam dataset validate the effectiveness and feasibility of SAMST, demonstrating its potential to address the challenges posed by limited labeled data in remote sensing semantic segmentation.
| null |
https://arxiv.org/abs/2507.11994v1
|
https://arxiv.org/pdf/2507.11994v1.pdf
| null |
[
"Jun Yin",
"Fei Wu",
"Yupeng Ren",
"Jisheng Huang",
"Qiankun Li",
"Heng Jin",
"Jianhai Fu",
"Chanjie Cui"
] |
[
"Boundary Detection",
"Pseudo Label",
"Pseudo Label Filtering",
"Semantic Segmentation",
"Semi-Supervised Semantic Segmentation",
"Zero-shot Generalization"
] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
}
] |
https://paperswithcode.com/paper/landmark-detection-for-medical-images-using-a
|
2507.11551
| null | null |
Landmark Detection for Medical Images using a General-purpose Segmentation Model
|
Radiographic images are a cornerstone of medical diagnostics in orthopaedics, with anatomical landmark detection serving as a crucial intermediate step for information extraction. General-purpose foundational segmentation models, such as SAM (Segment Anything Model), do not support landmark segmentation out of the box and require prompts to function. However, in medical imaging, the prompts for landmarks are highly specific. Since SAM has not been trained to recognize such landmarks, it cannot generate accurate landmark segmentations for diagnostic purposes. Even MedSAM, a medically adapted variant of SAM, has been trained to identify larger anatomical structures, such as organs and their parts, and lacks the fine-grained precision required for orthopaedic pelvic landmarks. To address this limitation, we propose leveraging another general-purpose, non-foundational model: YOLO. YOLO excels in object detection and can provide bounding boxes that serve as input prompts for SAM. While YOLO is efficient at detection, it is significantly outperformed by SAM in segmenting complex structures. In combination, these two models form a reliable pipeline capable of segmenting not only a small pilot set of eight anatomical landmarks but also an expanded set of 72 landmarks and 16 regions with complex outlines, such as the femoral cortical bone and the pelvic inlet. By using YOLO-generated bounding boxes to guide SAM, we trained the hybrid model to accurately segment orthopaedic pelvic radiographs. Our results show that the proposed combination of YOLO and SAM yields excellent performance in detecting anatomical landmarks and intricate outlines in orthopaedic pelvic radiographs.
| null |
https://arxiv.org/abs/2507.11551v1
|
https://arxiv.org/pdf/2507.11551v1.pdf
| null |
[
"Ekaterina Stansfield",
"Jennifer A. Mitterer",
"Abdulrahman Altahhan"
] |
[
"Anatomical Landmark Detection",
"Diagnostic"
] | 2025-07-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
},
{
"code_snippet_url": null,
"description": "A dynamic sparse training method in which the weight mask is periodically updated at random.",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/calibrated-and-robust-foundation-models-for
|
2507.09222
| null | null |
Calibrated and Robust Foundation Models for Vision-Language and Medical Image Tasks Under Distribution Shift
|
Foundation models like CLIP and SAM have transformed computer vision and medical imaging via low-shot transfer learning. However, deployment of these models is hindered by two key challenges: \textit{distribution shift} between training and test data, and \textit{confidence misalignment} that leads to overconfident incorrect predictions. These issues manifest differently in vision-language classification and medical segmentation tasks, yet existing solutions remain domain-specific. We propose \textit{StaRFM}, a unified framework addressing both challenges. It introduces a Fisher information penalty (FIP), extended to 3D medical data via patch-wise regularization, to reduce covariate shift in CLIP and SAM embeddings. Additionally, a confidence misalignment penalty (CMP), reformulated for voxel-level predictions, calibrates uncertainty in segmentation tasks. We theoretically derive PAC-Bayes bounds showing FIP controls generalization via the Fisher-Rao norm, while CMP minimizes calibration error through Brier score optimization. StaRFM shows consistent improvements, such as \texttt{+}3.5\% accuracy and 28\% lower ECE on 19 vision datasets (e.g., ImageNet, Office-Home), 84.7\% DSC and 4.8mm HD95 in medical segmentation (e.g., BraTS, ATLAS), and a 40\% lower cross-domain performance gap compared to prior benchmarking methods. The framework is plug-and-play, requiring minimal architectural changes for seamless integration with foundation models. Code and models will be released at https://anonymous.4open.science/r/StaRFM-C0CD/README.md
| null |
https://arxiv.org/abs/2507.09222v1
|
https://arxiv.org/pdf/2507.09222v1.pdf
| null |
[
"Behraj Khan",
"Tahir Syed"
] |
[
"Benchmarking",
"Transfer Learning"
] | 2025-07-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
},
{
"code_snippet_url": "https://github.com/OpenAI/CLIP",
"description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N \\times N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)",
"full_name": "Contrastive Language-Image Pre-training",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Representations",
"parent": null
},
"name": "CLIP",
"source_title": "Learning Transferable Visual Models From Natural Language Supervision",
"source_url": "https://arxiv.org/abs/2103.00020v1"
}
] |
https://paperswithcode.com/paper/dearli-decoupled-enhancement-of-recognition
|
2507.10118
| null | null |
DEARLi: Decoupled Enhancement of Recognition and Localization for Semi-supervised Panoptic Segmentation
|
Pixel-level annotation is expensive and time-consuming. Semi-supervised segmentation methods address this challenge by learning models on few labeled images alongside a large corpus of unlabeled images. Although foundation models could further account for label scarcity, effective mechanisms for their exploitation remain underexplored. We address this by devising a novel semi-supervised panoptic approach fueled by two dedicated foundation models. We enhance recognition by complementing unsupervised mask-transformer consistency with zero-shot classification of CLIP features. We enhance localization by class-agnostic decoder warm-up with respect to SAM pseudo-labels. The resulting decoupled enhancement of recognition and localization (DEARLi) particularly excels in the most challenging semi-supervised scenarios with large taxonomies and limited labeled data. Moreover, DEARLi outperforms the state of the art in semi-supervised semantic segmentation by a large margin while requiring 8x less GPU memory, in spite of being trained only for the panoptic objective. We observe 29.9 PQ and 38.9 mIoU on ADE20K with only 158 labeled images. The source code is available at https://github.com/helen1c/DEARLi.
| null |
https://arxiv.org/abs/2507.10118v1
|
https://arxiv.org/pdf/2507.10118v1.pdf
| null |
[
"Ivan Martinović",
"Josip Šarić",
"Marin Oršić",
"Matej Kristan",
"Siniša Šegvić"
] |
[
"Decoder",
"GPU",
"Panoptic Segmentation",
"Semantic Segmentation",
"Semi-Supervised Semantic Segmentation",
"zero-shot-classification",
"Zero-Shot Learning"
] | 2025-07-14T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
},
{
"code_snippet_url": "https://github.com/OpenAI/CLIP",
"description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N \\times N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)",
"full_name": "Contrastive Language-Image Pre-training",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Representations",
"parent": null
},
"name": "CLIP",
"source_title": "Learning Transferable Visual Models From Natural Language Supervision",
"source_url": "https://arxiv.org/abs/2103.00020v1"
}
] |
https://paperswithcode.com/paper/test-time-canonicalization-by-foundation
|
2507.10375
| null | null |
Test-Time Canonicalization by Foundation Models for Robust Perception
|
Real-world visual perception requires invariance to diverse transformations, yet current methods rely heavily on specialized architectures or training on predefined augmentations, limiting generalization. We propose FOCAL, a test-time, data-driven framework that achieves robust perception by leveraging internet-scale visual priors from foundation models. By generating and optimizing candidate transformations toward visually typical, "canonical" views, FOCAL enhances robustness without re-training or architectural changes. Our experiments demonstrate improved robustness of CLIP and SAM across challenging transformations, including 2D/3D rotations, illumination shifts (contrast and color), and day-night variations. We also highlight potential applications in active vision. Our approach challenges the assumption that transform-specific training is necessary, instead offering a scalable path to invariance. Our code is available at: https://github.com/sutkarsh/focal.
| null |
https://arxiv.org/abs/2507.10375v1
|
https://arxiv.org/pdf/2507.10375v1.pdf
| null |
[
"Utkarsh Singhal",
"Ryan Feng",
"Stella X. Yu",
"Atul Prakash"
] |
[] | 2025-07-14T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
},
{
"code_snippet_url": "https://github.com/OpenAI/CLIP",
"description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N \\times N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)",
"full_name": "Contrastive Language-Image Pre-training",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Representations",
"parent": null
},
"name": "CLIP",
"source_title": "Learning Transferable Visual Models From Natural Language Supervision",
"source_url": "https://arxiv.org/abs/2103.00020v1"
}
] |
https://paperswithcode.com/paper/inter2former-dynamic-hybrid-attention-for
|
2507.09612
| null | null |
Inter2Former: Dynamic Hybrid Attention for Efficient High-Precision Interactive Segmentation
|
Interactive segmentation (IS) improves annotation efficiency by segmenting target regions from user prompts, with widespread applications in real-world scenarios. Current approaches face a critical trade-off: dense-token methods achieve superior accuracy and detail preservation but suffer from prohibitively slow processing on CPU devices, while the Segment Anything Model (SAM) advances the field with sparse prompt tokens for fast inference but compromises segmentation quality. In this paper, we propose Inter2Former to address this challenge by optimizing computation allocation in dense-token processing, which introduces four key enhancements. First, we propose Dynamic Prompt Embedding (DPE) that adaptively processes only regions of interest while avoiding additional overhead from background tokens. Second, we introduce Dynamic Hybrid Attention (DHA), which leverages previous segmentation masks to route tokens through either full attention (O(N^2)) for boundary regions or our proposed efficient BSQ attention (O(N)) for non-boundary regions. Third, we develop Hybrid Mixture of Experts (HMoE), which applies similar adaptive computation strategies in FFN modules with CPU-optimized parallel processing. Finally, we present Dynamic Local Upsampling (DLU), a reverse operation of DPE, which localizes objects with a lightweight MLP and performs fine-grained upsampling only in detected regions. Experimental results on high-precision IS benchmarks demonstrate that Inter2Former achieves SOTA performance with high efficiency on CPU devices.
| null |
https://arxiv.org/abs/2507.09612v1
|
https://arxiv.org/pdf/2507.09612v1.pdf
| null |
[
"You Huang",
"Lichao Chen",
"Jiayi Ji",
"Liujuan Cao",
"Shengchuan Zhang",
"Rongrong Ji"
] |
[
"CPU",
"Interactive Segmentation",
"Mixture-of-Experts"
] | 2025-07-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/memory-augmented-sam2-for-training-free
|
2507.09577
| null | null |
Memory-Augmented SAM2 for Training-Free Surgical Video Segmentation
|
Surgical video segmentation is a critical task in computer-assisted surgery, essential for enhancing surgical quality and patient outcomes. Recently, the Segment Anything Model 2 (SAM2) framework has demonstrated remarkable advancements in both image and video segmentation. However, the inherent limitations of SAM2's greedy-selection memory design are amplified by the unique properties of surgical videos (rapid instrument movement, frequent occlusion, and complex instrument-tissue interaction), resulting in diminished performance in the segmentation of complex, long videos. To address these challenges, we introduce Memory Augmented (MA)-SAM2, a training-free video object segmentation strategy featuring novel context-aware and occlusion-resilient memory models. MA-SAM2 exhibits strong robustness against occlusions and interactions arising from complex instrument movements while maintaining accuracy in segmenting objects throughout videos. Employing a multi-target, single-loop, one-prompt inference further enhances the efficiency of the tracking process in multi-instrument videos. Without introducing any additional parameters or requiring further training, MA-SAM2 achieved performance improvements of 4.36% and 6.1% over SAM2 on the EndoVis2017 and EndoVis2018 datasets, respectively, demonstrating its potential for practical surgical applications.
| null |
https://arxiv.org/abs/2507.09577v1
|
https://arxiv.org/pdf/2507.09577v1.pdf
| null |
[
"Ming Yin",
"Fu Wang",
"Xujiong Ye",
"Yanda Meng",
"Zeyu Fu"
] |
[
"Segmentation",
"Semantic Segmentation",
"Video Object Segmentation",
"Video Segmentation",
"Video Semantic Segmentation"
] | 2025-07-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/prompt-engineering-in-segment-anything-model
|
2507.09562
| null | null |
Prompt Engineering in Segment Anything Model: Methodologies, Applications, and Emerging Challenges
|
The Segment Anything Model (SAM) has revolutionized image segmentation through its innovative prompt-based approach, yet the critical role of prompt engineering in its success remains underexplored. This paper presents the first comprehensive survey focusing specifically on prompt engineering techniques for SAM and its variants. We systematically organize and analyze the rapidly growing body of work in this emerging field, covering fundamental methodologies, practical applications, and key challenges. Our review reveals how prompt engineering has evolved from simple geometric inputs to sophisticated multimodal approaches, enabling SAM's adaptation across diverse domains including medical imaging and remote sensing. We identify unique challenges in prompt optimization and discuss promising research directions. This survey fills an important gap in the literature by providing a structured framework for understanding and advancing prompt engineering in foundation models for segmentation.
| null |
https://arxiv.org/abs/2507.09562v1
|
https://arxiv.org/pdf/2507.09562v1.pdf
| null |
[
"Yidong Jiang"
] |
[
"Image Segmentation",
"Prompt Engineering",
"Semantic Segmentation",
"Survey"
] | 2025-07-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
}
] |
https://paperswithcode.com/paper/compress-any-segment-anything-model-sam
|
2507.08765
| null | null |
Compress Any Segment Anything Model (SAM)
|
Due to the excellent performance in yielding high-quality, zero-shot segmentation, Segment Anything Model (SAM) and its variants have been widely applied in diverse scenarios such as healthcare and intelligent manufacturing. Therefore, effectively compressing SAMs has become an increasingly pressing practical need. In this study, we propose Birkhoff, a novel data-free compression algorithm for SAM and its variants. Unlike quantization, pruning, distillation, and other compression methods, Birkhoff embodies versatility across model types, agility in deployment, faithfulness to the original model, and compactness in model size. Specifically, Birkhoff introduces a novel compression algorithm: Hyper-Compression, whose core principle is to find a dense trajectory to turn a high-dimensional parameter vector into a low-dimensional scalar. Furthermore, Birkhoff designs a dedicated linear layer operator, HyperLinear, to fuse decompression and matrix multiplication to significantly accelerate inference of the compressed SAMs. Extensive experiments on 18 SAMs in the COCO, LVIS, and SA-1B datasets show that Birkhoff performs consistently and competitively in compression time, compression ratio, post-compression performance, and inference speed. For example, Birkhoff can achieve a compression ratio of 5.17x on SAM2-B, with less than 1% performance drop without using any fine-tuning data. Moreover, the compression is finished within 60 seconds for all models.
| null |
https://arxiv.org/abs/2507.08765v1
|
https://arxiv.org/pdf/2507.08765v1.pdf
| null |
[
"Juntong Fan",
"Zhiwei Hao",
"Jianqiang Shen",
"Shang-Ling Jui",
"Yi Zhang",
"Jing-Xiao Liao",
"Feng-Lei Fan"
] |
[
"model",
"Quantization",
"Zero Shot Segmentation"
] | 2025-07-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
},
{
"code_snippet_url": null,
"description": "A **Linear Layer** is a projection $\\mathbf{XW + b}$.",
"full_name": "Linear Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Linear Layer",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/seg-wild-interactive-segmentation-based-on-3d
|
2507.07395
| null | null |
Seg-Wild: Interactive Segmentation based on 3D Gaussian Splatting for Unconstrained Image Collections
|
Reconstructing and segmenting scenes from unconstrained photo collections obtained from the Internet is a novel but challenging task. Unconstrained photo collections are easier to get than well-captured photo collections. These unconstrained images suffer from inconsistent lighting and transient occlusions, which makes segmentation challenging. Previous segmentation methods cannot address transient occlusions or accurately restore the scene's lighting conditions. Therefore, we propose Seg-Wild, an interactive segmentation method based on 3D Gaussian Splatting for unconstrained image collections, suitable for in-the-wild scenes. We integrate multi-dimensional feature embeddings for each 3D Gaussian and calculate the feature similarity between the feature embeddings and the segmentation target to achieve interactive segmentation in the 3D scene. Additionally, we introduce the Spiky 3D Gaussian Cutter (SGC) to smooth abnormal 3D Gaussians. We project the 3D Gaussians onto a 2D plane and calculate the ratio of 3D Gaussians that need to be cut using the SAM mask. We also designed a benchmark to evaluate segmentation quality in in-the-wild scenes. Experimental results demonstrate that compared to previous methods, Seg-Wild achieves better segmentation results and reconstruction quality. Our code will be available at https://github.com/Sugar0725/Seg-Wild.
| null |
https://arxiv.org/abs/2507.07395v1
|
https://arxiv.org/pdf/2507.07395v1.pdf
| null |
[
"Yongtang Bao",
"Chengjie Tang",
"Yuze Wang",
"Haojie Li"
] |
[
"Interactive Segmentation",
"Segmentation"
] | 2025-07-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
}
] |
https://paperswithcode.com/paper/raps-3d-efficient-interactive-segmentation
|
2507.07730
| null | null |
RAPS-3D: Efficient interactive segmentation for 3D radiological imaging
|
Promptable segmentation, introduced by the Segment Anything Model (SAM), is a promising approach for medical imaging, as it enables clinicians to guide and refine model predictions interactively. However, SAM's architecture is designed for 2D images and does not extend naturally to 3D volumetric data such as CT or MRI scans. Adapting 2D models to 3D typically involves autoregressive strategies, where predictions are propagated slice by slice, resulting in increased inference complexity. Processing large 3D volumes also requires significant computational resources, often leading existing 3D methods to also adopt complex strategies like sliding-window inference to manage memory usage, at the cost of longer inference times and greater implementation complexity. In this paper, we present a simplified 3D promptable segmentation method, inspired by SegVol, designed to reduce inference time and eliminate prompt management complexities associated with sliding windows while achieving state-of-the-art performance.
| null |
https://arxiv.org/abs/2507.07730v1
|
https://arxiv.org/pdf/2507.07730v1.pdf
| null |
[
"Théo Danielou",
"Daniel Tordjman",
"Pierre Manceron",
"Corentin Dancette"
] |
[
"Interactive Segmentation",
"Management"
] | 2025-07-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "ADOPT (ADaptive gradient method with the OPTimal convergence rate) is a modification of Adam that provably converges at the optimal rate for any choice of the second-moment parameter $β_2$.",
"full_name": "ADaptive gradient method with the OPTimal convergence rate",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "ADOPT",
"source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate",
"source_url": "https://arxiv.org/abs/2411.02853v3"
}
] |
https://paperswithcode.com/paper/assay2mol-large-language-model-based-drug
|
2507.12574
| null | null |
Assay2Mol: large language model-based drug design using BioAssay context
|
Scientific databases aggregate vast amounts of quantitative data alongside descriptive text. In biochemistry, molecule screening assays evaluate the functional responses of candidate molecules against disease targets. Unstructured text that describes the biological mechanisms through which these targets operate, experimental screening protocols, and other attributes of assays offers rich information for new drug discovery campaigns but has been untapped because of its unstructured format. We present Assay2Mol, a large language model-based workflow that can capitalize on the vast existing biochemical screening assays for early-stage drug discovery. Assay2Mol retrieves existing assay records involving targets similar to the new target and generates candidate molecules using in-context learning with the retrieved assay screening data. Assay2Mol outperforms recent machine learning approaches that generate candidate ligand molecules for target protein structures, while also promoting more synthesizable molecule generation.
|
Scientific databases aggregate vast amounts of quantitative data alongside descriptive text.
|
https://arxiv.org/abs/2507.12574v1
|
https://arxiv.org/pdf/2507.12574v1.pdf
| null |
[
"Yifan Deng",
"Spencer S. Ericksen",
"Anthony Gitter"
] |
[
"Descriptive",
"Drug Design",
"Drug Discovery",
"In-Context Learning",
"Language Modeling",
"Language Modelling",
"Large Language Model"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/beyond-task-specific-reasoning-a-unified
|
2507.11761
| null | null |
Beyond Task-Specific Reasoning: A Unified Conditional Generative Framework for Abstract Visual Reasoning
|
Abstract visual reasoning (AVR) enables humans to quickly discover and generalize abstract rules to new scenarios. Designing intelligent systems with human-like AVR abilities has been a long-standing topic in the artificial intelligence community. Deep AVR solvers have recently achieved remarkable success in various AVR tasks. However, they usually use task-specific designs or parameters in different tasks. In such a paradigm, solving new tasks often means retraining the model, and sometimes retuning the model architectures, which increases the cost of solving AVR problems. In contrast to task-specific approaches, this paper proposes a novel Unified Conditional Generative Solver (UCGS), aiming to address multiple AVR tasks in a unified framework. First, we prove that some well-known AVR tasks can be reformulated as the problem of estimating the predictability of target images in problem panels. Then, we illustrate that, under the proposed framework, training one conditional generative model can solve various AVR tasks. The experiments show that with a single round of multi-task training, UCGS demonstrates abstract reasoning ability across various AVR tasks. In particular, UCGS exhibits zero-shot reasoning ability, enabling it to perform abstract reasoning on problems from unseen AVR tasks in the testing phase.
|
Then, we illustrate that, under the proposed framework, training one conditional generative model can solve various AVR tasks.
|
https://arxiv.org/abs/2507.11761v1
|
https://arxiv.org/pdf/2507.11761v1.pdf
| null |
[
"Fan Shi",
"Bin Li",
"xiangyang xue"
] |
[
"Visual Reasoning"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/real-time-bayesian-detection-of-drift-evasive
|
2507.11173
| null | null |
Real-Time Bayesian Detection of Drift-Evasive GNSS Spoofing in Reinforcement Learning Based UAV Deconfliction
|
Autonomous unmanned aerial vehicles (UAVs) rely on global navigation satellite system (GNSS) pseudorange measurements for accurate real-time localization and navigation. However, this dependence exposes them to sophisticated spoofing threats, where adversaries manipulate pseudoranges to deceive UAV receivers. Among these, drift-evasive spoofing attacks subtly perturb measurements, gradually diverting the UAV's trajectory without triggering conventional signal-level anti-spoofing mechanisms. Traditional distributional shift detection techniques often require accumulating a threshold number of samples, causing delays that impede rapid detection and timely response. Consequently, robust temporal-scale detection methods are essential to identify attack onset and enable contingency planning with alternative sensing modalities, improving resilience against stealthy adversarial manipulations. This study explores a Bayesian online change point detection (BOCPD) approach that monitors temporal shifts in value estimates from a reinforcement learning (RL) critic network to detect subtle behavioural deviations in UAV navigation. Experimental results show that this temporal value-based framework outperforms conventional GNSS spoofing detectors, temporal semi-supervised learning frameworks, and the Page-Hinkley test, achieving higher detection accuracy and lower false-positive and false-negative rates for drift-evasive spoofing attacks.
| null |
https://arxiv.org/abs/2507.11173v1
|
https://arxiv.org/pdf/2507.11173v1.pdf
| null |
[
"Deepak Kumar Panda",
"Weisi Guo"
] |
[
"Change Point Detection",
"Reinforcement Learning (RL)"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
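The BOCPD monitor described above tracks a run-length posterior over a scalar stream (here, the critic's value estimates). A minimal Adams–MacKay-style sketch with Gaussian observations of known variance — the hazard rate, the conjugate priors, and the use of the MAP run length as the detection signal are illustrative choices, not the paper's configuration:

```python
import math

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bocpd_map_run_lengths(stream, hazard=0.01, mu0=0.0, var0=1.0, varx=1.0):
    # Bayesian online change point detection for a 1-D stream with Gaussian
    # observations of known variance `varx`. Returns the MAP run length
    # (steps since the last change point) after each observation.
    R = [1.0]                     # run-length posterior, index = run length
    mus, vars_ = [mu0], [var0]    # conjugate Normal posterior over the mean
    map_runs = []
    for x in stream:
        # predictive probability of x under each surviving run length
        preds = [gauss_pdf(x, m, v + varx) for m, v in zip(mus, vars_)]
        cp = sum(r * p for r, p in zip(R, preds)) * hazard
        R = [cp] + [r * p * (1 - hazard) for r, p in zip(R, preds)]
        z = sum(R)
        R = [p / z for p in R]
        # Normal-Normal conjugate update for every run length
        gains = [v / (v + varx) for v in vars_]
        mus = [mu0] + [m + g * (x - m) for m, g in zip(mus, gains)]
        vars_ = [var0] + [v * (1 - g) for v, g in zip(vars_, gains)]
        map_runs.append(max(range(len(R)), key=R.__getitem__))
    return map_runs

# synthetic value stream with a drift-like shift halfway through
runs = bocpd_map_run_lengths([0.0] * 30 + [5.0] * 30)
```

A change is flagged when the MAP run length collapses back toward zero instead of growing by one per step.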
https://paperswithcode.com/paper/multi-trigger-poisoning-amplifies-backdoor
|
2507.11112
| null | null |
Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs
|
Recent studies have shown that Large Language Models (LLMs) are vulnerable to data poisoning attacks, where malicious training examples embed hidden behaviours triggered by specific input patterns. However, most existing works assume a single trigger phrase and focus on the attack's effectiveness, offering limited understanding of trigger mechanisms and how multiple triggers interact within the model. In this paper, we present a framework for studying multi-trigger poisoning in LLMs. We show that multiple distinct backdoor triggers can coexist within a single model without interfering with each other, enabling adversaries to embed several triggers concurrently. Using multiple triggers with high embedding similarity, we demonstrate that poisoned triggers can achieve robust activation even when tokens are substituted or separated by long token spans. Our findings expose a broader and more persistent vulnerability surface in LLMs. To mitigate this threat, we propose a post hoc recovery method that selectively retrains specific model components based on a layer-wise weight difference analysis. Our method effectively removes the trigger behaviour with minimal parameter updates, presenting a practical and efficient defence against multi-trigger poisoning.
| null |
https://arxiv.org/abs/2507.11112v1
|
https://arxiv.org/pdf/2507.11112v1.pdf
| null |
[
"Sanhanat Sivapiromrat",
"Caiqi Zhang",
"Marco Basaldella",
"Nigel Collier"
] |
[
"Data Poisoning"
] | 2025-07-15T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/UCSC-REAL/HOC",
"description": "",
"full_name": "High-Order Consensuses",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Value Function Estimation",
"parent": null
},
"name": "HOC",
"source_title": "Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels",
"source_url": "https://arxiv.org/abs/2102.05291v2"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
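The post hoc recovery step above starts from a layer-wise weight difference analysis. A minimal NumPy sketch that ranks layers by relative weight drift between a clean checkpoint and a (possibly poisoned) fine-tuned one — the dictionary-of-arrays weight format and the threshold-free ranking are assumptions for illustration:

```python
import numpy as np

def rank_layers_by_drift(clean_weights, tuned_weights):
    # Relative Frobenius norm of the weight change per layer; the most
    # drifted layers are the candidates for selective retraining.
    scores = {
        name: np.linalg.norm(tuned_weights[name] - w)
              / (np.linalg.norm(w) + 1e-12)
        for name, w in clean_weights.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

rng = np.random.default_rng(0)
clean = {f"layer{i}": rng.normal(size=(16, 16)) for i in range(4)}
# fine-tuning nudges all layers slightly; the "poisoned" layer moves a lot
tuned = {k: v + 0.001 * rng.normal(size=v.shape) for k, v in clean.items()}
tuned["layer2"] = tuned["layer2"] + 0.5 * rng.normal(size=(16, 16))
ranking = rank_layers_by_drift(clean, tuned)
```

Selective retraining would then update only the top-ranked layers rather than the whole model.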
https://paperswithcode.com/paper/the-devil-behind-the-mask-an-emergent-safety
|
2507.11097
| null | null |
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs
|
Diffusion-based large language models (dLLMs) have recently emerged as a powerful alternative to autoregressive LLMs, offering faster inference and greater interactivity via parallel decoding and bidirectional modeling. However, despite strong performance in code generation and text infilling, we identify a fundamental safety concern: existing alignment mechanisms fail to safeguard dLLMs against context-aware, masked-input adversarial prompts, exposing novel vulnerabilities. To this end, we present DIJA, the first systematic study and jailbreak attack framework that exploits unique safety weaknesses of dLLMs. Specifically, our proposed DIJA constructs adversarial interleaved mask-text prompts that exploit the text generation mechanisms of dLLMs, i.e., bidirectional modeling and parallel decoding. Bidirectional modeling drives the model to produce contextually consistent outputs for masked spans, even when harmful, while parallel decoding limits the model's dynamic filtering and rejection sampling of unsafe content. This causes standard alignment mechanisms to fail, enabling harmful completions in alignment-tuned dLLMs, even when harmful behaviors or unsafe instructions are directly exposed in the prompt. Through comprehensive experiments, we demonstrate that DIJA significantly outperforms existing jailbreak methods, exposing a previously overlooked threat surface in dLLM architectures. Notably, our method achieves up to 100% keyword-based ASR on Dream-Instruct, surpassing the strongest prior baseline, ReNeLLM, by up to 78.5% in evaluator-based ASR on JailbreakBench and by 37.7 points in StrongREJECT score, while requiring no rewriting or hiding of harmful content in the jailbreak prompt. Our findings underscore the urgent need for rethinking safety alignment in this emerging class of language models. Code is available at https://github.com/ZichenWen1/DIJA.
| null |
https://arxiv.org/abs/2507.11097v1
|
https://arxiv.org/pdf/2507.11097v1.pdf
| null |
[
"Zichen Wen",
"Jiashu Qu",
"Dongrui Liu",
"Zhiyuan Liu",
"Ruixi Wu",
"Yicun Yang",
"Xiangqi Jin",
"Haoyun Xu",
"Xuyang Liu",
"Weijia Li",
"Chaochao Lu",
"Jing Shao",
"Conghui He",
"Linfeng Zhang"
] |
[
"Code Generation",
"Safety Alignment",
"Text Generation",
"Text Infilling"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/crafting-imperceptible-on-manifold
|
2507.10998
| null | null |
Crafting Imperceptible On-Manifold Adversarial Attacks for Tabular Data
|
Adversarial attacks on tabular data present fundamental challenges distinct from image or text domains due to the heterogeneous nature of mixed categorical and numerical features. Unlike images where pixel perturbations maintain visual similarity, tabular data lacks intuitive similarity metrics, making it difficult to define imperceptible modifications. Additionally, traditional gradient-based methods prioritise $\ell_p$-norm constraints, often producing adversarial examples that deviate from the original data distributions, making them detectable. We propose a latent space perturbation framework using a mixed-input Variational Autoencoder (VAE) to generate imperceptible adversarial examples. The proposed VAE integrates categorical embeddings and numerical features into a unified latent manifold, enabling perturbations that preserve statistical consistency. We specify In-Distribution Success Rate (IDSR) to measure the proportion of adversarial examples that remain statistically indistinguishable from the input distribution. Evaluation across six publicly available datasets and three model architectures demonstrates that our method achieves substantially lower outlier rates and more consistent performance compared to traditional input-space attacks and other VAE-based methods adapted from image domain approaches. Our comprehensive analysis includes hyperparameter sensitivity, sparsity control mechanisms, and generative architectural comparisons, revealing that VAE-based attacks depend critically on reconstruction quality but offer superior practical utility when sufficient training data is available. This work highlights the importance of on-manifold perturbations for realistic adversarial attacks on tabular data, offering a robust approach for practical deployment. The source code can be accessed through https://github.com/ZhipengHe/VAE-TabAttack.
| null |
https://arxiv.org/abs/2507.10998v1
|
https://arxiv.org/pdf/2507.10998v1.pdf
| null |
[
"Zhipeng He",
"Alexander Stevens",
"Chun Ouyang",
"Johannes De Smedt",
"Alistair Barros",
"Catarina Moreira"
] |
[] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
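The latent-space perturbation idea above — encode, take gradient steps in latent space, decode — can be sketched in a few lines. A fixed linear encoder/decoder pair stands in for the trained mixed-input VAE, and `loss_grad` is a hypothetical gradient of the victim model's loss with respect to the decoded features:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))               # toy decoder: 2-D latent -> 4 features

def encode(x):
    return np.linalg.pinv(W) @ x          # least-squares "encoder"

def decode(z):
    return W @ z

def latent_attack(x, loss_grad, step=0.05, iters=20):
    # Gradient ascent in latent space; decoded examples always lie on the
    # decoder's manifold, which is what keeps the attack in-distribution.
    z = encode(x)
    for _ in range(iters):
        # chain rule through the linear decoder: dL/dz = W^T dL/dx
        z = z + step * (W.T @ loss_grad(decode(z)))
    return decode(z)

x0 = rng.normal(size=4)
# toy objective: push feature 0 of the decoded example upward
x_adv = latent_attack(x0, loss_grad=lambda x: np.array([1.0, 0.0, 0.0, 0.0]))
```

Contrast with input-space attacks, which perturb `x0` directly and can leave the data manifold; here every candidate is a decoder output by construction.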
https://paperswithcode.com/paper/some-remarks-on-gradient-dominance-and-lqr
|
2507.10452
| null | null |
Some remarks on gradient dominance and LQR policy optimization
|
Solutions of optimization problems, including policy optimization in reinforcement learning, typically rely upon some variant of gradient descent. There has been much recent work in the machine learning, control, and optimization communities applying the Polyak-{\L}ojasiewicz Inequality (PLI) to such problems in order to establish an exponential rate of convergence (a.k.a. ``linear convergence'' in the local-iteration language of numerical analysis) of loss functions to their minima under the gradient flow. Often, as is the case of policy iteration for the continuous-time LQR problem, this rate vanishes for large initial conditions, resulting in a mixed globally linear / locally exponential behavior. This is in sharp contrast with the discrete-time LQR problem, where there is global exponential convergence. That gap between CT and DT behaviors motivates the search for various generalized PLI-like conditions, and this talk will address that topic. Moreover, these generalizations are key to understanding the transient and asymptotic effects of errors in the estimation of the gradient, errors which might arise from adversarial attacks, wrong evaluation by an oracle, early stopping of a simulation, inaccurate and very approximate digital twins, stochastic computations (algorithm ``reproducibility''), or learning by sampling from limited data. We describe an ``input to state stability'' (ISS) analysis of this issue. The second part discusses convergence and PLI-like properties of ``linear feedforward neural networks'' in feedback control. Much of the work described here was done in collaboration with Arthur Castello B. de Oliveira, Leilei Cui, Zhong-Ping Jiang, and Milad Siami.
| null |
https://arxiv.org/abs/2507.10452v2
|
https://arxiv.org/pdf/2507.10452v2.pdf
| null |
[
"Eduardo D. Sontag"
] |
[] | 2025-07-14T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Early Stopping** is a regularization technique for deep neural networks that stops training when parameter updates no longer yield improvements on a validation set. In essence, we store and update the current best parameters during training, and when parameter updates no longer yield an improvement (after a set number of iterations) we stop training and use the last best parameters. It works as a regularizer by restricting the optimization procedure to a smaller volume of parameter space.\r\n\r\nImage Source: [Ramazan Gençay](https://www.researchgate.net/figure/Early-stopping-based-on-cross-validation_fig1_3302948)",
"full_name": "Early Stopping",
"introduced_year": 1995,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Early Stopping",
"source_title": null,
"source_url": null
}
] |
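The PLI-to-exponential-rate argument the abstract alludes to is short enough to state. A standard two-line derivation, with $f^*$ the minimum value and $\mu$ the PLI constant:

```latex
% Polyak--Lojasiewicz inequality (PLI):
\|\nabla f(x)\|^2 \;\ge\; 2\mu\,\bigl(f(x)-f^*\bigr)
% Along the gradient flow \dot{x} = -\nabla f(x):
\frac{d}{dt}\bigl(f(x(t))-f^*\bigr) \;=\; -\|\nabla f(x(t))\|^2
\;\le\; -2\mu\,\bigl(f(x(t))-f^*\bigr)
\;\Longrightarrow\;
f(x(t))-f^* \;\le\; e^{-2\mu t}\,\bigl(f(x(0))-f^*\bigr).
```

The "mixed globally linear / locally exponential" behavior mentioned for continuous-time LQR arises when $\mu$ can only be taken uniform on sublevel sets and degrades for large initial conditions, which is what motivates the generalized PLI-like conditions.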
https://paperswithcode.com/paper/transferring-styles-for-reduced-texture-bias
|
2507.10239
| null | null |
Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks
|
Recent research has investigated the shape and texture biases of deep neural networks (DNNs) in image classification which influence their generalization capabilities and robustness. It has been shown that, in comparison to regular DNN training, training with stylized images reduces texture biases in image classification and improves robustness with respect to image corruptions. In an effort to advance this line of research, we examine whether style transfer can likewise deliver these two effects in semantic segmentation. To this end, we perform style transfer with style varying across artificial image areas. Those random areas are formed by a chosen number of Voronoi cells. The resulting style-transferred data is then used to train semantic segmentation DNNs with the objective of reducing their dependence on texture cues while enhancing their reliance on shape-based features. In our experiments, it turns out that in semantic segmentation, style transfer augmentation reduces texture bias and strongly increases robustness with respect to common image corruptions as well as adversarial attacks. These observations hold for convolutional neural networks and transformer architectures on the Cityscapes dataset as well as on PASCAL Context, showing the generality of the proposed method.
| null |
https://arxiv.org/abs/2507.10239v1
|
https://arxiv.org/pdf/2507.10239v1.pdf
| null |
[
"Ben Hamscher",
"Edgar Heinert",
"Annika Mütze",
"Kira Maag",
"Matthias Rottmann"
] |
[
"image-classification",
"Image Classification",
"Segmentation",
"Semantic Segmentation",
"Style Transfer"
] | 2025-07-14T00:00:00 | null | null | null | null |
[] |
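The random style areas described above can be generated with a nearest-seed assignment. A NumPy sketch — the cell count and seeding scheme are illustrative:

```python
import numpy as np

def voronoi_cells(h, w, n_cells, seed=0):
    # Partition an h x w image into n_cells Voronoi cells from random seed
    # points; the augmentation would then transfer a different style into
    # each cell.
    rng = np.random.default_rng(seed)
    seeds = rng.uniform(0, [h, w], size=(n_cells, 2))
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys, xs], axis=-1).astype(float)           # (h, w, 2)
    d2 = ((pts[:, :, None, :] - seeds[None, None]) ** 2).sum(-1)
    return d2.argmin(-1)                                       # (h, w) cell ids

labels = voronoi_cells(32, 32, 5)
```

Each pixel's label picks which style image is applied there, so style varies across the artificial image areas while content is preserved.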
https://paperswithcode.com/paper/3dgaa-realistic-and-robust-3d-gaussian-based
|
2507.09993
| null | null |
3DGAA: Realistic and Robust 3D Gaussian-based Adversarial Attack for Autonomous Driving
|
Camera-based object detection systems play a vital role in autonomous driving, yet they remain vulnerable to adversarial threats in real-world environments. While existing 2D and 3D physical attacks typically optimize texture, they often struggle to balance physical realism and attack robustness. In this work, we propose 3D Gaussian-based Adversarial Attack (3DGAA), a novel adversarial object generation framework that leverages the full 14-dimensional parameterization of 3D Gaussian Splatting (3DGS) to jointly optimize geometry and appearance in physically realizable ways. Unlike prior works that rely on patches or texture, 3DGAA jointly perturbs both geometric attributes (shape, scale, rotation) and appearance attributes (color, opacity) to produce physically realistic and transferable adversarial objects. We further introduce a physical filtering module to preserve geometric fidelity, and a physical augmentation module to simulate complex physical scenarios, thus enhancing attack generalization under real-world conditions. We evaluate 3DGAA on both virtual benchmarks and physical-world setups using miniature vehicle models. Experimental results show that 3DGAA reduces the detection mAP from 87.21% to 7.38%, significantly outperforming existing 3D physical attacks. Moreover, our method maintains high transferability across different physical conditions, demonstrating a new state-of-the-art in physically realizable adversarial attacks. These results validate 3DGAA as a practical attack framework for evaluating the safety of perception systems in autonomous driving.
| null |
https://arxiv.org/abs/2507.09993v1
|
https://arxiv.org/pdf/2507.09993v1.pdf
| null |
[
"Yixun Zhang",
"Lizhi Wang",
"Junjun Zhao",
"Wending Zhao",
"Feng Zhou",
"Yonghao Dang",
"Jianqin Yin"
] |
[
"3DGS",
"Adversarial Attack",
"Autonomous Driving"
] | 2025-07-14T00:00:00 | null | null | null | null |
[] |
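The joint geometry/appearance perturbation above operates on the 14-D parameter vector of each Gaussian splat. A sketch with one plausible layout — position (3) + scale (3) + rotation quaternion (4) + RGB color (3) + opacity (1); the exact layout, perturbation magnitudes, and clipping ranges are illustrative assumptions, not the paper's specification:

```python
import numpy as np

GEOMETRY = slice(0, 10)     # position, scale, rotation (assumed layout)
APPEARANCE = slice(10, 14)  # color, opacity

def perturb_gaussian(g, eps_geom=0.05, eps_app=0.1, seed=0):
    # Jointly perturb geometric and appearance attributes, then project
    # back to physically plausible values (a stand-in for the paper's
    # physical filtering step).
    rng = np.random.default_rng(seed)
    g = g.copy()
    g[GEOMETRY] += eps_geom * rng.normal(size=10)    # geometric attack
    g[APPEARANCE] += eps_app * rng.normal(size=4)    # appearance attack
    g[10:14] = np.clip(g[10:14], 0.0, 1.0)  # color/opacity stay in [0, 1]
    g[6:10] /= np.linalg.norm(g[6:10])      # renormalize the quaternion
    return g

g0 = np.concatenate([np.zeros(3), np.ones(3),
                     [1.0, 0.0, 0.0, 0.0],   # identity rotation
                     [0.5, 0.5, 0.5], [0.8]])
g_adv = perturb_gaussian(g0)
```

In the actual attack the perturbation direction would come from detector gradients rather than random noise; the projection step is what keeps the object renderable and physically realizable.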
https://paperswithcode.com/paper/game-theory-meets-llm-and-agentic-ai
|
2507.10621
| null | null |
Game Theory Meets LLM and Agentic AI: Reimagining Cybersecurity for the Age of Intelligent Threats
|
Protecting cyberspace requires not only advanced tools but also a shift in how we reason about threats, trust, and autonomy. Traditional cybersecurity methods rely on manual responses and brittle heuristics. To build proactive and intelligent defense systems, we need integrated theoretical frameworks and software tools. Game theory provides a rigorous foundation for modeling adversarial behavior, designing strategic defenses, and enabling trust in autonomous systems. Meanwhile, software tools process cyber data, visualize attack surfaces, verify compliance, and suggest mitigations. Yet a disconnect remains between theory and practical implementation. The rise of Large Language Models (LLMs) and agentic AI offers a new path to bridge this gap. LLM-powered agents can operationalize abstract strategies into real-world decisions. Conversely, game theory can inform the reasoning and coordination of these agents across complex workflows. LLMs also challenge classical game-theoretic assumptions, such as perfect rationality or static payoffs, prompting new models aligned with cognitive and computational realities. This co-evolution promises richer theoretical foundations and novel solution concepts. Agentic AI also reshapes software design: systems must now be modular, adaptive, and trust-aware from the outset. This chapter explores the intersection of game theory, agentic AI, and cybersecurity. We review key game-theoretic frameworks (e.g., static, dynamic, Bayesian, and signaling games) and solution concepts. We then examine how LLM agents can enhance cyber defense and introduce LLM-driven games that embed reasoning into AI agents. Finally, we explore multi-agent workflows and coordination games, outlining how this convergence fosters secure, intelligent, and adaptive cyber systems.
| null |
https://arxiv.org/abs/2507.10621v1
|
https://arxiv.org/pdf/2507.10621v1.pdf
| null |
[
"Quanyan Zhu"
] |
[] | 2025-07-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-activation-patching-a-framework
|
2507.09406
| null | null |
Adversarial Activation Patching: A Framework for Detecting and Mitigating Emergent Deception in Safety-Aligned Transformers
|
Large language models (LLMs) aligned for safety through techniques like reinforcement learning from human feedback (RLHF) often exhibit emergent deceptive behaviors, where outputs appear compliant but subtly mislead or omit critical information. This paper introduces adversarial activation patching, a novel mechanistic interpretability framework that leverages activation patching as an adversarial tool to induce, detect, and mitigate such deception in transformer-based models. By sourcing activations from "deceptive" prompts and patching them into safe forward passes at specific layers, we simulate vulnerabilities and quantify deception rates. Through toy neural network simulations across multiple scenarios (e.g., 1000 trials per setup), we demonstrate that adversarial patching increases deceptive outputs to 23.9% from a 0% baseline, with layer-specific variations supporting our hypotheses. We propose six hypotheses, including transferability across models, exacerbation in multimodal settings, and scaling effects. An expanded literature review synthesizes over 20 key works in interpretability, deception, and adversarial attacks. Mitigation strategies, such as activation anomaly detection and robust fine-tuning, are detailed, alongside ethical considerations and future research directions. This work advances AI safety by highlighting patching's dual-use potential and provides a roadmap for empirical studies on large-scale models.
| null |
https://arxiv.org/abs/2507.09406v1
|
https://arxiv.org/pdf/2507.09406v1.pdf
| null |
[
"Santhosh Kumar Ravindran"
] |
[
"Anomaly Detection"
] | 2025-07-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Activation patching studies the model's computation by altering its latent representations (e.g., the token embeddings in transformer-based language models) during the inference process.",
"full_name": "Activation Patching",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Inference Extrapolation",
"parent": null
},
"name": "Patching",
"source_title": "Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models",
"source_url": "https://arxiv.org/abs/2401.06102v4"
}
] |
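The core intervention above — sourcing activations from a "deceptive" forward pass and patching them into a "safe" one — can be shown on a toy two-layer network. Note that because this sketch patches the entire hidden layer, the patched output exactly equals the deceptive run's output; real studies patch specific layers or positions and measure partial effects:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(2, 8))

def forward(x, patch=None):
    # Toy two-layer network; `patch`, if given, overwrites the hidden
    # activation -- the activation-patching intervention.
    h = np.tanh(W1 @ x)
    if patch is not None:
        h = patch
    return W2 @ h, h

x_safe = rng.normal(size=4)
x_deceptive = rng.normal(size=4)
_, h_dec = forward(x_deceptive)              # capture "deceptive" activations
y_patched, _ = forward(x_safe, patch=h_dec)  # patched "safe" forward pass
y_dec, _ = forward(x_deceptive)
y_safe, _ = forward(x_safe)
```

Comparing `y_patched` against `y_safe` quantifies how much behaviour the patched activations carry, which is the quantity the deception-rate experiments aggregate over trials.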