paper_url (string) | arxiv_id (string, nullable) | nips_id (float64) | openreview_id (string, nullable) | title (string, nullable) | abstract (string, nullable) | short_abstract (string, nullable) | url_abs (string) | url_pdf (string, nullable) | proceeding (string, nullable) | authors (list) | tasks (list) | date (timestamp[ns], nullable) | conference_url_abs (string, nullable) | conference_url_pdf (string, nullable) | conference (string, nullable) | reproduces_paper (string, 22 classes) | methods (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/a-novel-hybrid-grey-wolf-differential
|
2507.03022
| null | null |
A Novel Hybrid Grey Wolf Differential Evolution Algorithm
|
Grey wolf optimizer (GWO) is a nature-inspired stochastic meta-heuristic of the swarm intelligence field that mimics the hunting behavior of grey wolves. Differential evolution (DE) is a popular stochastic algorithm of the evolutionary computation field that is well suited for global optimization. In this paper, we introduce a new algorithm based on the hybridization of GWO and two DE variants, namely the GWO-DE algorithm. We evaluate the new algorithm on various numerical benchmark functions. The numerical results of the comparative study are quite satisfactory in terms of performance and solution quality.
| null |
https://arxiv.org/abs/2507.03022v1
|
https://arxiv.org/pdf/2507.03022v1.pdf
| null |
[
"Ioannis D. Bougas",
"Pavlos Doanis",
"Maria S. Papadopoulou",
"Achilles D. Boursianis",
"Sotirios P. Sotiroudis",
"Zaharias D. Zaharis",
"George Koudouridis",
"Panagiotis Sarigiannidis",
"Mohammad Abdul Matint",
"George Karagiannidis",
"Sotirios K. Goudos"
] |
[
"global-optimization"
] | 2025-07-02T00:00:00 | null | null | null | null |
[] |
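As a rough illustration of the two update rules such a GWO-DE hybrid combines, here is a minimal NumPy sketch of the DE/rand/1 mutation and the standard GWO position update. Parameter names and values are assumptions for illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, i, F=0.8):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3)."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def gwo_step(x, alpha, beta, delta, a):
    """Standard GWO update: average of moves toward the three best wolves."""
    def encircle(leader):
        A = a * (2 * rng.random(x.shape) - 1)    # exploration/exploitation coefficient
        C = 2 * rng.random(x.shape)
        return leader - A * np.abs(C * leader - x)
    return (encircle(alpha) + encircle(beta) + encircle(delta)) / 3.0

# One step on the sphere function, for illustration only.
pop = rng.uniform(-5, 5, size=(20, 10))
fitness = (pop ** 2).sum(axis=1)
alpha, beta, delta = pop[np.argsort(fitness)[:3]]
a = 2.0                                          # decreased linearly to 0 over a run
candidate_gwo = gwo_step(pop[0], alpha, beta, delta, a)
candidate_de = de_rand_1(pop, 0)
```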
https://paperswithcode.com/paper/mambafusion-height-fidelity-dense-global
|
2507.04369
| null | null |
MambaFusion: Height-Fidelity Dense Global Fusion for Multi-modal 3D Object Detection
|
We present the first work demonstrating that a pure Mamba block can achieve efficient Dense Global Fusion while guaranteeing top performance for camera-LiDAR multi-modal 3D object detection. Our motivation stems from the observation that existing fusion strategies are constrained by their inability to simultaneously achieve efficiency, long-range modeling, and retention of complete scene information. Inspired by recent advances in state-space models (SSMs) and linear attention, we leverage their linear complexity and long-range modeling capabilities to address these challenges. However, this is non-trivial since our experiments reveal that simply adopting efficient linear-complexity methods does not necessarily yield improvements and may even degrade performance. We attribute this degradation to the loss of height information during multi-modal alignment, leading to deviations in sequence order. To resolve this, we propose height-fidelity LiDAR encoding that preserves precise height information through voxel compression in continuous space, thereby enhancing camera-LiDAR alignment. Subsequently, we introduce the Hybrid Mamba Block, which leverages the enriched height-informed features to conduct local and global contextual learning. By integrating these components, our method achieves state-of-the-art performance with a top-tier NDS score of 75.0 on the nuScenes validation benchmark, even surpassing methods that utilize high-resolution inputs. Meanwhile, our method maintains efficiency, achieving faster inference speed than most recent state-of-the-art methods.
|
We present the first work demonstrating that a pure Mamba block can achieve efficient Dense Global Fusion, meanwhile guaranteeing top performance for camera-LiDAR multi-modal 3D object detection.
|
https://arxiv.org/abs/2507.04369v1
|
https://arxiv.org/pdf/2507.04369v1.pdf
| null |
[
"Hanshi Wang",
"Jin Gao",
"Weiming Hu",
"Zhipeng Zhang"
] |
[
"3D Object Detection",
"Attribute",
"Long-range modeling",
"Mamba",
"object-detection",
"Object Detection",
"State Space Models"
] | 2025-07-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/state-spaces/mamba",
"description": "Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers’ computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pre-training and downstream evaluation.",
"full_name": "Mamba: Linear-Time Sequence Modeling with Selective State Spaces",
"introduced_year": 2000,
"main_collection": null,
"name": "Mamba",
"source_title": "Mamba: Linear-Time Sequence Modeling with Selective State Spaces",
"source_url": "https://arxiv.org/abs/2312.00752v2"
},
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
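The Mamba method entry above describes selective state-space parameters that are functions of the input, so the model can propagate or forget state per token. A toy, purely illustrative sequential form of that selective recurrence is sketched below; real implementations use a hardware-aware parallel scan, and all shapes here are assumptions.

```python
import numpy as np

def selective_scan(x, A, W_B, W_C, W_dt):
    """Sequential toy form of a diagonal selective SSM.
    x: (L, D) inputs; A: (N,) negative diagonal state matrix;
    W_B, W_C: (D, N) projections; W_dt: (D,) step-size projection."""
    h = np.zeros(A.shape[0])
    y = np.empty(len(x))
    for t in range(len(x)):
        dt = np.log1p(np.exp(x[t] @ W_dt))   # softplus: input-dependent step size
        B, C = x[t] @ W_B, x[t] @ W_C        # SSM parameters are functions of the input
        h = np.exp(dt * A) * h + dt * B      # decay state, add input-dependent update
        y[t] = C @ h                         # input-dependent readout
    return y

rng = np.random.default_rng(1)
L, D, N = 32, 8, 16
x = rng.standard_normal((L, D))
A = -np.exp(rng.standard_normal(N))          # strictly negative -> stable decay
y = selective_scan(x, A, 0.1 * rng.standard_normal((D, N)),
                   0.1 * rng.standard_normal((D, N)), 0.1 * rng.standard_normal(D))
```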
https://paperswithcode.com/paper/skfolio-portfolio-optimization-in-python
|
2507.04176
| null | null |
skfolio: Portfolio Optimization in Python
|
Portfolio optimization is a fundamental challenge in quantitative finance, requiring robust computational tools that integrate statistical rigor with practical implementation. We present skfolio, an open-source Python library for portfolio construction and risk management that seamlessly integrates with the scikit-learn ecosystem. skfolio provides a unified framework for diverse allocation strategies, from classical mean-variance optimization to modern clustering-based methods, state-of-the-art financial estimators with native interfaces, and advanced cross-validation techniques tailored for financial time series. By adhering to scikit-learn's fit-predict-transform paradigm, the library enables researchers and practitioners to leverage machine learning workflows for portfolio optimization, promoting reproducibility and transparency in quantitative finance.
|
Portfolio optimization is a fundamental challenge in quantitative finance, requiring robust computational tools that integrate statistical rigor with practical implementation.
|
https://arxiv.org/abs/2507.04176v1
|
https://arxiv.org/pdf/2507.04176v1.pdf
| null |
[
"Carlo Nicolini",
"Matteo Manzi",
"Hugo Delatte"
] |
[
"Management",
"Portfolio Optimization",
"Time Series"
] | 2025-07-05T00:00:00 | null | null | null | null |
[] |
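Given the abstract's claim that skfolio follows scikit-learn's fit-predict paradigm, a minimal sketch of that workflow, assuming the estimators and helpers the library documents (MeanRisk, prices_to_returns), might look like:

```python
# Minimal sketch of skfolio's scikit-learn-style workflow; estimator and helper
# names follow the library's documentation, but this is illustrative only.
from skfolio.datasets import load_sp500_dataset
from skfolio.optimization import MeanRisk
from skfolio.preprocessing import prices_to_returns

prices = load_sp500_dataset()        # daily prices, one column per asset
X = prices_to_returns(prices)        # estimators consume linear returns

model = MeanRisk()                   # classical mean-variance allocation
model.fit(X)                         # fit: estimate optimal weights
portfolio = model.predict(X)         # predict: a Portfolio of per-period returns
print(model.weights_)
```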
https://paperswithcode.com/paper/a-deep-learning-based-approach-to-progressive
| null | null | null |
A Deep Learning-Based Approach to Progressive Vehicle Re-identification for Urban Surveillance
|
While re-identification (Re-Id) of persons has attracted intensive attention, the vehicle, a significant object class in urban video surveillance, is often overlooked by the vision community. Most existing methods for vehicle Re-Id achieve only limited performance, as they predominantly focus on the generic appearance of a vehicle while neglecting its unique identifiers (e.g., the license plate). In this paper, we propose a novel deep learning-based approach to PROgressive Vehicle re-ID, called “PROVID”. Our approach treats vehicle Re-Id as two progressive search processes: coarse-to-fine search in the feature space, and near-to-distant search in the real-world surveillance environment. The first search process employs the appearance attributes of the vehicle for coarse filtering, and then exploits a Siamese neural network for license plate verification to accurately identify vehicles. The near-to-distant search process retrieves vehicles in a manner like human beings, by searching from near to faraway cameras and from close to distant times. Moreover, to facilitate progressive vehicle Re-Id research, we collect VeRi-776, to date the largest dataset of its kind, from large-scale urban surveillance videos; it contains not only massive vehicles with diverse attributes and a high recurrence rate, but also sufficient license plates and spatiotemporal labels. A comprehensive evaluation on VeRi-776 shows that our approach outperforms the state-of-the-art methods by 9.28% in terms of mAP.
| null |
https://link.springer.com/chapter/10.1007/978-3-319-46475-6_53
|
https://link.springer.com/chapter/10.1007/978-3-319-46475-6_53
|
ECCV 2016 9
|
[
"Xinchen Liu",
"Wu Liu",
"Tao Mei",
"Huadong Ma"
] |
[
"Unsupervised Domain Adaptation",
"Vehicle Re-Identification"
] | 2016-09-17T00:00:00 | null | null | null | null |
[] |
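A schematic of PROVID's coarse-to-fine search as described above: appearance features shortlist the gallery, then license-plate embeddings (e.g., from a Siamese network) re-rank the shortlist. Pure NumPy with random stand-in features; all dimensions are assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def progressive_search(q_app, g_app, q_plate, g_plate, k=50):
    coarse = np.argsort(-cosine_sim(q_app[None], g_app)[0])[:k]   # appearance filter
    fine = cosine_sim(q_plate[None], g_plate[coarse])[0]          # plate verification
    return coarse[np.argsort(-fine)]                              # refined ranking

rng = np.random.default_rng(2)
gallery_app = rng.standard_normal((776, 256))     # stand-in appearance features
gallery_plate = rng.standard_normal((776, 128))   # stand-in plate embeddings
ranking = progressive_search(rng.standard_normal(256), gallery_app,
                             rng.standard_normal(128), gallery_plate)
```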
https://paperswithcode.com/paper/emergent-semantics-beyond-token-embeddings
|
2507.04886
| null | null |
Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations
|
Understanding the locus of semantic representation in large language models (LLMs) is crucial for interpretability and architectural innovation. The dominant paradigm posits that trainable input embeddings serve as foundational "meaning vectors." This paper challenges that view. We construct Transformer models where the embedding layer is entirely frozen, with vectors derived not from data, but from the visual structure of Unicode glyphs. These non-semantic, precomputed visual embeddings are fixed throughout training. Our method is compatible with any tokenizer, including a novel Unicode-centric tokenizer we introduce to ensure universal text coverage. Despite the absence of trainable, semantically initialized embeddings, our models converge, generate coherent text, and, critically, outperform architecturally identical models with trainable embeddings on the MMLU reasoning benchmark. We attribute this to "representational interference" in conventional models, where the embedding layer is burdened with learning both structural and semantic features. Our results indicate that high-level semantics are not inherent to input embeddings but are an emergent property of the Transformer's compositional architecture and data scale. This reframes the role of embeddings from meaning containers to structural primitives. We release all code and models to foster further research.
|
We attribute this to "representational interference" in conventional models, where the embedding layer is burdened with learning both structural and semantic features.
|
https://arxiv.org/abs/2507.04886v1
|
https://arxiv.org/pdf/2507.04886v1.pdf
| null |
[
"A. Bochkov"
] |
[
"Attribute",
"MMLU"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
}
] |
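The core idea of the paper above, frozen embeddings derived from the visual structure of glyphs, can be sketched in a few lines: render each character to a bitmap and register the flattened pixels as a non-trainable embedding table. The default PIL font, the 16x16 size, and the ASCII-only vocabulary below are simplifications; the paper covers full Unicode.

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw, ImageFont

def glyph_embedding(ch, size=16):
    """Render one character to a size x size bitmap; flattened pixels = embedding."""
    img = Image.new("L", (size, size), 0)
    ImageDraw.Draw(img).text((0, 0), ch, fill=255, font=ImageFont.load_default())
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

vocab = [chr(c) for c in range(32, 127)]                  # toy ASCII-only vocabulary
table = torch.tensor(np.stack([glyph_embedding(c) for c in vocab]))
embed = nn.Embedding.from_pretrained(table, freeze=True)  # fixed throughout training
print(embed(torch.tensor([vocab.index("A")])).shape)      # -> torch.Size([1, 256])
```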
https://paperswithcode.com/paper/extpose-robust-and-coherent-pose-estimation
| null | null | null |
ExtPose: Robust and Coherent Pose Estimation by Extending ViTs
|
Vision Transformers (ViTs) are remarkable at 3D pose estimation, yet they still encounter certain challenges. One issue is that the popular ViT architecture for pose estimation is limited to images and lacks temporal information. Another is that predictions often fail to maintain pixel alignment with the original images. To address these issues, we propose a systematic framework for 3D pose estimation, called ExtPose. ExtPose extends image ViTs to challenging scenarios and the video setting by taking in additional 2D pose evidence and capturing temporal information in a fully attention-based manner. We use 2D human skeleton images to integrate structured 2D pose information. By sharing parameters and attending across modalities and frames, we enhance the consistency between 3D poses and 2D videos without introducing additional parameters. We achieve state-of-the-art (SOTA) performance on multiple human and hand pose estimation benchmarks, with substantial improvements in PA-MPJPE over other ViT-based methods: to 34.0mm (-23%) on 3DPW and 4.9mm (-18%) on FreiHAND, respectively.
| null |
https://openreview.net/forum?id=hm9FNEZZ6z
|
https://openreview.net/pdf?id=hm9FNEZZ6z
|
International Conference on Machine Learning 2025 6
|
[
"Rongyu Chen",
"Li'an Zhuo",
"Linlin Yang",
"Qi Wang",
"Liefeng Bo",
"Bang Zhang",
"Angela Yao"
] |
[
"3D Hand Pose Estimation",
"3D Human Pose Estimation",
"3D Pose Estimation",
"Hand Pose Estimation",
"Pose Estimation"
] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
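A minimal sketch of the "full attention across modalities and frames" design ExtPose describes: per-frame RGB-ViT tokens and skeleton-image tokens are concatenated into one sequence for a single shared encoder. All dimensions below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

B, T, N, D = 2, 8, 49, 256                       # batch, frames, tokens/frame, dim
rgb = torch.randn(B, T, N, D)                    # image-ViT tokens per frame
skel = torch.randn(B, T, N, D)                   # 2D-skeleton-image tokens per frame

tokens = torch.cat([rgb, skel], dim=2).reshape(B, T * 2 * N, D)
encoder = nn.TransformerEncoder(                 # shared parameters across modalities
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True),
    num_layers=4,
)
fused = encoder(tokens)                          # attention spans all frames/modalities
```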
https://paperswithcode.com/paper/cyberrag-an-agentic-rag-cyber-attack
|
2507.02424
| null | null |
CyberRAG: An agentic RAG cyber attack classification and reporting tool
|
Intrusion Detection and Prevention Systems (IDS/IPS) in large enterprises can generate hundreds of thousands of alerts per hour, overwhelming security analysts with logs that demand deep, rapidly evolving domain expertise. Conventional machine-learning detectors trim the alert volume but still yield high false-positive rates, while standard single-pass Retrieval-Augmented Generation (RAG) pipelines often retrieve irrelevant context and fail to justify their predictions. To overcome these shortcomings, we present CyberRAG, a modular, agent-based RAG framework that delivers real-time classification, explanation, and structured reporting for cyber-attacks. A central LLM agent orchestrates (i) a pool of fine-tuned specialized classifiers, each tailored to a distinct attack family; (ii) tool adapters for enrichment and alerting; and (iii) an iterative retrieval-and-reason loop that continuously queries a domain-specific knowledge base until the evidence is both relevant and self-consistent. Unlike traditional RAG systems, CyberRAG embraces an agentic design that enables dynamic control flow and adaptive reasoning. This agent-centric architecture refines its threat labels and natural-language justifications autonomously, reducing false positives and enhancing interpretability. The framework is fully extensible: new attack types can be supported by simply adding a classifier without retraining the core agent. CyberRAG has been evaluated achieving over 94% accuracy per class and pushing final classification accuracy to 94.92% through semantic orchestration. Generated explanations score up to 0.94 in BERTScore and 4.9/5 in GPT-4-based expert evaluation. These results show that agentic, specialist-oriented RAG can pair high detection accuracy with trustworthy, SOC-ready prose, offering a practical and scalable path toward semi-autonomous cyber-defence workflows.
| null |
https://arxiv.org/abs/2507.02424v1
|
https://arxiv.org/pdf/2507.02424v1.pdf
| null |
[
"Francesco Blefari",
"Cristian Cosentino",
"Francesco Aurelio Pironti",
"Angelo Furfaro",
"Fabrizio Marozzo"
] |
[
"Intrusion Detection",
"RAG",
"Retrieval-augmented Generation"
] | 2025-07-03T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!",
"full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v5"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
},
{
"code_snippet_url": null,
"description": "**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture. It uses a standard seq2seq/NMT architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).",
"full_name": "BART",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "BART",
"source_title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",
"source_url": "https://arxiv.org/abs/1910.13461v1"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
},
{
"code_snippet_url": "",
"description": "**Retriever-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For query $x$, Maximum Inner Product Search (MIPS) is used to find the top-K documents $z\\_{i}$. For final prediction $y$, we treat $z$ as a latent variable and marginalize over seq2seq predictions given different documents.",
"full_name": "RAG",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "RAG",
"source_title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks",
"source_url": "https://arxiv.org/abs/2005.11401v4"
}
] |
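A hypothetical skeleton of CyberRAG's control flow as the abstract describes it: specialist classifiers pick an attack family, then an iterative retrieve-and-reason loop runs until the evidence is relevant and self-consistent. Every component below is a toy stand-in (keyword classifiers, a dict knowledge base, a rule-based consistency check), not the authors' API.

```python
KNOWLEDGE_BASE = {
    "sql_injection": "Payloads with quoted SQL fragments such as ' OR 1=1 --",
    "xss": "Scripts injected into pages, e.g. <script> tags in parameters",
}

CLASSIFIERS = {  # one specialist per attack family, as in the paper
    "sql_injection": lambda alert: ("or 1=1" in alert.lower()) * 0.9,
    "xss": lambda alert: ("<script" in alert.lower()) * 0.9,
}

def retrieve(label):
    return KNOWLEDGE_BASE.get(label, "")

def evidence_is_consistent(alert, label, context):
    return label in KNOWLEDGE_BASE and context != ""

def classify_alert(alert, max_rounds=3):
    label = max(CLASSIFIERS, key=lambda k: CLASSIFIERS[k](alert))  # specialist pool
    for _ in range(max_rounds):                     # iterative retrieval loop
        context = retrieve(label)
        if evidence_is_consistent(alert, label, context):
            return label, f"Alert classified as {label}. Evidence: {context}"
    return "unknown", "No consistent evidence found."

print(classify_alert("GET /item?id=1' OR 1=1 --"))
```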
https://paperswithcode.com/paper/adversarial-manipulation-of-reasoning-models
|
2507.03167
| null | null |
Adversarial Manipulation of Reasoning Models using Internal Representations
|
Reasoning models generate chain-of-thought (CoT) tokens before their final output, but how this affects their vulnerability to jailbreak attacks remains unclear. While traditional language models make refusal decisions at the prompt-response boundary, we find evidence that DeepSeek-R1-Distill-Llama-8B makes these decisions within its CoT generation. We identify a linear direction in activation space during CoT token generation that predicts whether the model will refuse or comply -- termed the "caution" direction because it corresponds to cautious reasoning patterns in the generated text. Ablating this direction from model activations increases harmful compliance, effectively jailbreaking the model. We additionally show that intervening only on CoT token activations suffices to control final outputs, and that incorporating this direction into prompt-based attacks improves success rates. Our findings suggest that the chain-of-thought itself is a promising new target for adversarial manipulation in reasoning models. Code available at https://github.com/ky295/reasoning-manipulation
|
Reasoning models generate chain-of-thought (CoT) tokens before their final output, but how this affects their vulnerability to jailbreak attacks remains unclear.
|
https://arxiv.org/abs/2507.03167v1
|
https://arxiv.org/pdf/2507.03167v1.pdf
| null |
[
"Kureha Yamaguchi",
"Benjamin Etheridge",
"Andy Arditi"
] |
[] | 2025-07-03T00:00:00 | null | null | null | null |
[] |
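The intervention the abstract describes, ablating a "caution" direction, is a projection removal on activations. A minimal sketch follows; the hidden size and the way the direction is obtained are assumptions.

```python
import torch

def ablate_direction(h: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Remove the component of activations h (..., d) along direction w (d,)."""
    w_hat = w / w.norm()
    return h - (h @ w_hat).unsqueeze(-1) * w_hat

h = torch.randn(4, 16, 4096)     # (batch, CoT tokens, hidden) activations
w = torch.randn(4096)            # e.g., a difference-in-means "caution" direction
h_ablated = ablate_direction(h, w)
# After ablation, activations have no component along the caution direction.
assert torch.allclose(h_ablated @ (w / w.norm()), torch.zeros(4, 16), atol=1e-3)
```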
https://paperswithcode.com/paper/quantum-stochastic-walks-for-portfolio
|
2507.03963
| null | null |
Quantum Stochastic Walks for Portfolio Optimization: Theory and Implementation on Financial Networks
|
Financial markets are noisy yet contain a latent graph-theoretic structure that can be exploited for superior risk-adjusted returns. We propose a quantum stochastic walk (QSW) optimizer that embeds assets in a weighted graph: nodes represent securities while edges encode the return-covariance kernel. Portfolio weights are derived from the walk's stationary distribution. Three empirical studies support the approach. (i) For the top 100 S&P 500 constituents over 2016-2024, six scenario portfolios calibrated on 1- and 2-year windows lift the out-of-sample Sharpe ratio by up to 27% while cutting annual turnover from 480% (mean-variance) to 2-90%. (ii) A $5^{4}=625$-point grid search identifies a robust sweet spot, $\alpha,\lambda\lesssim0.5$ and $\omega\in[0.2,0.4]$, that delivers Sharpe $\approx0.97$ at $\le 5\%$ turnover and a Herfindahl-Hirschman index of $\sim0.01$. (iii) Repeating the full grid on 50 random 100-stock subsets of the S&P 500 adds 31,350 back-tests: the best-per-draw QSW beats re-optimised mean-variance on Sharpe in 54% of cases and always wins on trading efficiency, with median turnover of 36% versus 351%. Overall, QSW raises the annualized Sharpe ratio by 15% and cuts turnover by 90% relative to classical optimisation, all while respecting the UCITS 5/10/40 rule. These results show that hybrid quantum-classical dynamics can uncover non-linear dependencies overlooked by quadratic models and offer a practical, low-cost weighting engine for themed ETFs and other systematic mandates.
| null |
https://arxiv.org/abs/2507.03963v1
|
https://arxiv.org/pdf/2507.03963v1.pdf
| null |
[
"Yen Jui Chang",
"Wei-Ting Wang",
"Yun-Yuan Wang",
"Chen-Yu Liu",
"Kuan-Cheng Chen",
"Ching-Ray Chang"
] |
[
"Portfolio Optimization"
] | 2025-07-05T00:00:00 | null | null | null | null |
[] |
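As a purely classical sketch of the weighting pipeline above, one can build a stochastic matrix from a covariance kernel and take its stationary distribution as portfolio weights. This omits the quantum stochastic walk dynamics that are the paper's actual contribution; it only illustrates the "stationary distribution as weights" step, with a toy kernel.

```python
import numpy as np

rng = np.random.default_rng(3)
R = 0.01 * rng.standard_normal((500, 20))       # toy daily returns for 20 assets

C = np.cov(R.T)                                 # return-covariance kernel
K = np.exp(-C / C.std())                        # positive edge weights on the graph
np.fill_diagonal(K, 0.0)                        # no self-loops

P = K / K.sum(axis=1, keepdims=True)            # row-stochastic transition matrix
pi = np.full(len(P), 1.0 / len(P))
for _ in range(1000):                           # power iteration to the stationary law
    pi = pi @ P
weights = pi / pi.sum()                         # stationary distribution = weights
print(weights.round(4))
```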
https://paperswithcode.com/paper/ai-driven-cytomorphology-image-synthesis-for
|
2507.05063
| null | null |
AI-Driven Cytomorphology Image Synthesis for Medical Diagnostics
|
Biomedical datasets often contain a large sample imbalance and are subject to strict privacy constraints, which together hinder the development of accurate machine learning models. One potential solution is to generate synthetic images, as this can improve data availability while preserving patient privacy. However, it remains difficult to generate synthetic images of sufficient quality for training robust classifiers. In this work, we focus on the classification of single white blood cells, a key component in the diagnosis of hematological diseases such as acute myeloid leukemia (AML), a severe blood cancer. We demonstrate how synthetic images generated with a fine-tuned stable diffusion model using LoRA weights when guided by real few-shot samples of the target white blood cell classes, can enhance classifier performance for limited data. When training a ResNet classifier, accuracy increased from 27.3\% to 78.4\% (+51.1\%) by adding 5000 synthetic images per class to a small and highly imbalanced real dataset. For a CLIP-based classifier, the accuracy improved from 61.8\% to 76.8\% (+15.0\%). The synthetic images are highly similar to real images, and they can help overcome dataset limitations, enhancing model generalization. Our results establish synthetic images as a tool in biomedical research, improving machine learning models, and facilitating medical diagnosis and research.
|
When training a ResNet classifier, accuracy increased from 27.3\% to 78.4\% (+51.1\%) by adding 5000 synthetic images per class to a small and highly imbalanced real dataset.
|
https://arxiv.org/abs/2507.05063v1
|
https://arxiv.org/pdf/2507.05063v1.pdf
| null |
[
"Jan Carreras Boada",
"Rao Muhammad Umer",
"Carsten Marr"
] |
[
"Image Generation",
"Medical Diagnosis"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
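A hedged sketch of the generation recipe in the abstract using the diffusers API. The base model id, LoRA weight path, and prompt below are assumptions, not the paper's released artifacts.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes a CUDA device; model id and LoRA path are illustrative placeholders.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/wbc-lora")          # LoRA fine-tuned on cell images

images = pipe(
    "microscopy image of a single myeloblast, Wright-Giemsa stain",
    num_images_per_prompt=8, guidance_scale=7.5,
).images                                            # synthetic samples for the classifier
```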
https://paperswithcode.com/paper/design-and-implementation-of-online-clearance
| null | null | null |
DESIGN AND IMPLEMENTATION OF ONLINE CLEARANCE REPORT.
|
Online clearance system is a research work that will help build effective information management for schools. It is aimed at developing a system for carrying out clearance after graduation. The designed software will serve as a more reliable and effective means of undertaking student clearance, remove all forms of delay and stress, and help users understand the procedures involved in completing their clearance online. This project made use of data collected from the University, along with materials and journals from various authors, and software was developed to effectively achieve the aims of the project. The implementation of the computer-based system was carried out using PHP, JavaScript, CSS, Apache, and MySQL for the database. In conclusion, the work met all the objectives intended. It is, however, recommended for use by all tertiary institutions.
| null |
https://doi.org/10.5281/zenodo.15825774
|
https://doi.org/10.5281/zenodo.15825774
|
Zenodo 2025 7
|
[
"Kamal Acharya"
] |
[
"All",
"Management"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/anti-phishing-in-android-phone-project-report
| null | null | null |
ANTI-PHISHING IN ANDROID PHONE PROJECT REPORT.
|
Phishing is a word derived from 'fishing'; it refers to attacks in which the attacker lures users to visit a faked Web site by sending them faked e-mails (or instant messages) and stealthily obtains the victim's personal information, such as user name, password, and national security ID. This information can then be used for targeted advertisements or even identity-theft attacks (e.g., transferring money from the victim's bank account). The most frequently used attack method is to send e-mails to potential victims that appear to come from banks, online organizations, or ISPs. These e-mails invent some pretext, e.g. that the password of your credit card has been mis-entered too many times, or that upgraded services are being provided, to lure you into visiting their Web site to confirm or modify your account number and password through the hyperlink provided in the e-mail (Leon, 2008).
| null |
https://doi.org/10.5281/zenodo.15825778
|
https://doi.org/10.5281/zenodo.15825778
|
Zenodo 2025 7
|
[
"Kamal Acharya"
] |
[] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/digital-clock-and-calender-management-system
| null | null | null |
Digital clock and calendar management system project report.
|
The design and operating principle of the digital calendar and digital clock is the subject of this thesis. The system as realized in this work is made up of six units. The power supply unit supplies the required regulated voltage to the appropriate connections of the overall circuit. The microcontroller controls the general operation of the circuit; all the other units are interfaced to it. The input unit is activated by pressing the keypad (buttons) to call up the subroutine program that controls the set date, month, year, and time with alarm instructions. The decoder, with the aid of the microcontroller, executes the instruction codes, which are transmitted to the light-emitting diode that corresponds to the actual day to be displayed. The display unit consists of twelve seven-segment displays whose common anodes are driven by NPN switching transistors, displaying digits in multiplex mode.
| null |
https://doi.org/10.5281/zenodo.15825785
|
https://doi.org/10.5281/zenodo.15825785
|
Zenodo 2025 7
|
[
"Kamal Acharya"
] |
[
"Decoder",
"Management"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/online-bidding-management-system-project
| null | null | null |
ONLINE BIDDING MANAGEMENT SYSTEM PROJECT.
|
The online bidding system is a flexible solution for supporting lot-based online bidding. The thesis explains the construction of a bidding website. The system has been designed to be highly scalable and capable of supporting large numbers of bidders in active bidding. The online bidding system lets you easily browse lots and place bids using a secure server. All costs of mailing lots will be paid by the buyer. The objective is to develop a user-friendly bidding site where any kind of product can be bid on, and to provide value-added services to the bidders and the sellers. The products will be authenticated, and the site provides a safe environment for online users.
| null |
https://doi.org/10.5281/zenodo.15830535
|
https://doi.org/10.5281/zenodo.15830535
|
Zenodo 2025 7
|
[
"Kamal Acharya"
] |
[
"Management"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dental-clinic-management-system-project
| null | null | null |
Dental clinic management system project report.
|
Dental clinics, dental departments, or other specialty departments in a general hospital can use this software. Though it was designed primarily with inputs from dental clinics, it could be adopted in other specialty hospitals as well, with some small changes in the medical terms, database, etc. The DCMS software has been developed to provide a comprehensive software solution for clinics. But there are clinics that cannot afford to run such a comprehensive system, or may not require it due to the volume of work handled. Still, to encourage such clinics to use computers for generating the information needed to run the organization efficiently, we provide software from which one can choose according to their requirements. It is a system which helps the dentist keep track of a patient's dental problems over time. The system allows the dentist to help patients improve their awareness and take care of their oral health. The data regarding the patient's dental information will help the patient when applying for the next treatment and can also be used in the future. DCMS can analyse the captured data and produce analysis reports that summarize the dental score for each patient. These reports summarize the patient's dental healthcare performance over time according to the treatments made. Besides helping patients raise their awareness of oral health, the reports also benefit patients by letting them view their dental health performance.
| null |
https://doi.org/10.5281/zenodo.15830550
|
https://doi.org/10.5281/zenodo.15830550
|
Zenodo 2025 7
|
[
"Kamal Acharya"
] |
[
"Management"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/force-imu-fusion-based-sensing-acupuncture
|
2507.04821
| null | null |
Force-IMU Fusion-Based Sensing Acupuncture Needle and Quantitative Analysis System for Acupuncture Manipulations
|
Acupuncture, one of the key therapeutic methods in Traditional Chinese Medicine (TCM), has been widely adopted in various clinical fields. Quantitative research on acupuncture manipulation parameters is critical to achieve standardized techniques. However, quantitative mechanical detection of acupuncture parameters remains limited. This study establishes a kinematic and dynamic model of acupuncture, identifying key parameters such as lifting-thrusting force, acceleration, velocity, displacement, as well as twirling-rotating angular velocity and angle. To measure these critical parameters, we propose a quantitative system comprising a sensing needle equipped with a force sensor and an inertial measurement unit (IMU), as well as an external camera module to capture image information. By fusing visual and IMU data, we accurately identify the stationary or motion states of the needle, enabling segmented computation of lifting-thrusting velocity and displacement. The experimental results demonstrate that the sensing needle achieves comprehensive detection with high precision, featuring a nonlinearity error of 0.45% in force measurement and an RMSE of 1.2 mm in displacement. The extracted parameters provide an objective description of the operational characteristics and motion patterns of the four basic acupuncture manipulations. These findings provide valuable tools and methods for research in acupuncture standardization.
| null |
https://arxiv.org/abs/2507.04821v1
|
https://arxiv.org/pdf/2507.04821v1.pdf
| null |
[
"Peng Tian",
"Kang Yu",
"Tianyun Jiang",
"Yuqi Wang",
"Haiying Zhang",
"Hao Yang",
"Yunfeng Wang",
"Jun Zhang",
"Shuo Gao",
"Junhong Gao"
] |
[] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
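The acupuncture paper above describes segmented computation of lifting-thrusting velocity and displacement, integrating IMU data only while the needle is detected to be in motion. Below is a minimal sketch of that idea under stated assumptions: the stationary mask (obtained in the paper by fusing camera and IMU data) is assumed given, and `integrate_segmented` is an illustrative name, not the authors' code.

```python
# Sketch: integrate axial acceleration into velocity and displacement,
# resetting velocity to zero whenever the needle is detected stationary
# (a zero-velocity update). The stationary mask is assumed to come from
# the paper's vision-IMU fusion; here it is a toy threshold detector.
import numpy as np

def integrate_segmented(acc, is_stationary, dt):
    vel = np.zeros_like(acc)
    disp = np.zeros_like(acc)
    for i in range(1, len(acc)):
        if is_stationary[i]:
            vel[i] = 0.0                       # zero-velocity update
        else:                                  # trapezoidal integration
            vel[i] = vel[i - 1] + 0.5 * (acc[i - 1] + acc[i]) * dt
        disp[i] = disp[i - 1] + 0.5 * (vel[i - 1] + vel[i]) * dt
    return vel, disp

dt = 1e-3                                      # 1 kHz sampling
t = np.arange(0.0, 1.0, dt)
acc = np.sin(2 * np.pi * 2 * t)                # synthetic lifting-thrusting
mask = np.abs(acc) < 0.05                      # toy stationary detector
vel, disp = integrate_segmented(acc, mask, dt)
print(vel.max(), disp[-1])
```

Resetting velocity in stationary phases is what keeps integration drift from accumulating across repeated manipulation cycles.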
https://paperswithcode.com/paper/unsupervised-anomaly-detection-through-mass
|
2502.12793
| null | null |
Unsupervised Anomaly Detection through Mass Repulsing Optimal Transport
|
Detecting anomalies in datasets is a longstanding problem in machine learning. In this context, anomalies are defined as samples that significantly deviate from the remaining data. Meanwhile, optimal transport (OT) is a field of mathematics concerned with transporting mass between two probability measures with the least effort. In classical OT, the optimal transportation strategy of a measure to itself is the identity. In this paper, we tackle anomaly detection by forcing samples to displace their mass, while keeping the least-effort objective. We call this new transportation problem Mass Repulsing Optimal Transport (MROT). Naturally, samples lying in low-density regions of space will be forced to displace mass very far, incurring a higher transportation cost. We use these concepts to design a new anomaly score. Through a series of experiments on existing benchmarks and fault detection problems, we show that our algorithm improves over existing methods.
|
Meanwhile, optimal transport (OT) is a field of mathematics concerned with transporting mass between two probability measures with the least effort.
|
https://arxiv.org/abs/2502.12793v1
|
https://arxiv.org/pdf/2502.12793v1.pdf
| null |
[
"Eduardo Fernandes Montesuma",
"Adel El Habazi",
"Fred Ngole Mboula"
] |
[
"Anomaly Detection",
"Fault Detection",
"Unsupervised Anomaly Detection"
] | 2025-02-18T00:00:00 | null | null | null | null |
[] |
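To make the MROT idea above concrete, here is a minimal sketch under simplifying assumptions: with uniform sample weights, OT between a point cloud and itself reduces to an assignment problem, and forbidding transport to a point's own position (and its nearest neighbour) via a large cost stands in for the paper's mass-repulsion construction. Function and parameter names are illustrative, not the authors' implementation.

```python
# Sketch of an MROT-style anomaly score: each sample must ship its mass to
# another (non-neighbouring) sample; isolated samples ship far and score high.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def mrot_scores(X, k_forbidden=1, big=1e9):
    M = cdist(X, X, metric="sqeuclidean")
    order = np.argsort(M, axis=1)           # order[i, 0] is i itself
    for i in range(len(X)):
        M[i, order[i, :k_forbidden + 1]] = big   # forbid self + k nearest
    rows, cols = linear_sum_assignment(M)        # least-effort assignment
    return M[rows, cols]                         # per-sample transport effort

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), [[8.0, 8.0]]])  # one outlier
scores = mrot_scores(X)
print(np.argmax(scores))  # -> 100, the outlier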
https://paperswithcode.com/paper/empirical-analysis-of-heuristic-and
|
2507.01076
| null | null |
Empirical Analysis Of Heuristic and Approximation Algorithms for the Mutual-Visibility Problem
|
The NP-complete mutual-visibility (MV) problem currently lacks empirical analysis on its practical behaviour despite theoretical studies. This paper addresses this gap by implementing and evaluating three distinct algorithms - a direct greedy heuristic, a hypergraph-based approximation, and a genetic algorithm - on diverse synthetic graph datasets, including those with analytically known $\mu(G)$ values and general graph models. Our results demonstrate that for smaller graphs, the algorithms consistently achieve MV set sizes aligning with theoretical bounds. However, for larger instances, achieved solution sizes notably diverge from theoretical limits; this, combined with the absence of tight bounds, complicates absolute quality assessment. Nevertheless, validation on known optimal graphs showed the Genetic Algorithm and other heuristics empirically performing best among tested methods.
|
The NP-complete mutual-visibility (MV) problem currently lacks empirical analysis on its practical behaviour despite theoretical studies.
|
https://arxiv.org/abs/2507.01076v1
|
https://arxiv.org/pdf/2507.01076v1.pdf
| null |
[
"Vanja Stojanović",
"Bor Pangeršič"
] |
[] | 2025-07-01T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
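The mutual-visibility paper above evaluates a direct greedy heuristic. A minimal sketch of such a heuristic follows, under the standard definition: two vertices of S are mutually visible if some shortest path between them contains no other vertex of S. The acceptance test and names are illustrative, not the authors' code.

```python
# Sketch of a greedy MV heuristic: grow S, accepting a vertex only if every
# pair in the candidate set remains mutually visible.
import itertools
import networkx as nx

def visible(G, u, v, S):
    """u, v are mutually visible w.r.t. S iff deleting S \\ {u, v} leaves
    their distance unchanged, i.e. some geodesic avoids the rest of S."""
    d = nx.shortest_path_length(G, u, v)
    H = G.copy()
    H.remove_nodes_from(set(S) - {u, v})
    try:
        return nx.shortest_path_length(H, u, v) == d
    except nx.NetworkXNoPath:
        return False

def greedy_mv(G):
    S = []
    for x in G.nodes:
        cand = S + [x]
        if all(visible(G, u, v, cand)
               for u, v in itertools.combinations(cand, 2)):
            S = cand
    return S

G = nx.petersen_graph()
print(len(greedy_mv(G)))  # size of the found MV set: a lower bound on mu(G)
```

The repeated all-pairs check is what makes the direct heuristic expensive on large instances, consistent with the scaling behaviour the paper reports.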
https://paperswithcode.com/paper/vote-vision-language-action-optimization-with
|
2507.05116
| null | null |
VOTE: Vision-Language-Action Optimization with Trajectory Ensemble Voting
|
Recent large-scale Vision Language Action (VLA) models have shown superior performance in robotic manipulation tasks guided by natural language. However, their generalization remains limited when applied to novel objects or unfamiliar environments that lie outside the training distribution. To address this, many existing approaches integrate additional components such as depth estimation, segmentation, or even diffusion to improve generalization, at the cost of adding significant computation overhead, resulting in low efficiency. This motivates the exploration of efficient action prediction methods, which are independent of additional high-level visual representations or diffusion techniques. In this work, we propose VOTE, an efficient and general framework for the optimization and acceleration of VLA models. In detail, we propose a novel tokenizer-free fine-tuning approach for parallel accurate action prediction, which reduces computational overhead and accelerates inference speed. Additionally, we adopt an ensemble voting strategy for the action sampling, which significantly improves model performance and enhances generalization. Experimental results show that our method achieves state-of-the-art performance with 35$\times$ faster inference and 145 Hz throughput. All the details and codes will be open-sourced.
|
In this work, we propose VOTE, an efficient and general framework for the optimization and acceleration of VLA models.
|
https://arxiv.org/abs/2507.05116v1
|
https://arxiv.org/pdf/2507.05116v1.pdf
| null |
[
"Juyi Lin",
"Amir Taherin",
"Arash Akbari",
"Arman Akbari",
"Lei Lu",
"Guangyu Chen",
"Taskin Padir",
"Xiaomeng Yang",
"Weiwei Chen",
"Yiqian Li",
"Xue Lin",
"David Kaeli",
"Pu Zhao",
"Yanzhi Wang"
] |
[
"Depth Estimation",
"Vision-Language-Action"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "ADaptive gradient method with the OPTimal convergence rate",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "ADOPT",
"source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate",
"source_url": "https://arxiv.org/abs/2411.02853v3"
},
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
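The VOTE entry above adopts an ensemble voting strategy over sampled action trajectories. The exact voting rule is not given in the abstract; the sketch below illustrates one plausible instantiation, returning the medoid of K candidate action chunks as the consensus trajectory. All names and shapes are assumptions.

```python
# Sketch of trajectory ensemble voting: pick the candidate action chunk
# closest to the ensemble consensus (the medoid under Euclidean distance).
import numpy as np

def vote(chunks):
    """chunks: (K, T, action_dim) candidate action trajectories."""
    flat = chunks.reshape(len(chunks), -1)
    dists = np.linalg.norm(flat[:, None] - flat[None, :], axis=-1)
    return chunks[np.argmin(dists.sum(axis=1))]    # medoid trajectory

rng = np.random.default_rng(0)
candidates = rng.normal(size=(8, 5, 7))   # 8 samples, 5 steps, 7-DoF actions
action_chunk = vote(candidates)
print(action_chunk.shape)                 # (5, 7)
```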
https://paperswithcode.com/paper/disambiguation-centric-finetuning-makes
|
2507.03336
| null | null |
Disambiguation-Centric Finetuning Makes Enterprise Tool-Calling LLMs More Realistic and Less Risky
|
Large language models (LLMs) are increasingly tasked with invoking enterprise APIs, yet they routinely falter when near-duplicate tools vie for the same user intent or when required arguments are left underspecified. We introduce DiaFORGE (Dialogue Framework for Organic Response Generation & Evaluation), a disambiguation-centric, three-stage pipeline that (i) synthesizes persona-driven, multi-turn dialogues in which the assistant must distinguish among highly similar tools, (ii) performs supervised fine-tuning of open-source models with reasoning traces across 3B - 70B parameters, and (iii) evaluates real-world readiness via a dynamic suite that redeploys each model in a live agentic loop and reports end-to-end goal completion alongside conventional static metrics. On our dynamic benchmark DiaBENCH, models trained with DiaFORGE raise tool-invocation success by 27 pp over GPT-4o and by 49 pp over Claude-3.5-Sonnet, both under optimized prompting. To spur further research, we release an open corpus of 5000 production-grade enterprise API specifications paired with rigorously validated, disambiguation-focused dialogues, offering a practical blueprint for building reliable, enterprise-ready tool-calling agents.
| null |
https://arxiv.org/abs/2507.03336v1
|
https://arxiv.org/pdf/2507.03336v1.pdf
| null |
[
"Ashutosh Hathidara",
"Julien Yu",
"Sebastian Schreiber"
] |
[
"Response Generation"
] | 2025-07-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adaptive-gate-aware-mamba-networks-for
|
2507.03369
| null | null |
Adaptive Gate-Aware Mamba Networks for Magnetic Resonance Fingerprinting
|
Magnetic Resonance Fingerprinting (MRF) enables fast quantitative imaging by matching signal evolutions to a predefined dictionary. However, conventional dictionary matching suffers from exponential growth in computational cost and memory usage as the number of parameters increases, limiting its scalability to multi-parametric mapping. To address this, recent work has explored deep learning-based approaches as alternatives to dictionary matching (DM). We propose GAST-Mamba, an end-to-end framework that combines a dual Mamba-based encoder with a Gate-Aware Spatial-Temporal (GAST) processor. Built on structured state-space models, our architecture efficiently captures long-range spatial dependencies with linear complexity. On 5 times accelerated simulated MRF data (200 frames), GAST-Mamba achieved a T1 PSNR of 33.12~dB, outperforming SCQ (31.69~dB). For T2 mapping, it reached a PSNR of 30.62~dB and SSIM of 0.9124. In vivo experiments further demonstrated improved anatomical detail and reduced artifacts. Ablation studies confirmed that each component contributes to performance, with the GAST module being particularly important under strong undersampling. These results demonstrate the effectiveness of GAST-Mamba for accurate and robust reconstruction from highly undersampled MRF acquisitions, offering a scalable alternative to traditional DM-based methods.
| null |
https://arxiv.org/abs/2507.03369v1
|
https://arxiv.org/pdf/2507.03369v1.pdf
| null |
[
"Tianyi Ding",
"Hongli Chen",
"Yang Gao",
"Zhuang Xiong",
"Feng Liu",
"Martijn A. Cloos",
"Hongfu Sun"
] |
[
"Magnetic Resonance Fingerprinting",
"Mamba",
"SSIM",
"State Space Models"
] | 2025-07-04T00:00:00 | null | null | null | null |
[] |
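For context on the GAST-Mamba entry above, here is a minimal sketch of the conventional dictionary-matching baseline it replaces: each measured fingerprint is matched to the simulated dictionary atom with maximal normalized correlation, and that atom's (T1, T2) label is returned. The data and names are illustrative; the exponential dictionary growth with more tissue parameters is what motivates the learned alternative.

```python
# Sketch of MRF dictionary matching (DM): max normalized inner product
# between each voxel's fingerprint and every dictionary atom.
import numpy as np

def dictionary_match(signals, dictionary, params):
    """signals: (N, F) fingerprints; dictionary: (D, F) simulated atoms;
    params: (D, 2) (T1, T2) values attached to each atom."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(np.abs(s @ d.T), axis=1)    # best-correlated atom
    return params[best]

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(1000, 200))        # 1000 atoms, 200 frames
params = rng.uniform([300, 20], [2000, 300], size=(1000, 2))  # ms ranges
signals = dictionary[:5] + 0.01 * rng.normal(size=(5, 200))   # noisy voxels
print(dictionary_match(signals, dictionary, params))
```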
https://paperswithcode.com/paper/taylor-model-physics-informed-neural-networks
|
2507.03860
| null | null |
Taylor-Model Physics-Informed Neural Networks (PINNs) for Ordinary Differential Equations
|
We study the problem of learning neural network models for Ordinary Differential Equations (ODEs) with parametric uncertainties. Such neural network models capture the solution to the ODE over a given set of parameters, initial conditions, and range of times. Physics-Informed Neural Networks (PINNs) have emerged as a promising approach for learning such models that combine data-driven deep learning with symbolic physics models in a principled manner. However, the accuracy of PINNs degrades when they are used to solve an entire family of initial value problems characterized by varying parameters and initial conditions. In this paper, we combine symbolic differentiation and Taylor series methods to propose a class of higher-order models for capturing the solutions to ODEs. These models combine neural networks and symbolic terms: they use higher order Lie derivatives and a Taylor series expansion obtained symbolically, with the remainder term modeled as a neural network. The key insight is that the remainder term can itself be modeled as a solution to a first-order ODE. We show how the use of these higher order PINNs can improve accuracy on interesting but challenging ODE benchmarks. We also show that the resulting model can be quite useful for situations such as controlling uncertain physical systems modeled as ODEs.
|
These models combine neural networks and symbolic terms: they use higher order Lie derivatives and a Taylor series expansion obtained symbolically, with the remainder term modeled as a neural network.
|
https://arxiv.org/abs/2507.03860v1
|
https://arxiv.org/pdf/2507.03860v1.pdf
| null |
[
"Chandra Kanth Nagesh",
"Sriram Sankaranarayanan",
"Ramneet Kaur",
"Tuhin Sahai",
"Susmit Jha"
] |
[] | 2025-07-05T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
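The Taylor-model PINN entry above relies on higher-order Lie derivatives obtained symbolically. A minimal sketch of that symbolic ingredient follows: for an autonomous scalar ODE dx/dt = f(x), the successive time derivatives of the solution are Lie derivatives of f, which give the Taylor polynomial; the paper models the truncation remainder with a neural network, which is omitted here. The right-hand side and names are illustrative.

```python
# Sketch: build the symbolic Taylor polynomial of the flow of dx/dt = f(x)
# via Lie derivatives, using sympy. The remainder term (modeled as a neural
# network in the paper) is not included.
import sympy as sp

x, h = sp.symbols("x h")
f = -x + sp.sin(x)                 # illustrative right-hand side

def lie_derivatives(f, order):
    """derivs[k] = d^{k+1} x / dt^{k+1} along dx/dt = f(x):
    L^0 f = f, and L^{k+1} f = (d L^k f / dx) * f."""
    derivs = [f]
    for _ in range(order - 1):
        derivs.append(sp.diff(derivs[-1], x) * f)
    return derivs

order = 4
L = lie_derivatives(f, order)
# x(t + h) ~ x + sum_k h^{k+1}/(k+1)! * L^k f(x)   (+ learned remainder)
taylor = x + sum(h**(k + 1) / sp.factorial(k + 1) * L[k] for k in range(order))
print(sp.simplify(taylor.subs({x: 1.0, h: 0.1})))
```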
https://paperswithcode.com/paper/flow-through-tensors-a-unified-computational
|
2507.02961
| null | null |
Flow-Through Tensors: A Unified Computational Graph Architecture for Multi-Layer Transportation Network Optimization
|
Modern transportation network modeling increasingly involves the integration of diverse methodologies including sensor-based forecasting, reinforcement learning, classical flow optimization, and demand modeling that have traditionally been developed in isolation. This paper introduces Flow Through Tensors (FTT), a unified computational graph architecture that connects origin destination flows, path probabilities, and link travel times as interconnected tensors. Our framework makes three key contributions: first, it establishes a consistent mathematical structure that enables gradient-based optimization across previously separate modeling elements; second, it supports multidimensional analysis of traffic patterns over time, space, and user groups with precise quantification of system efficiency; third, it implements tensor decomposition techniques that maintain computational tractability for large scale applications. These innovations collectively enable real time control strategies, efficient coordination between multiple transportation modes and operators, and rigorous enforcement of physical network constraints. The FTT framework bridges the gap between theoretical transportation models and practical deployment needs, providing a foundation for next generation integrated mobility systems.
| null |
https://arxiv.org/abs/2507.02961v1
|
https://arxiv.org/pdf/2507.02961v1.pdf
| null |
[
"Xuesong",
"Zhou",
"Taehooie Kim",
"Mostafa Ameli",
"Henan",
"Zhu",
"Yu- dai Honma",
"Ram M. Pendyala"
] |
[
"Tensor Decomposition"
] | 2025-06-30T00:00:00 | null | null | null | null |
[] |
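The FTT entry above chains origin-destination flows, path probabilities, and link travel times as interconnected tensors. The sketch below illustrates that kind of tensor composition with a standard traffic-assignment relation; the shapes, the path-link incidence tensor, and the BPR volume-delay function are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: OD flows x path-choice probabilities x path-link incidence
# -> link flows -> link travel times via a BPR-style delay function.
import numpy as np

od_flow   = np.array([100.0, 60.0])            # demand per OD pair
path_prob = np.array([[0.7, 0.3],              # P(path | OD pair)
                      [0.4, 0.6]])
incidence = np.array([[[1, 0, 1],              # delta[od, path, link]
                       [0, 1, 1]],
                      [[1, 1, 0],
                       [0, 1, 1]]], dtype=float)

# link_flow[l] = sum_{od,p} od_flow[od] * path_prob[od,p] * delta[od,p,l]
link_flow = np.einsum("o,op,opl->l", od_flow, path_prob, incidence)

capacity = np.array([120.0, 90.0, 150.0])
t0 = np.array([1.0, 1.5, 2.0])                 # free-flow times
link_time = t0 * (1 + 0.15 * (link_flow / capacity) ** 4)  # BPR delay
print(link_flow, link_time)
```

Because every step is a differentiable tensor operation, gradients can flow from link times back to OD demand, which is the kind of end-to-end optimization the abstract describes.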
https://paperswithcode.com/paper/fuzzy-classification-aggregation-for-a
|
2507.05297
| null | null |
Fuzzy Classification Aggregation for a Continuum of Agents
|
We prove that any optimal, independent, and zero unanimous fuzzy classification aggregation function of a continuum of individual classifications of $m\ge 3$ objects into $2\le p\le m$ types must be a weighted arithmetic mean.
| null |
https://arxiv.org/abs/2507.05297v1
|
https://arxiv.org/pdf/2507.05297v1.pdf
| null |
[
"Zijun Meng"
] |
[
"Classification"
] | 2025-07-06T00:00:00 | null | null | null | null |
[] |
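The fuzzy-classification result above characterizes the aggregator as a weighted arithmetic mean. For concreteness, a weighted arithmetic mean over a continuum of agents has the form sketched below; the notation (agent index set $I$, weights $w_i$, measure $\mu$) is illustrative rather than the paper's exact symbols.

```latex
% Sketch of the characterized form: the aggregate membership of object x in
% type t is a fixed weighted average of the individual fuzzy classifications
% C_i(x, t) over the continuum I of agents.
\[
  F\bigl((C_i)_{i \in I}\bigr)(x, t) \;=\; \int_{I} w_i \, C_i(x, t) \, d\mu(i),
  \qquad w_i \ge 0, \qquad \int_{I} w_i \, d\mu(i) = 1 .
\]
```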
https://paperswithcode.com/paper/multiple-integration-model-for-single-source
| null | null | null |
Multiple Integration Model for Single-source Domain Generalizable Person Re-identification
|
Domain generalizable (DG) person re-identification (re-ID) aims to train a model on labeled source domains that performs well on unseen target domains. Because of the distribution shifts between different domains, it is a challenging task. Existing methods address this challenge by using multiple source domains to train a model, which requires more data, manual labor, and computation. In contrast, we pay attention to the single-source DG re-ID task, in which only one source domain is accessible for training. However, due to the limited availability of training data, this task is more difficult. In this paper, a novel MulTiple Integration (MTI) model is introduced for single-source DG person re-ID. By integrating multiple reliable perturbations, the generalization performance can be improved. Specifically, the MTI model contains two types of integration modules: shallow-level compensation (SLC) and deep-level integration (DLI). For SLC, following the idea of continual learning, the shallow-level information of an ImageNet pre-trained ResNet-50 branch is introduced and fused with the shallow-level information of our backbone network. In this way, the massive information in ImageNet can be used to prevent catastrophic forgetting of the pre-trained information, and information compensation can be provided for the backbone network. Additionally, we propose a hybrid integrated normalization layer to fuse information and improve the model's generalization performance. For DLI, a wave transformer block is introduced in the deep layers of the backbone, which can integrate the information of a batch of images and introduce reliable perturbation, so that the robustness of the model is promoted. Extensive experimental results demonstrate the superiority of our model.
| null |
https://www.sciencedirect.com/science/article/abs/pii/S1047320323002870
|
https://www.sciencedirect.com/science/article/abs/pii/S1047320323002870
|
JVCIR 2024 2
|
[
"Jia Sun",
"Yanfeng Li",
"Luyifu Chen",
"Houjin Chen",
"Wanru Peng"
] |
[
"Continual Learning",
"Generalizable Person Re-identification",
"Person Re-Identification"
] | 2024-02-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/chat-ghosting-a-comparative-study-of-methods
|
2507.05940
| null | null |
Chat-Ghosting: A Comparative Study of Methods for Auto-Completion in Dialog Systems
|
Ghosting, the ability to predict a user's intended text input for inline query auto-completion, is an invaluable feature for modern search engines and chat interfaces, greatly enhancing user experience. By suggesting completions to incomplete queries (or prefixes), ghosting aids users with slow typing speeds, disabilities, or limited language proficiency. Ghosting is a challenging problem and has become more important with the ubiquity of chat-based systems like ChatGPT, Copilot, etc. Despite the increasing prominence of chat-based systems utilizing ghosting, this challenging problem of Chat-Ghosting has received little attention from the NLP/ML research community. There is a lack of standardized benchmarks and relative performance analysis of deep learning and non-deep learning methods. We address this through an open and thorough study of this problem using four publicly available dialog datasets: two human-human (DailyDialog and DSTC7-Ubuntu) and two human-bot (Open Assistant and ShareGPT). We experiment with various existing query auto-completion methods (using tries), n-gram methods and deep learning methods, with and without dialog context. We also propose a novel entropy-based dynamic early stopping strategy. Our analysis finds that statistical n-gram models and tries outperform deep learning based models in terms of both model performance and inference efficiency for seen prefixes. For unseen queries, neural models like T5 and Phi-2 lead to better results. Adding conversational context leads to significant improvements in ghosting quality, especially for Open-Assistant and ShareGPT. We make code and data publicly available.
| null |
https://arxiv.org/abs/2507.05940v1
|
https://arxiv.org/pdf/2507.05940v1.pdf
| null |
[
"Sandeep Mishra",
"Anubhab Mandal",
"Bishal Santra",
"Tushar Abhishek",
"Pawan Goyal",
"Manish Gupta"
] |
[
"Deep Learning"
] | 2025-07-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!",
"full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v5"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "A Gated Linear Unit, or GLU computes:\r\n\r\n$$\r\n\\mathrm{GLU}(a, b) = a \\otimes \\sigma(b)\r\n$$\r\n\r\nIt is used in natural language processing architectures, for example the Gated CNN, because here $\\sigma(b)$ is the gate that control what information from $a$ is passed up to the following layer. Intuitively, for a language modeling task, the gating mechanism allows selection of words or features that are important for predicting the next word. The GLU also has non-linear capabilities, but has a linear path for the gradient so diminishes the vanishing gradient problem.",
"full_name": "Gated Linear Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Gated Linear Unit",
"source_title": "Language Modeling with Gated Convolutional Networks",
"source_url": "http://arxiv.org/abs/1612.08083v3"
},
{
"code_snippet_url": "https://github.com/DeadAt0m/adafactor-pytorch/blob/561e627239c29c0be11256171a795b49e0404098/adafactor.py#L7",
"description": "**Adafactor** is a stochastic optimization method based on [Adam](https://paperswithcode.com/method/adam) that reduces memory usage while retaining the empirical benefits of adaptivity. This is achieved through maintaining a factored representation of the squared gradient accumulator across training steps. Specifically, by tracking moving averages of the row and column sums of the squared gradients for matrix-valued variables, we are able to reconstruct a low-rank approximation of the exponentially smoothed accumulator at each training step that is optimal with respect to the generalized Kullback-Leibler divergence. For an $n \\times m$ matrix, this reduces the memory requirements from $O(n m)$ to $O(n + m)$. \r\n\r\nInstead of defining the optimization algorithm in terms of absolute step sizes {$\\alpha_t$}$\\_{t=1}^T$, the authors define the optimization algorithm in terms of relative step sizes {$\\rho_t$}$\\_{t=1}^T$, which get multiplied by the scale of the parameters. The scale of a parameter vector or matrix is defined as the root-mean-square of its components, lower-bounded by a small constant $\\epsilon_2$. The reason for this lower bound is to allow zero-initialized parameters to escape 0. \r\n\r\nProposed hyperparameters are: $\\epsilon\\_{1} = 10^{-30}$, $\\epsilon\\_{2} = 10^{-3}$, $d=1$, $p\\_{t} = \\min\\left(10^{-2}, \\frac{1}{\\sqrt{t}}\\right)$, $\\hat{\\beta}\\_{2\\_{t}} = 1 - t^{-0.8}$.",
"full_name": "Adafactor",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adafactor",
"source_title": "Adafactor: Adaptive Learning Rates with Sublinear Memory Cost",
"source_url": "http://arxiv.org/abs/1804.04235v1"
},
{
"code_snippet_url": null,
"description": "**Inverse Square Root** is a learning rate schedule 1 / $\\sqrt{\\max\\left(n, k\\right)}$ where\r\n$n$ is the current training iteration and $k$ is the number of warm-up steps. This sets a constant learning rate for the first $k$ steps, then exponentially decays the learning rate until pre-training is over.",
"full_name": "Inverse Square Root Schedule",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Inverse Square Root Schedule",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**T5**, or **Text-to-Text Transfer Transformer**, is a [Transformer](https://paperswithcode.com/method/transformer) based architecture that uses a text-to-text approach. Every task – including translation, question answering, and classification – is cast as feeding the model text as input and training it to generate some target text. This allows for the use of the same model, loss function, hyperparameters, etc. across our diverse set of tasks. The changes compared to [BERT](https://paperswithcode.com/method/bert) include:\r\n\r\n- adding a *causal* decoder to the bidirectional architecture.\r\n- replacing the fill-in-the-blank cloze task with a mix of alternative pre-training tasks.",
"full_name": "T5",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "T5",
"source_title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
"source_url": "https://arxiv.org/abs/1910.10683v4"
},
{
"code_snippet_url": "",
"description": "**Early Stopping** is a regularization technique for deep neural networks that stops training when parameter updates no longer begin to yield improves on a validation set. In essence, we store and update the current best parameters during training, and when parameter updates no longer yield an improvement (after a set number of iterations) we stop training and use the last best parameters. It works as a regularizer by restricting the optimization procedure to a smaller volume of parameter space.\r\n\r\nImage Source: [Ramazan Gençay](https://www.researchgate.net/figure/Early-stopping-based-on-cross-validation_fig1_3302948)",
"full_name": "Early Stopping",
"introduced_year": 1995,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Early Stopping",
"source_title": null,
"source_url": null
}
] |
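The Chat-Ghosting entry above finds that tries work well for seen prefixes and proposes an entropy-based dynamic early stopping strategy. The sketch below illustrates both ideas together: a frequency-weighted trie that ghosts the most likely continuation, stopping once the children's distribution becomes too uncertain. The threshold, data, and exact stopping rule are illustrative assumptions, not the paper's implementation.

```python
# Sketch: trie-based ghosting with an entropy-style dynamic stopping rule.
import math
from collections import defaultdict

class TrieNode:
    def __init__(self):
        self.children = defaultdict(TrieNode)
        self.count = 0

def insert(root, query):
    node = root
    for ch in query:
        node = node.children[ch]
        node.count += 1

def ghost(root, prefix, max_entropy=0.8, max_len=30):
    node = root
    for ch in prefix:                          # descend to the prefix
        if ch not in node.children:
            return ""                          # unseen prefix: no suggestion
        node = node.children[ch]
    out = []
    while node.children and len(out) < max_len:
        total = sum(c.count for c in node.children.values())
        probs = [c.count / total for c in node.children.values()]
        entropy = -sum(p * math.log2(p) for p in probs)
        if entropy > max_entropy:              # too uncertain: stop ghosting
            break
        ch, node = max(node.children.items(), key=lambda kv: kv[1].count)
        out.append(ch)
    return "".join(out)

root = TrieNode()
for q in ["how do i reset my password", "how do i reset my phone",
          "how do i reset my password"]:
    insert(root, q)
print(ghost(root, "how do i res"))  # -> "et my p": stops where queries diverge
```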
https://paperswithcode.com/paper/relayout-integrating-relation-reasoning-for
|
2507.05568
| null | null |
ReLayout: Integrating Relation Reasoning for Content-aware Layout Generation with Multi-modal Large Language Models
|
Content-aware layout aims to arrange design elements appropriately on a given canvas to convey information effectively. Recently, the trend for this task has been to leverage large language models (LLMs) to generate layouts automatically, achieving remarkable performance. However, existing LLM-based methods fail to adequately interpret spatial relationships among visual themes and design elements, leading to problems with structure and diversity in layout generation. To address this issue, we introduce ReLayout, a novel method that leverages relation-CoT to generate more reasonable and aesthetically coherent layouts, grounded in fundamental design concepts. Specifically, we enhance layout annotations by introducing explicit relation definitions, such as region, salient, and margin between elements, with the goal of decomposing the layout into smaller, structured, and recursive layouts, thereby enabling the generation of more structured layouts. Furthermore, based on these defined relationships, we introduce a layout prototype rebalance sampler, which defines layout prototype features across three dimensions and quantifies distinct layout styles. This sampler addresses uniformity issues in generation that arise from data bias in the prototype distribution balance process. Extensive experimental results verify that ReLayout outperforms baselines and can generate structured and diverse layouts that are more aligned with human aesthetics and more explainable.
| null |
https://arxiv.org/abs/2507.05568v1
|
https://arxiv.org/pdf/2507.05568v1.pdf
| null |
[
"Jiaxu Tian",
"Xuehui Yu",
"Yaoxing Wang",
"Pan Wang",
"Guangqian Guo",
"Shan Gao"
] |
[
"Layout Generation"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/skywork-r1v3-technical-report
|
2507.06167
| null | null |
Skywork-R1V3 Technical Report
|
We introduce Skywork-R1V3, an advanced, open-source vision-language model (VLM) that pioneers a new approach to visual reasoning. Its key innovation lies in effectively transferring reasoning skills from text-only Large Language Models (LLMs) to visual tasks. The strong performance of Skywork-R1V3 primarily stems from our elaborate post-training RL framework, which effectively activates and enhances the model's reasoning ability, without the need for additional continued pre-training. Through this framework, we further uncover the fundamental role of the connector module in achieving robust cross-modal alignment for multimodal reasoning models. In addition, we introduce a unique indicator of reasoning capability, the entropy of critical reasoning tokens, which has proven highly effective for checkpoint selection during RL training. Skywork-R1V3 achieves state-of-the-art results on MMMU, significantly improving from 64.3% to 76.0%. This performance matches entry-level human capabilities. Remarkably, our RL-powered post-training approach enables even the 38B parameter model to rival top closed-source VLMs. The implementation successfully transfers mathematical reasoning to other subject-related reasoning tasks. We also include an analysis of curriculum learning and reinforcement finetuning strategies, along with a broader discussion on multimodal reasoning. Skywork-R1V3 represents a significant leap in multimodal reasoning, showcasing RL as a powerful engine for advancing open-source VLM capabilities.
|
The strong performance of Skywork-R1V3 primarily stems from our elaborate post-training RL framework, which effectively activates and enhances the model's reasoning ability, without the need for additional continued pre-training.
|
https://arxiv.org/abs/2507.06167v1
|
https://arxiv.org/pdf/2507.06167v1.pdf
| null |
[
"Wei Shen",
"Jiangbo Pei",
"Yi Peng",
"Xuchen Song",
"Yang Liu",
"Jian Peng",
"Haofeng Sun",
"Yunzhuo Hao",
"Peiyu Wang",
"Yahui Zhou"
] |
[
"cross-modal alignment",
"Mathematical Reasoning",
"Multimodal Reasoning",
"Visual Reasoning"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
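The Skywork-R1V3 entry above uses the entropy of critical reasoning tokens as a checkpoint-selection indicator. Below is a minimal sketch of that quantity under stated assumptions: which token positions count as "critical" is the paper's design choice, so the positions are assumed given, and `token_entropy` is an illustrative name.

```python
# Sketch: average entropy of the model's next-token distribution at a set
# of critical reasoning positions. Lower entropy = more confident reasoning.
import numpy as np

def token_entropy(logits):
    """logits: (T, V) pre-softmax scores at T critical positions."""
    z = logits - logits.max(axis=-1, keepdims=True)    # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
sharp = token_entropy(5.0 * rng.normal(size=(16, 32000)))  # peaked logits
flat  = token_entropy(0.1 * rng.normal(size=(16, 32000)))  # near-uniform
print(sharp < flat)  # -> True: the confident checkpoint scores lower
```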
https://paperswithcode.com/paper/unsupervised-multi-source-domain-adaptation-2
| null | null | null |
Unsupervised multi-source domain adaptation for person re-identification via feature fusion and pseudo-label refinement
|
The objective of unsupervised domain adaptation (UDA) for person re-identification (re-ID) is to associate persons across images captured from heterogeneous camera perspectives. Currently, mainstream UDA methods for person re-ID are mainly conducted in single-source and single-target domain scenarios. Moreover, most of these methods do not take the impact of pseudo-label noise on model performance into consideration. Therefore, we put forward an unsupervised multi-source domain adaptation (UMDA) method for person re-ID via feature fusion and pseudo-label refinement. Our method is designed for scenarios where there exist several source domains and only one target domain. We use feature fusion techniques to minimize the domain disparity among the source domains, and employ pseudo-label refinement techniques to mitigate the effects of label noise on model predictions. To validate the effectiveness of the proposed method, we carry out a series of experiments on multiple datasets. The experimental results demonstrate the advantages of our method.
| null |
https://www.sciencedirect.com/science/article/abs/pii/S0045790623004536?via%3Dihub
|
https://www.sciencedirect.com/science/article/abs/pii/S0045790623004536?via%3Dihub
|
Comput. Electr. Eng 2024 1
|
[
"Qing Tian",
"Yao Cheng",
"Sizhen He",
"Jixin Sun"
] |
[
"Domain Adaptation",
"Person Re-Identification",
"Pseudo Label",
"Unsupervised Domain Adaptation"
] | 2024-01-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/enhancing-scientific-visual-question
|
2507.06183
| null | null |
Enhancing Scientific Visual Question Answering through Multimodal Reasoning and Ensemble Modeling
|
Technical reports and articles often contain valuable information in the form of semi-structured data like charts and figures. Interpreting these and using the information from them is essential for downstream tasks such as question answering (QA). Current approaches to visual question answering often struggle with the precision required for scientific data interpretation, particularly in handling numerical values, multi-step reasoning over visual elements, and maintaining consistency between visual observation and textual reasoning. We present our approach to the SciVQA 2025 shared task, focusing on answering visual and non-visual questions grounded in scientific figures from scholarly articles. We conducted a series of experiments using models with 5B to 8B parameters. Our strongest individual model, InternVL3, achieved ROUGE-1 and ROUGE-L F1 scores of \textbf{0.740} and a BERTScore of \textbf{0.983} on the SciVQA test split. We also developed an ensemble model with multiple vision language models (VLMs). Through error analysis on the validation split, our ensemble approach improved performance compared to most individual models, though InternVL3 remained the strongest standalone performer. Our findings underscore the effectiveness of prompt optimization, chain-of-thought reasoning and ensemble modeling in improving the model's ability in visual question answering.
| null |
https://arxiv.org/abs/2507.06183v1
|
https://arxiv.org/pdf/2507.06183v1.pdf
| null |
[
"Prahitha Movva",
"Naga Harshita Marupaka"
] |
[
"Articles",
"Multimodal Reasoning",
"Question Answering",
"Visual Question Answering"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/unsupervised-domain-adaptation-for-cross-3
| null | null | null |
Unsupervised Domain Adaptation for Cross-Regional Scenes Person Re-identification
|
In large-scale surveillance systems, the absence of positive cross-camera pedestrian samples in cross-regional scenes poses a limitation on the performance of person re-identification models. To tackle this challenge, an unsupervised domain adaptive person re-identification method incorporating multi-granularity feature mining and domain-invariant feature learning is proposed. The method comprises a multi-granularity feature learning module and a domain distribution alignment module. Within the multi-granularity feature learning module, global discriminant features of pedestrians are extracted through global feature learning. To further enhance the discriminative features of pedestrians, a local consistency feature learning module is proposed to strengthen interactions among local features. Through the learning of both global and local features, the network is encouraged to extract multi-granularity discriminative features, thereby elevating the performance of the person re-identification model. Additionally, this study incorporates a domain distribution alignment module, conducting style transfer to construct positive samples with diverse styles across cameras for the target domain. This not only addresses the issue of the lack of positive samples across cameras in cross-regional scenes but also enhances the domain adaptation capabilities of the model. Extensive experiments conducted on the Market-1501, DukeMTMC, CUHK03 and MSMT17 datasets demonstrate the effectiveness of the proposed method compared to state-of-the-art person re-identification methods.
| null |
https://xuebao.sjtu.edu.cn/EN/10.16183/j.cnki.jsjtu.2023.635
|
https://xuebao.sjtu.edu.cn/EN/10.16183/j.cnki.jsjtu.2023.635
|
Shanghai Jiao Tong Univ 2023 3
|
[
"Mao Yanmei",
"Li Huafeng",
"Zhang Yafei"
] |
[
"Domain Adaptation",
"Domain Adaptive Person Re-Identification",
"Person Re-Identification",
"Style Transfer",
"Unsupervised Domain Adaptation"
] | 2023-03-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/modelcitizens-representing-community-voices
|
2507.05455
| null | null |
ModelCitizens: Representing Community Voices in Online Safety
|
Automatic toxic language detection is critical for creating safe, inclusive online spaces. However, it is a highly subjective task, with perceptions of toxic language shaped by community norms and lived experience. Existing toxicity detection models are typically trained on annotations that collapse diverse annotator perspectives into a single ground truth, erasing important context-specific notions of toxicity such as reclaimed language. To address this, we introduce MODELCITIZENS, a dataset of 6.8K social media posts and 40K toxicity annotations across diverse identity groups. To capture the role of conversational context on toxicity, typical of social media posts, we augment MODELCITIZENS posts with LLM-generated conversational scenarios. State-of-the-art toxicity detection tools (e.g. OpenAI Moderation API, GPT-o4-mini) underperform on MODELCITIZENS, with further degradation on context-augmented posts. Finally, we release LLAMACITIZEN-8B and GEMMACITIZEN-12B, LLaMA- and Gemma-based models finetuned on MODELCITIZENS, which outperform GPT-o4-mini by 5.5% on in-distribution evaluations. Our findings highlight the importance of community-informed annotation and modeling for inclusive content moderation.
| null |
https://arxiv.org/abs/2507.05455v1
|
https://arxiv.org/pdf/2507.05455v1.pdf
| null |
[
"Ashima Suvarna",
"Christina Chance",
"Hamid Palangi",
"Sophie Hao",
"Thomas Hartvigsen",
"Saadia Gabriel"
] |
[] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ec-flow-enabling-versatile-robotic
|
2507.06224
| null | null |
EC-Flow: Enabling Versatile Robotic Manipulation from Action-Unlabeled Videos via Embodiment-Centric Flow
|
Current language-guided robotic manipulation systems often require low-level action-labeled datasets for imitation learning. While object-centric flow prediction methods mitigate this issue, they remain limited to scenarios involving rigid objects with clear displacement and minimal occlusion. In this work, we present Embodiment-Centric Flow (EC-Flow), a framework that directly learns manipulation from action-unlabeled videos by predicting embodiment-centric flow. Our key insight is that incorporating the embodiment's inherent kinematics significantly enhances generalization to versatile manipulation scenarios, including deformable object handling, occlusions, and non-object-displacement tasks. To connect the EC-Flow with language instructions and object interactions, we further introduce a goal-alignment module by jointly optimizing movement consistency and goal-image prediction. Moreover, translating EC-Flow to executable robot actions only requires a standard robot URDF (Unified Robot Description Format) file to specify kinematic constraints across joints, which makes it easy to use in practice. We validate EC-Flow on both simulation (Meta-World) and real-world tasks, demonstrating its state-of-the-art performance in occluded object handling (62% improvement), deformable object manipulation (45% improvement), and non-object-displacement tasks (80% improvement) than prior state-of-the-art object-centric flow methods. For more information, see our project website at https://ec-flow1.github.io .
| null |
https://arxiv.org/abs/2507.06224v1
|
https://arxiv.org/pdf/2507.06224v1.pdf
| null |
[
"Yixiang Chen",
"Peiyan Li",
"Yan Huang",
"Jiabing Yang",
"Kehan Chen",
"Liang Wang"
] |
[
"Deformable Object Manipulation",
"Imitation Learning",
"Object"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/beyond-appearance-geometric-cues-for-robust
|
2507.05948
| null | null |
Beyond Appearance: Geometric Cues for Robust Video Instance Segmentation
|
Video Instance Segmentation (VIS) fundamentally struggles with pervasive challenges including object occlusions, motion blur, and appearance variations during temporal association. To overcome these limitations, this work introduces geometric awareness to enhance VIS robustness by strategically leveraging monocular depth estimation. We systematically investigate three distinct integration paradigms. The Expanding Depth Channel (EDC) method concatenates the depth map as an additional input channel to the segmentation network; Sharing ViT (SV) designs a uniform ViT backbone shared between the depth estimation and segmentation branches; Depth Supervision (DS) makes use of depth prediction as an auxiliary training guide for feature learning. Though DS exhibits limited effectiveness, benchmark evaluations demonstrate that EDC and SV significantly enhance the robustness of VIS. With a Swin-L backbone, our EDC method reaches 56.2 AP, which sets a new state-of-the-art result on the OVIS benchmark. This work conclusively establishes depth cues as critical enablers for robust video understanding.
| null |
https://arxiv.org/abs/2507.05948v1
|
https://arxiv.org/pdf/2507.05948v1.pdf
| null |
[
"Quanzhu Niu",
"Yikang Zhou",
"Shihao Chen",
"Tao Zhang",
"Shunping Ji"
] |
[
"Depth Estimation",
"Depth Prediction",
"Instance Segmentation",
"Monocular Depth Estimation",
"Segmentation",
"Semantic Segmentation",
"Video Instance Segmentation",
"Video Understanding"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
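The EDC paradigm in the entry above is simple enough to show directly: the predicted monocular depth map becomes a fourth input channel alongside RGB before the segmentation network. The sketch below illustrates this under assumptions (min-max depth normalization, channel-first layout); it is not the authors' preprocessing code.

```python
# Sketch of Expanding Depth Channel (EDC): concatenate a normalized depth
# map to the RGB frame as a fourth input channel.
import numpy as np

def expand_depth_channel(rgb, depth):
    """rgb: (3, H, W) in [0, 1]; depth: (H, W) metric depth map."""
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-6)  # scale to [0, 1]
    return np.concatenate([rgb, d[None]], axis=0)       # (4, H, W)

rgb = np.random.rand(3, 480, 640).astype(np.float32)
depth = np.random.rand(480, 640).astype(np.float32) * 80.0
x = expand_depth_channel(rgb, depth)
print(x.shape)  # (4, 480, 640): the backbone's first conv takes 4 channels
```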
https://paperswithcode.com/paper/adaptive-cross-domain-learning-for
| null | null | null |
Adaptive Cross-Domain Learning for Generalizable Person Re-Identification
|
Domain Generalizable Person Re-Identification (DG-ReID) is a more practical ReID task that is trained on multiple source domains and tested on unseen target domains. Most existing methods struggle to deal with the shared and specific characteristics among different domains, which is called the domain conflict problem. To address this problem, we present an Adaptive Cross-domain Learning (ACL) framework equipped with a CrOss-Domain Embedding Block (CODE-Block) to maintain a common feature space for capturing both the domain-invariant and the domain-specific features, while dynamically mining the relations across different domains. Moreover, our model adaptively adjusts the architecture to focus on learning the corresponding features of a single domain at a time without interference from the biased features of other domains. Specifically, the CODE-Block is composed of two complementary branches, a dynamic branch for extracting domain-adaptive features and a static branch for extracting the domain-invariant features. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art performance on the popular benchmarks. Under Protocol-2, our method outperforms previous SOTA by 7.8% and 7.6% in terms of mAP and rank-1 accuracy.
| null |
https://link.springer.com/chapter/10.1007/978-3-031-19781-9_13
|
https://link.springer.com/chapter/10.1007/978-3-031-19781-9_13
|
ECCV 2022 10
|
[
"Pengyi Zhang",
"Huanzhang Dou",
"Yunlong Yu",
"Xi Li"
] |
[
"Generalizable Person Re-identification",
"Person Re-Identification",
"Unsupervised Domain Adaptation"
] | 2022-10-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/s-2-edit-text-guided-image-editing-with
|
2507.04584
| null | null |
S$^2$Edit: Text-Guided Image Editing with Precise Semantic and Spatial Control
|
Recent advances in diffusion models have enabled high-quality generation and manipulation of images guided by texts, as well as concept learning from images. However, naive applications of existing methods to editing tasks that require fine-grained control, e.g., face editing, often lead to suboptimal solutions with identity information and high-frequency details lost during the editing process, or irrelevant image regions altered due to entangled concepts. In this work, we propose S$^2$Edit, a novel method based on a pre-trained text-to-image diffusion model that enables personalized editing with precise semantic and spatial control. We first fine-tune our model to embed the identity information into a learnable text token. During fine-tuning, we disentangle the learned identity token from attributes to be edited by enforcing an orthogonality constraint in the textual feature space. To ensure that the identity token only affects regions of interest, we apply object masks to guide the cross-attention maps. At inference time, our method performs localized editing while faithfully preserving the original identity with semantically disentangled and spatially focused identity token learned. Extensive experiments demonstrate the superiority of S$^2$Edit over state-of-the-art methods both quantitatively and qualitatively. Additionally, we showcase several compositional image editing applications of S$^2$Edit such as makeup transfer.
| null |
https://arxiv.org/abs/2507.04584v1
|
https://arxiv.org/pdf/2507.04584v1.pdf
| null |
[
"Xudong Liu",
"Zikun Chen",
"Ruowei Jiang",
"Ziyi Wu",
"Kejia Yin",
"Han Zhao",
"Parham Aarabi",
"Igor Gilitschenski"
] |
[
"text-guided-image-editing"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/dual-level-viewpoint-learning-for-cross
| null | null | null |
Dual-Level Viewpoint-Learning for Cross-Domain Vehicle Re-Identification
|
The definition of vehicle viewpoint annotations is ambiguous due to human subjective judgment, which makes the cross-domain vehicle re-identification methods unable to learn the viewpoint invariance features during source domain pre-training. This will further lead to cross-view misalignment in downstream target domain tasks. To solve the above challenges, this paper presents a dual-level viewpoint-learning framework that contains an angle invariance pre-training method and a meta-orientation adaptation learning strategy. The dual-level viewpoint-annotation proposal is first designed to concretely redefine the vehicle viewpoint from two aspects (i.e., angle-level and orientation-level). An angle invariance pre-training method is then proposed to preserve identity similarity and difference across the cross-view; this consists of a part-level pyramidal network and an angle bias metric loss. Under the supervision of angle bias metric loss, the part-level pyramidal network, as the backbone, learns the subtle differences of vehicles from different angle-level viewpoints. Finally, a meta-orientation adaptation learning strategy is designed to extend the generalization ability of the re-identification model to the unseen orientation-level viewpoints. Simultaneously, the proposed meta-learning strategy enforces meta-orientation training and meta-orientation testing according to the orientation-level viewpoints in the target domain. Extensive experiments on public vehicle re-identification datasets demonstrate that the proposed method combines the redefined dual-level viewpoint-information and significantly outperforms other state-of-the-art methods in alleviating viewpoint variations.
| null |
https://www.mdpi.com/2079-9292/13/10/1823
|
https://www.mdpi.com/2079-9292/13/10/1823/pdf
|
Electronics 2024 5
|
[
"Zhou R",
"Wang Q",
"Cao L",
"Xu J",
"Zhu X",
"Xiong X",
"Zhang H",
"Zhong Y"
] |
[
"Meta-Learning",
"Unsupervised Domain Adaptation",
"Vehicle Re-Identification"
] | 2024-05-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/investigating-quantum-feature-maps-in-quantum
|
2506.03272
| null | null |
Investigating Quantum Feature Maps in Quantum Support Vector Machines for Lung Cancer Classification
|
In recent years, quantum machine learning has emerged as a promising intersection between quantum physics and artificial intelligence, particularly in domains requiring advanced pattern recognition such as healthcare. This study investigates the effectiveness of Quantum Support Vector Machines (QSVM), which leverage quantum mechanical phenomena like superposition and entanglement to construct high-dimensional Hilbert spaces for data classification. Focusing on lung cancer diagnosis, a concrete and critical healthcare application, we analyze how different quantum feature maps influence classification performance. Using a real-world dataset of 309 patient records with significant class imbalance (39 non-cancer vs. 270 cancer cases), we constructed six balanced subsets for robust evaluation. QSVM models were implemented using Qiskit and executed on the qasm simulator, employing three distinct quantum feature maps: ZFeatureMap, ZZFeatureMap, and PauliFeatureMap. Performance was assessed using accuracy, precision, recall, specificity, and F1-score. Results show that the PauliFeatureMap consistently outperformed the others, achieving perfect classification in three subsets and strong performance overall. These findings demonstrate how quantum computational principles can be harnessed to enhance diagnostic capabilities, reinforcing the importance of physics-based modeling in emerging AI applications within healthcare.
| null |
https://arxiv.org/abs/2506.03272v1
|
https://arxiv.org/pdf/2506.03272v1.pdf
| null |
[
"My Youssef El Hafidi",
"Achraf Toufah",
"Mohamed Achraf Kadim"
] |
[
"Cancer Classification",
"Diagnostic",
"Lung Cancer Diagnosis",
"Quantum Machine Learning",
"Specificity"
] | 2025-06-03T00:00:00 | null | null | null | null |
[] |
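The QSVM setup described in the record above can be sketched in a few lines: a quantum feature map from Qiskit's circuit library defines a kernel, which is plugged into a classical SVM. Class names below follow qiskit-machine-learning (roughly version 0.7); treat the exact API details, data, and hyperparameters as assumptions.

```python
# Minimal QSVM sketch: quantum feature map -> fidelity kernel -> precomputed SVC.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap  # also: ZFeatureMap, PauliFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from sklearn.svm import SVC

X_train = np.random.rand(20, 2)             # toy stand-in for patient features
y_train = np.random.randint(0, 2, size=20)  # toy binary labels

feature_map = ZZFeatureMap(feature_dimension=2, reps=2, entanglement="linear")
kernel = FidelityQuantumKernel(feature_map=feature_map)

svc = SVC(kernel="precomputed")
svc.fit(kernel.evaluate(x_vec=X_train), y_train)
# For test data: svc.predict(kernel.evaluate(x_vec=X_test, y_vec=X_train))
```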
https://paperswithcode.com/paper/sdafe-a-dual-filter-stable-diffusion-data
| null | null | null |
SDAFE: A Dual-filter Stable Diffusion Data Augmentation Method for Facial Expression Recognition
|
Facial expressions are a powerful medium for conveying emotions. In facial expression recognition (FER) field, the difficulty of collecting specific expressions often leads to class imbalance in mainstream datasets, significantly reducing the classification accuracy of deep neural networks. To address these issues, we propose a stable-diffusion-based augmentation method for facial expression (SDAFE) that resolves class imbalance problems and enhances data generation quality through cross-modal label guidance. By leveraging the neutrality of neutral faces, we generate additional expressions to balance the dataset classes. We introduce a peak signal-to-noise ratio (PSNR) filter to ensure the high quality of the generated images and a cosine similarity cross-modal filter based on CLIP encoders to ensure that the content of the generated images accurately aligns with their labels. Furthermore, we introduce a novel model, FERNeXt, which demonstrates outstanding performance in FER tasks, surpassing the state-of-the-art accuracy on the FER2013 dataset and achieving strong results on the RAF-DB and NHFI datasets. Subsequently, the performance of several models across different datasets significantly improves through the use of SDAFE in our experiments.
| null |
https://ieeexplore.ieee.org/abstract/document/10888031
|
https://ieeexplore.ieee.org/abstract/document/10888031
|
ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2025 4
|
[
"Minghao Zhao",
"Yifei Chen",
"Jiahao Lyu",
"Shuangli Du",
"Zhiyong Lv",
"Lin Wang"
] |
[
"Data Augmentation",
"Facial Expression Recognition",
"Facial Expression Recognition (FER)"
] | 2025-04-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/OpenAI/CLIP",
"description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)",
"full_name": "Contrastive Language-Image Pre-training",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Representations",
"parent": null
},
"name": "CLIP",
"source_title": "Learning Transferable Visual Models From Natural Language Supervision",
"source_url": "https://arxiv.org/abs/2103.00020v1"
}
] |
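The SDAFE record above describes a dual filter: a PSNR check against the neutral source image for generation quality, and a CLIP image-text cosine-similarity check so the generated expression matches its label. A hedged sketch follows; the thresholds and checkpoint are illustrative assumptions, not the paper's settings.

```python
# Sketch of SDAFE-style dual filtering: keep a generated face only if PSNR and
# CLIP image-label similarity both clear (assumed) thresholds.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(image: Image.Image, label: str) -> float:
    inputs = processor(text=[f"a photo of a {label} face"], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    return torch.cosine_similarity(img, txt).item()

def keep(generated: Image.Image, neutral: Image.Image, label: str,
         psnr_min: float = 18.0, sim_min: float = 0.22) -> bool:
    quality_ok = psnr(np.asarray(generated), np.asarray(neutral)) >= psnr_min
    label_ok = clip_similarity(generated, label) >= sim_min
    return quality_ok and label_ok
```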
https://paperswithcode.com/paper/predictive-maintenance-optimization-for-smart
|
2507.02934
| null | null |
Predictive Maintenance Optimization for Smart Vending Machines Using IoT and Machine Learning
|
The increasing proliferation of vending machines in public and commercial environments has placed a growing emphasis on operational efficiency and customer satisfaction. Traditional maintenance approaches, whether reactive or time-based preventive, are limited in their ability to preempt machine failures, leading to unplanned downtimes and elevated service costs. This research presents a novel predictive maintenance framework tailored for vending machines by leveraging Internet of Things (IoT) sensors and machine learning (ML) algorithms. The proposed system continuously monitors machine components and operating conditions in real time and applies predictive models to forecast failures before they occur. This enables timely maintenance scheduling, minimizing downtime and extending machine lifespan. The framework was validated through simulated fault data and performance evaluation using classification algorithms. Results show a significant improvement in early fault detection and a reduction in redundant service interventions. The findings indicate that predictive maintenance systems, when integrated into vending infrastructure, can transform operational efficiency and service reliability.
| null |
https://arxiv.org/abs/2507.02934v1
|
https://arxiv.org/pdf/2507.02934v1.pdf
| null |
[
"Md. Nisharul Hasan"
] |
[
"Fault Detection",
"Scheduling"
] | 2025-06-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/exploring-gender-bias-beyond-occupational
|
2507.02679
| null | null |
Exploring Gender Bias Beyond Occupational Titles
|
In this work, we investigate the correlation between gender and contextual biases, focusing on elements such as action verbs, object nouns, and particularly on occupations. We introduce a novel dataset, GenderLexicon, and a framework that can estimate contextual bias and its related gender bias. Our model can interpret the bias with a score and thus improve the explainability of gender bias. Also, our findings confirm the existence of gender biases beyond occupational stereotypes. To validate our approach and demonstrate its effectiveness, we conduct evaluations on five diverse datasets, including a Japanese dataset.
|
In this work, we investigate the correlation between gender and contextual biases, focusing on elements such as action verbs, object nouns, and particularly on occupations.
|
https://arxiv.org/abs/2507.02679v1
|
https://arxiv.org/pdf/2507.02679v1.pdf
| null |
[
"Ahmed Sabir",
"Rajesh Sharama"
] |
[] | 2025-07-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/contrastive-and-transfer-learning-for
|
2507.06070
| null | null |
Contrastive and Transfer Learning for Effective Audio Fingerprinting through a Real-World Evaluation Protocol
|
Recent advances in song identification leverage deep neural networks to learn compact audio fingerprints directly from raw waveforms. While these methods perform well under controlled conditions, their accuracy drops significantly in real-world scenarios where the audio is captured via mobile devices in noisy environments. In this paper, we introduce a novel evaluation protocol designed to better reflect such real-world conditions. We generate three recordings of the same audio, each with increasing levels of noise, captured using a mobile device's microphone. Our results reveal a substantial performance drop for two state-of-the-art CNN-based models under this protocol, compared to previously reported benchmarks. Additionally, we highlight the critical role of the augmentation pipeline during training with contrastive loss. By introducing low-pass and high-pass filters in the augmentation pipeline, we significantly increase the performance of both systems in our proposed evaluation. Furthermore, we develop a transformer-based model with a tailored projection module and demonstrate that transferring knowledge from a semantically relevant domain yields a more robust solution. The transformer architecture outperforms CNN-based models across all noise levels and query durations. In low-noise conditions it achieves 47.99% for 1-sec queries and 97% for 10-sec queries in finding the correct song, surpassing the second-best performing model by 14% and 18.5%, respectively. Under heavy noise levels, we achieve a detection rate of 56.5% for a 15-second query duration. All experiments are conducted on a public large-scale dataset of over 100K songs, with queries matched against a database of 56 million vectors.
| null |
https://arxiv.org/abs/2507.06070v1
|
https://arxiv.org/pdf/2507.06070v1.pdf
| null |
[
"Christos Nikou",
"Theodoros Giannakopoulos"
] |
[
"Transfer Learning"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
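The augmentation change highlighted in the record above, adding low-pass and high-pass filtering to the contrastive pipeline, is easy to sketch with SciPy. Cutoff ranges below are assumptions for illustration, not the paper's exact settings.

```python
# Random band filtering as a contrastive augmentation: low-pass simulates
# muffled, distant capture; high-pass simulates small phone loudspeakers.
import numpy as np
from scipy.signal import butter, sosfilt

def random_band_filter(wave: np.ndarray, sr: int, rng: np.random.Generator) -> np.ndarray:
    if rng.random() < 0.5:
        cutoff = rng.uniform(2000.0, 6000.0)                 # low-pass cutoff (Hz)
        sos = butter(4, cutoff, btype="lowpass", fs=sr, output="sos")
    else:
        cutoff = rng.uniform(100.0, 500.0)                   # high-pass cutoff (Hz)
        sos = butter(4, cutoff, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, wave)

rng = np.random.default_rng(0)
clip = rng.standard_normal(16000)                    # 1 s of audio at 16 kHz
view_a = random_band_filter(clip, sr=16000, rng=rng)  # two "views" of the same
view_b = random_band_filter(clip, sr=16000, rng=rng)  # clip for contrastive loss
```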
https://paperswithcode.com/paper/core-benchmarking-llms-code-reasoning
|
2507.05269
| null | null |
CORE: Benchmarking LLMs Code Reasoning Capabilities through Static Analysis Tasks
|
Large language models (LLMs) have been widely adopted across diverse software engineering domains, such as code generation, program repair, and vulnerability detection. These applications require understanding beyond surface-level code patterns: value propagation, control flow, and interdependence between program elements. However, existing benchmarks primarily evaluate end-to-end outcomes, such as whether code is correctly repaired or generated, leaving the models' ability for program semantic reasoning underexplored. This work presents CoRe, a high-quality, human-verified benchmark designed to evaluate LLMs on fundamental static analysis tasks. CoRe includes 12,553 task instances spanning data dependency, control dependency, and information flow across programs written in C/C++, Java, and Python. To ensure semantic diversity and reasoning complexity, we propose a semantics-aware diverse sampling strategy that selects targets and task instances based on structural coverage and dependency depth. We evaluate 10 mainstream LLMs and show that, while they perform well at identifying dependencies, models still struggle with tasks that require deeper semantic understanding and multi-step reasoning. We further conduct qualitative analyses to uncover key challenges, such as complex control structures and backward dependency patterns, offering insights into improving LLMs' code reasoning capabilities.
| null |
https://arxiv.org/abs/2507.05269v1
|
https://arxiv.org/pdf/2507.05269v1.pdf
| null |
[
"Danning Xie",
"Mingwei Zheng",
"Xuwei Liu",
"Jiannan Wang",
"Chengpeng Wang",
"Lin Tan",
"Xiangyu Zhang"
] |
[
"Benchmarking",
"Code Generation",
"Program Repair",
"Vulnerability Detection"
] | 2025-07-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/educoder-an-open-source-annotation-system-for
|
2507.05385
| null | null |
EduCoder: An Open-Source Annotation System for Education Transcript Data
|
We introduce EduCoder, a domain-specialized tool designed to support utterance-level annotation of educational dialogue. While general-purpose text annotation tools for NLP and qualitative research abound, few address the complexities of coding education dialogue transcripts -- with diverse teacher-student and peer interactions. Common challenges include defining codebooks for complex pedagogical features, supporting both open-ended and categorical coding, and contextualizing utterances with external features, such as the lesson's purpose and the pedagogical value of the instruction. EduCoder is designed to address these challenges by providing a platform for researchers and domain experts to collaboratively define complex codebooks based on observed data. It incorporates both categorical and open-ended annotation types along with contextual materials. Additionally, it offers a side-by-side comparison of multiple annotators' responses, allowing comparison and calibration of annotations with others to improve data reliability. The system is open-source, with a demo video available.
|
We introduce EduCoder, a domain-specialized tool designed to support utterance-level annotation of educational dialogue.
|
https://arxiv.org/abs/2507.05385v1
|
https://arxiv.org/pdf/2507.05385v1.pdf
| null |
[
"Guanzhong Pan",
"Mei Tan",
"HyunJi Nam",
"Lucía Langlois",
"James Malamut",
"Liliana Deonizio",
"Dorottya Demszky"
] |
[
"text annotation"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/axlearn-modular-large-model-training-on
|
2507.05411
| null | null |
AXLearn: Modular Large Model Training on Heterogeneous Infrastructure
|
We design and implement AXLearn, a production deep learning system that facilitates scalable and high-performance training of large deep learning models. Compared to other state-of-the-art deep learning systems, AXLearn has a unique focus on modularity and support for heterogeneous hardware infrastructure. AXLearn's internal interfaces between software components follow strict encapsulation, allowing different components to be assembled to facilitate rapid model development and experimentation on heterogeneous compute infrastructure. We introduce a novel method of quantifying modularity via Lines-of-Code (LoC)-complexity, which demonstrates how our system maintains constant complexity as we scale the components in the system, compared to linear or quadratic complexity in other systems. This allows integrating features such as Rotary Position Embeddings (RoPE) into AXLearn across hundreds of modules with just 10 lines of code, compared to hundreds as required in other systems. At the same time, AXLearn maintains equivalent performance compared to state-of-the-art training systems. Finally, we share our experience in the development and operation of AXLearn.
| null |
https://arxiv.org/abs/2507.05411v1
|
https://arxiv.org/pdf/2507.05411v1.pdf
| null |
[
"Mark Lee",
"Tom Gunter",
"Chang Lan",
"John Peebles",
"Hanzhi Zhou",
"Kelvin Zou",
"Sneha Bangalore",
"Chung-Cheng Chiu",
"Nan Du",
"Xianzhi Du",
"Philipp Dufter",
"Ruixuan Hou",
"Haoshuo Huang",
"Dongseong Hwang",
"Xiang Kong",
"Jinhao Lei",
"Tao Lei",
"Meng Li",
"Li Li",
"Jiarui Lu",
"Zhiyun Lu",
"Yiping Ma",
"David Qiu",
"Vivek Rathod",
"Senyu Tong",
"Zhucheng Tu",
"Jianyu Wang",
"Yongqiang Wang",
"ZiRui Wang",
"Floris Weers",
"Sam Wiseman",
"Guoli Yin",
"BoWen Zhang",
"Xiyou Zhou",
"Danyang Zhuo",
"Cheng Leong",
"Ruoming Pang"
] |
[
"Deep Learning"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/unpatchable-vulnerabilities-in-windows-10-11
| null | null | null |
Unpatchable Vulnerabilities in Windows 10/11: Security Report 2025
|
This comprehensive security report investigates unpatchable vulnerabilities in Windows 10 and 11, focusing on systemic flaws that resist traditional patching due to their deep integration into the operating system's architecture, hardware dependencies, and legacy compatibility requirements. These vulnerabilities, rooted in fundamental design choices and ecosystem constraints, pose significant challenges to securing millions of Windows devices worldwide. The report examines three critical vulnerabilities: legacy BIOS/UEFI firmware weaknesses, kernel memory management flaws, and backward compatibility with legacy protocols. It provides a detailed technical analysis, exploitation vectors, detection challenges, and comprehensive mitigation strategies. With Windows 10 approaching its end-of-support deadline in October 2025, these flaws pose heightened risks, necessitating proactive defenses. This report adheres to responsible disclosure principles and aims to support Microsoft's efforts to strengthen Windows security in 2025.
| null |
https://zenodo.org/records/15850090
|
https://zenodo.org/records/15850090/files/Start.pdf?download=1
|
Independent publication 2025 7
|
[
"Vi Nhat Son"
] |
[
"Management"
] | 2025-07-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Activation patching studies the model's computation by altering its latent representations, the token embeddings in transformer-based language models, during the inference process",
"full_name": "Activation Patching",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Inference Extrapolation",
"parent": null
},
"name": "Patching",
"source_title": "Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models",
"source_url": "https://arxiv.org/abs/2401.06102v4"
}
] |
https://paperswithcode.com/paper/automated-neuron-labelling-enables-generative
|
2507.06458
| null | null |
Automated Neuron Labelling Enables Generative Steering and Interpretability in Protein Language Models
|
Protein language models (PLMs) encode rich biological information, yet their internal neuron representations are poorly understood. We introduce the first automated framework for labeling every neuron in a PLM with biologically grounded natural language descriptions. Unlike prior approaches relying on sparse autoencoders or manual annotation, our method scales to hundreds of thousands of neurons, revealing individual neurons are selectively sensitive to diverse biochemical and structural properties. We then develop a novel neuron activation-guided steering method to generate proteins with desired traits, enabling convergence to target biochemical properties like molecular weight and instability index as well as secondary and tertiary structural motifs, including alpha helices and canonical Zinc Fingers. We finally show that analysis of labeled neurons in different model sizes reveals PLM scaling laws and a structured neuron space distribution.
|
Protein language models (PLMs) encode rich biological information, yet their internal neuron representations are poorly understood.
|
https://arxiv.org/abs/2507.06458v1
|
https://arxiv.org/pdf/2507.06458v1.pdf
| null |
[
"Arjun Banerjee",
"David Martinez",
"Camille Dang",
"Ethan Tam"
] |
[] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ms-dpps-multi-source-determinantal-point
|
2507.06654
| null | null |
MS-DPPs: Multi-Source Determinantal Point Processes for Contextual Diversity Refinement of Composite Attributes in Text to Image Retrieval
|
Result diversification (RD) is a crucial technique in Text-to-Image Retrieval for enhancing the efficiency of a practical application. Conventional methods focus solely on increasing the diversity metric of image appearances. However, the diversity metric and its desired value vary depending on the application, which limits the applications of RD. This paper proposes a novel task called CDR-CA (Contextual Diversity Refinement of Composite Attributes). CDR-CA aims to refine the diversities of multiple attributes, according to the application's context. To address this task, we propose Multi-Source DPPs, a simple yet strong baseline that extends the Determinantal Point Process (DPP) to multi-sources. We model MS-DPP as a single DPP model with a unified similarity matrix based on a manifold representation. We also introduce Tangent Normalization to reflect contexts. Extensive experiments demonstrate the effectiveness of the proposed method. Our code is publicly available at https://github.com/NEC-N-SOGI/msdpp.
|
To address this task, we propose Multi-Source DPPs, a simple yet strong baseline that extends the Determinantal Point Process (DPP) to multi-sources.
|
https://arxiv.org/abs/2507.06654v1
|
https://arxiv.org/pdf/2507.06654v1.pdf
| null |
[
"Naoya Sogi",
"Takashi Shibata",
"Makoto Terao",
"Masanori Suganuma",
"Takayuki Okatani"
] |
[
"Diversity",
"Image Retrieval",
"Point Processes"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
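The MS-DPP record above builds on standard DPP machinery: given a positive semi-definite similarity kernel over retrieved candidates, a diverse subset is selected by maximizing a determinant. The sketch below shows the generic greedy MAP heuristic for DPPs on a toy kernel; it illustrates the mechanism only, not the paper's multi-source construction or tangent normalization.

```python
# Greedy DPP MAP selection: repeatedly add the item that maximizes the
# log-determinant of the selected principal minor of the kernel L.
import numpy as np

def greedy_dpp(L: np.ndarray, k: int) -> list[int]:
    n = L.shape[0]
    selected: list[int] = []
    for _ in range(k):
        best, best_score = -1, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            # maximizing logdet of the enlarged minor equals maximizing the gain,
            # since logdet of the current selection is a constant this round
            if sign > 0 and logdet > best_score:
                best, best_score = i, logdet
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
feats = rng.standard_normal((8, 4))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
L = feats @ feats.T + 1e-6 * np.eye(8)   # PSD kernel from unit embeddings
print(greedy_dpp(L, k=3))                # indices of a diverse 3-item subset
```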
https://paperswithcode.com/paper/mofe-time-mixture-of-frequency-domain-experts
|
2507.06502
| null | null |
MoFE-Time: Mixture of Frequency Domain Experts for Time-Series Forecasting Models
|
As a prominent data modality task, time series forecasting plays a pivotal role in diverse applications. With the remarkable advancements in Large Language Models (LLMs), the adoption of LLMs as the foundational architecture for time series modeling has gained significant attention. Although existing models achieve some success, they rarely model both time and frequency characteristics in a pretraining-finetuning paradigm, leading to suboptimal performance in predicting complex time series, which requires both modeling periodicity and prior pattern knowledge of signals. We propose MoFE-Time, an innovative time series forecasting model that integrates time and frequency domain features within a Mixture of Experts (MoE) network. Moreover, we use the pretraining-finetuning paradigm as our training framework to effectively transfer prior pattern knowledge across pretraining and finetuning datasets with different periodicity distributions. Our method introduces both frequency and time cells as experts after attention modules and leverages the MoE routing mechanism to construct multidimensional sparse representations of input signals. In experiments on six public benchmarks, MoFE-Time has achieved new state-of-the-art performance, reducing MSE and MAE by 6.95% and 6.02% compared to the representative method Time-MoE. Beyond the existing evaluation benchmarks, we have developed a proprietary dataset, NEV-sales, derived from real-world business scenarios. Our method achieves outstanding results on this dataset, underscoring the effectiveness of the MoFE-Time model in practical commercial applications.
|
Although existing models achieve some success, they rarely model both time and frequency characteristics in a pretraining-finetuning paradigm, leading to suboptimal performance in predicting complex time series, which requires both modeling periodicity and prior pattern knowledge of signals.
|
https://arxiv.org/abs/2507.06502v1
|
https://arxiv.org/pdf/2507.06502v1.pdf
| null |
[
"YiWen Liu",
"Chenyu Zhang",
"Junjie Song",
"Siqi Chen",
"Sun Yin",
"Zihan Wang",
"Lingming Zeng",
"Yuji Cao",
"Junming Jiao"
] |
[
"Mixture-of-Experts",
"Time Series",
"Time Series Forecasting"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Mixture of Experts",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Ensembling",
"parent": null
},
"name": "MoE",
"source_title": "Equipping Computational Pathology Systems with Artifact Processing Pipelines: A Showcase for Computation and Performance Trade-offs",
"source_url": "https://arxiv.org/abs/2403.07743v3"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Masked autoencoder",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.",
"name": "Self-Supervised Learning",
"parent": null
},
"name": "MAE",
"source_title": "Masked Autoencoders Are Scalable Vision Learners",
"source_url": "https://arxiv.org/abs/2111.06377v2"
}
] |
https://paperswithcode.com/paper/speak2sign3d-a-multi-modal-pipeline-for
|
2507.06530
| null | null |
Speak2Sign3D: A Multi-modal Pipeline for English Speech to American Sign Language Animation
|
Helping deaf and hard-of-hearing people communicate more easily is the main goal of Automatic Sign Language Translation. Although most past research has focused on turning sign language into text, doing the reverse, turning spoken English into sign language animations, has been largely overlooked. That's because it involves multiple steps, such as understanding speech, translating it into sign-friendly grammar, and generating natural human motion. In this work, we introduce a complete pipeline that converts English speech into smooth, realistic 3D sign language animations. Our system starts with Whisper to translate spoken English into text. Then, we use a MarianMT machine translation model to translate that text into American Sign Language (ASL) gloss, a simplified version of sign language that captures meaning without grammar. This model performs well, reaching BLEU scores of 0.7714 and 0.8923. To make the gloss translation more accurate, we also use word embeddings such as Word2Vec and FastText to understand word meanings. Finally, we animate the translated gloss using a 3D keypoint-based motion system trained on Sign3D-WLASL, a dataset we created by extracting body, hand, and face key points from real ASL videos in the WLASL dataset. To support the gloss translation stage, we also built a new dataset called BookGlossCorpus-CG, which turns everyday English sentences from the BookCorpus dataset into ASL gloss using grammar rules. Our system stitches everything together by smoothly interpolating between signs to create natural, continuous animations. Unlike previous works like How2Sign and Phoenix-2014T that focus on recognition or use only one type of data, our pipeline brings together audio, text, and motion in a single framework that goes all the way from spoken English to lifelike 3D sign language animation.
| null |
https://arxiv.org/abs/2507.06530v1
|
https://arxiv.org/pdf/2507.06530v1.pdf
| null |
[
"Kazi Mahathir Rahman",
"Naveed Imtiaz Nafis",
"Md. Farhan Sadik",
"Mohammad Al Rafi",
"Mehedi Hasan Shahed"
] |
[
"Machine Translation",
"Sign Language Translation",
"Translation",
"Word Embeddings"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**fastText** embeddings exploit subword information to construct word embeddings. Representations are learnt of character $n$-grams, and words represented as the sum of the $n$-gram vectors. This extends the word2vec type models with subword information. This helps the embeddings understand suffixes and prefixes. Once a word is represented using character $n$-grams, a skipgram model is trained to learn the embeddings.",
"full_name": "fastText",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Word Embeddings",
"parent": null
},
"name": "fastText",
"source_title": "Enriching Word Vectors with Subword Information",
"source_url": "http://arxiv.org/abs/1607.04606v2"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
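The first two stages of the Speak2Sign3D pipeline above (Whisper for speech recognition, MarianMT for text-to-gloss translation) map onto standard Hugging Face calls. A hedged sketch follows: the gloss checkpoint name is hypothetical, since the paper fine-tunes its own model, and the audio file is assumed to exist.

```python
# Speech -> English text -> ASL gloss, sketched with Hugging Face transformers.
from transformers import MarianMTModel, MarianTokenizer, pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
text = asr("utterance.wav")["text"]              # 1) English speech -> text

gloss_ckpt = "your-org/marian-en-to-asl-gloss"   # hypothetical fine-tuned model
tok = MarianTokenizer.from_pretrained(gloss_ckpt)
mt = MarianMTModel.from_pretrained(gloss_ckpt)

batch = tok([text], return_tensors="pt", padding=True)
gloss_ids = mt.generate(**batch)                 # 2) English text -> ASL gloss
gloss = tok.batch_decode(gloss_ids, skip_special_tokens=True)[0]
print(gloss)                                     # e.g. "STORE I GO" style gloss
```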
https://paperswithcode.com/paper/prefixagent-an-llm-powered-design-framework
|
2507.06127
| null | null |
PrefixAgent: An LLM-Powered Design Framework for Efficient Prefix Adder Optimization
|
Prefix adders are fundamental arithmetic circuits, but their design space grows exponentially with bit-width, posing significant optimization challenges. Previous works face limitations in performance, generalization, and scalability. To address these challenges, we propose PrefixAgent, a large language model (LLM)-powered framework that enables efficient prefix adder optimization. Specifically, PrefixAgent reformulates the problem into subtasks including backbone synthesis and structure refinement, which effectively reduces the search space. More importantly, this new design perspective enables us to efficiently collect enormous high-quality data and reasoning traces with E-graph, which further results in an effective fine-tuning of LLM. Experimental results show that PrefixAgent synthesizes prefix adders with consistently smaller areas compared to baseline methods, while maintaining scalability and generalization in commercial EDA flows.
| null |
https://arxiv.org/abs/2507.06127v1
|
https://arxiv.org/pdf/2507.06127v1.pdf
| null |
[
"Dongsheng Zuo",
"Jiadong Zhu",
"Yang Luo",
"Yuzhe ma"
] |
[
"Language Modeling",
"Language Modelling",
"Large Language Model"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/evolutionary-and-coevolutionary-multi-agent
|
2507.05534
| null | null |
Evolutionary and Coevolutionary Multi-Agent Design Choices and Dynamics
|
We investigate two representation alternatives for the controllers of teams of cyber agents. We combine these controller representations with different evolutionary algorithms, one of which introduces a novel LLM-supported mutation operator. Using a cyber security scenario, we evaluate agent learning when one side is trained to compete against a side that does not evolve and when two sides coevolve with each other. This allows us to quantify the relative merits and tradeoffs of representation and algorithm combinations in terms of team performance. Our versions of grammatical evolution algorithms using grammars that allow a controller to be expressed in code-like logic can achieve the best team performance. The scenario also allows us to compare the performance impact and dynamics of coevolution versus evolution under different combinations. Across the algorithms and representations, we observe that coevolution reduces the performance highs and lows of both sides while it induces fluctuations on both sides. In contrast, when only one side is optimized, performance peaks are higher and more sustained than when both sides are optimized with coevolution.
| null |
https://arxiv.org/abs/2507.05534v1
|
https://arxiv.org/pdf/2507.05534v1.pdf
| null |
[
"Erik Hemberg",
"Eric Liu",
"Lucille Fuller",
"Stephen Moskal",
"Una-May O'Reilly"
] |
[
"Evolutionary Algorithms"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generalized-adaptive-transfer-network
|
2507.03026
| null | null |
Generalized Adaptive Transfer Network: Enhancing Transfer Learning in Reinforcement Learning Across Domains
|
Transfer learning in Reinforcement Learning (RL) enables agents to leverage knowledge from source tasks to accelerate learning in target tasks. While prior work, such as the Attend, Adapt, and Transfer (A2T) framework, addresses negative transfer and selective transfer, other critical challenges remain underexplored. This paper introduces the Generalized Adaptive Transfer Network (GATN), a deep RL architecture designed to tackle task generalization across domains, robustness to environmental changes, and computational efficiency in transfer. GATN employs a domain-agnostic representation module, a robustness-aware policy adapter, and an efficient transfer scheduler to achieve these goals. We evaluate GATN on diverse benchmarks, including Atari 2600, MuJoCo, and a custom chatbot dialogue environment, demonstrating superior performance in cross-domain generalization, resilience to dynamic environments, and reduced computational overhead compared to baselines. Our findings suggest GATN is a versatile framework for real-world RL applications, such as adaptive chatbots and robotic control.
|
Transfer learning in Reinforcement Learning (RL) enables agents to leverage knowledge from source tasks to accelerate learning in target tasks.
|
https://arxiv.org/abs/2507.03026v1
|
https://arxiv.org/pdf/2507.03026v1.pdf
| null |
[
"Abhishek Verma",
"Nallarasan V",
"Balaraman Ravindran"
] |
[
"Atari Games",
"Chatbot",
"Computational Efficiency",
"Deep Learning",
"Deep Reinforcement Learning",
"Domain Generalization",
"MuJoCo",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Transfer Learning"
] | 2025-07-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gnn-vitcap-gnn-enhanced-multiple-instance
|
2507.07006
| null | null |
GNN-ViTCap: GNN-Enhanced Multiple Instance Learning with Vision Transformers for Whole Slide Image Classification and Captioning
|
Microscopic assessment of histopathology images is vital for accurate cancer diagnosis and treatment. Whole Slide Image (WSI) classification and captioning have become crucial tasks in computer-aided pathology. However, microscopic WSIs face challenges such as redundant patches and unknown patch positions due to subjective pathologist captures. Moreover, generating automatic pathology captions remains a significant challenge. To address these issues, we introduce a novel GNN-ViTCap framework for classification and caption generation from histopathological microscopic images. First, a visual feature extractor generates patch embeddings. Redundant patches are then removed by dynamically clustering these embeddings using deep embedded clustering and selecting representative patches via a scalar dot attention mechanism. We build a graph by connecting each node to its nearest neighbors in the similarity matrix and apply a graph neural network to capture both local and global context. The aggregated image embeddings are projected into the language model's input space through a linear layer and combined with caption tokens to fine-tune a large language model. We validate our method on the BreakHis and PatchGastric datasets. GNN-ViTCap achieves an F1 score of 0.934 and an AUC of 0.963 for classification, along with a BLEU-4 score of 0.811 and a METEOR score of 0.569 for captioning. Experimental results demonstrate that GNN-ViTCap outperforms state-of-the-art approaches, offering a reliable and efficient solution for microscopy-based patient diagnosis.
| null |
https://arxiv.org/abs/2507.07006v1
|
https://arxiv.org/pdf/2507.07006v1.pdf
| null |
[
"S M Taslim Uddin Raju",
"Md. Milon Islam",
"Md Rezwanul Haque",
"Hamdi Altaheri",
"Fakhri Karray"
] |
[
"Caption Generation",
"Clustering",
"Graph Neural Network",
"image-classification",
"Image Classification",
"Large Language Model",
"Multiple Instance Learning"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Graph Neural Network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Graph Neural Network",
"source_title": "Graph Neural Networks: A Review of Methods and Applications",
"source_url": "https://arxiv.org/abs/1812.08434v6"
},
{
"code_snippet_url": null,
"description": "A **Linear Layer** is a projection $\\mathbf{XW + b}$.",
"full_name": "Linear Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Linear Layer",
"source_title": null,
"source_url": null
}
] |
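The graph-building step in the GNN-ViTCap record above (connecting each patch to its nearest neighbours in a similarity matrix) can be sketched directly in PyTorch, producing an edge index that a GNN library such as PyTorch Geometric can consume. The value k=5 is an assumption, not the paper's setting.

```python
# Build a kNN edge index from patch embeddings via cosine similarity.
import torch

def knn_edge_index(emb: torch.Tensor, k: int = 5) -> torch.Tensor:
    emb = torch.nn.functional.normalize(emb, dim=1)
    sim = emb @ emb.T                            # cosine-similarity matrix
    sim.fill_diagonal_(-float("inf"))            # exclude self-loops
    nbrs = sim.topk(k, dim=1).indices            # (N, k) nearest neighbours
    src = torch.arange(emb.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])  # (2, N*k) edge index

patches = torch.randn(100, 384)                  # 100 patch embeddings
edges = knn_edge_index(patches)
print(edges.shape)                               # torch.Size([2, 500])
```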
https://paperswithcode.com/paper/eegfloss-a-python-package-for-refining-sleep
|
2507.06433
| null | null |
eegFloss: A Python package for refining sleep EEG recordings using machine learning models
|
Electroencephalography (EEG) allows monitoring of brain activity, providing insights into the functional dynamics of various brain regions and their roles in cognitive processes. EEG is a cornerstone in sleep research, serving as the primary modality of polysomnography, the gold standard in the field. However, EEG signals are prone to artifacts caused by both internal (device-specific) factors and external (environmental) interferences. As sleep studies are becoming larger, most rely on automatic sleep staging, a process highly susceptible to artifacts, leading to erroneous sleep scores. This paper addresses this challenge by introducing eegFloss, an open-source Python package to utilize eegUsability, a novel machine learning (ML) model designed to detect segments with artifacts in sleep EEG recordings. eegUsability has been trained and evaluated on manually artifact-labeled EEG data collected from 15 participants over 127 nights using the Zmax headband. It demonstrates solid overall classification performance (F1-score is approximately 0.85, Cohen's kappa is 0.78), achieving a high recall rate of approximately 94% in identifying channel-wise usable EEG data, and extends beyond Zmax. Additionally, eegFloss offers features such as automatic time-in-bed detection using another ML model named eegMobility, filtering out certain artifacts, and generating hypnograms and sleep statistics. By addressing a fundamental challenge faced by most sleep studies, eegFloss can enhance the precision and rigor of their analysis as well as the accuracy and reliability of their outcomes.
|
By addressing a fundamental challenge faced by most sleep studies, eegFloss can enhance the precision and rigor of their analysis as well as the accuracy and reliability of their outcomes.
|
https://arxiv.org/abs/2507.06433v1
|
https://arxiv.org/pdf/2507.06433v1.pdf
| null |
[
"Niloy Sikder",
"Paul Zerr",
"Mahdad Jafarzadeh Esfahani",
"Martin Dresler",
"Matthias Krauledat"
] |
[
"EEG",
"Sleep Staging"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dlava-document-language-and-vision-assistant
|
2412.00151
| null | null |
DLaVA: Document Language and Vision Assistant for Answer Localization with Enhanced Interpretability and Trustworthiness
|
Document Visual Question Answering (VQA) requires models to interpret textual information within complex visual layouts and comprehend spatial relationships to answer questions based on document images. Existing approaches often lack interpretability and fail to precisely localize answers within the document, hindering users' ability to verify responses and understand the reasoning process. Moreover, standard metrics like Average Normalized Levenshtein Similarity (ANLS) focus on text accuracy but overlook spatial correctness. We introduce DLaVA, a novel method that enhances Multimodal Large Language Models (MLLMs) with answer localization capabilities for Document VQA. Our approach integrates image annotation directly into the MLLM pipeline, improving interpretability by enabling users to trace the model's reasoning. We present both OCR-dependent and OCR-free architectures, with the OCR-free approach eliminating the need for separate text recognition components, thus reducing complexity. To the best of our knowledge, DLaVA is the first approach to introduce answer localization within multimodal QA, marking a significant step forward in enhancing user trust and reducing the risk of AI hallucinations. Our contributions include enhancing interpretability and reliability by grounding responses in spatially annotated visual content, introducing answer localization in MLLMs, proposing a streamlined pipeline that combines an MLLM with a text detection module, and conducting comprehensive evaluations using both textual and spatial accuracy metrics, including Intersection over Union (IoU). Experimental results on standard datasets demonstrate that DLaVA achieves SOTA performance, significantly enhancing model transparency and reliability. Our approach sets a new benchmark for Document VQA, highlighting the critical importance of precise answer localization and model interpretability.
|
We introduce DLaVA, a novel method that enhances Multimodal Large Language Models (MLLMs) with answer localization capabilities for Document VQA.
|
https://arxiv.org/abs/2412.00151v1
|
https://arxiv.org/pdf/2412.00151v1.pdf
| null |
[
"Ahmad Mohammadshirazi",
"Pinaki Prasad Guha Neogi",
"Ser-Nam Lim",
"Rajiv Ramnath"
] |
[
"Optical Character Recognition (OCR)",
"Question Answering",
"Text Detection",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2024-11-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/hybrid-view-attention-for-cspca
|
2507.03421
| null | null |
Hybrid-View Attention for csPCa Classification in TRUS
|
Prostate cancer (PCa) is a leading cause of cancer-related mortality in men, and accurate identification of clinically significant PCa (csPCa) is critical for timely intervention. Transrectal ultrasound (TRUS) is widely used for prostate biopsy; however, its low contrast and anisotropic spatial resolution pose diagnostic challenges. To address these limitations, we propose a novel hybrid-view attention (HVA) network for csPCa classification in 3D TRUS that leverages complementary information from transverse and sagittal views. Our approach integrates a CNN-transformer hybrid architecture, where convolutional layers extract fine-grained local features and transformer-based HVA models global dependencies. Specifically, the HVA comprises intra-view attention to refine features within a single view and cross-view attention to incorporate complementary information across views. Furthermore, a hybrid-view adaptive fusion module dynamically aggregates features along both channel and spatial dimensions, enhancing the overall representation. Experiments are conducted on an in-house dataset containing 590 subjects who underwent prostate biopsy. Comparative and ablation results prove the efficacy of our method. The code is available at https://github.com/mock1ngbrd/HVAN.
|
Prostate cancer (PCa) is a leading cause of cancer-related mortality in men, and accurate identification of clinically significant PCa (csPCa) is critical for timely intervention.
|
https://arxiv.org/abs/2507.03421v1
|
https://arxiv.org/pdf/2507.03421v1.pdf
| null |
[
"Zetian Feng",
"Juan Fu",
"Xuebin Zou",
"Hongsheng Ye",
"Hong Wu",
"Jianhua Zhou",
"Yi Wang"
] |
[
"Classification",
"Diagnostic"
] | 2025-07-04T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
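The cross-view attention at the heart of the HVA record above can be illustrated with standard multi-head attention: features from the transverse view query the sagittal view, and vice versa. This is a minimal sketch of the mechanism only; the paper's HVA block adds intra-view attention and an adaptive fusion module on top.

```python
# Cross-view attention between two ultrasound views, via nn.MultiheadAttention.
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)

transverse = torch.randn(2, 64, 256)   # (batch, tokens, dim) from view 1
sagittal = torch.randn(2, 64, 256)     # (batch, tokens, dim) from view 2

# Transverse queries gather complementary context from the sagittal view.
t_enriched, _ = attn(query=transverse, key=sagittal, value=sagittal)
# And symmetrically for the sagittal branch (sharing weights here for brevity).
s_enriched, _ = attn(query=sagittal, key=transverse, value=transverse)
print(t_enriched.shape, s_enriched.shape)
```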
https://paperswithcode.com/paper/design-and-implementation-of-an-ocr-powered
|
2507.07029
| null | null |
Design and Implementation of an OCR-Powered Pipeline for Table Extraction from Invoices
|
This paper presents the design and development of an OCR-powered pipeline for efficient table extraction from invoices. The system leverages Tesseract OCR for text recognition and custom post-processing logic to detect, align, and extract structured tabular data from scanned invoice documents. Our approach includes dynamic preprocessing, table boundary detection, and row-column mapping, optimized for noisy and non-standard invoice formats. The resulting pipeline significantly improves data extraction accuracy and consistency, supporting real-world use cases such as automated financial workflows and digital archiving.
| null |
https://arxiv.org/abs/2507.07029v1
|
https://arxiv.org/pdf/2507.07029v1.pdf
| null |
[
"Parshva Dhilankumar Patel"
] |
[
"Boundary Detection",
"Optical Character Recognition (OCR)",
"Table Extraction"
] | 2025-07-09T00:00:00 | null | null | null | null |
[] |
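The OCR stage of the invoice pipeline above maps onto pytesseract's word-level output: `image_to_data` returns word boxes that can be grouped into rows by their top coordinate. The fixed 15-pixel row bucket below is a simplification of the record's dynamic row-column mapping, assumed for illustration.

```python
# Tesseract word boxes -> rough table rows, grouped by vertical position.
from collections import defaultdict
from PIL import Image
import pytesseract

img = Image.open("invoice.png")
data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)

rows = defaultdict(list)
for word, top, left, conf in zip(data["text"], data["top"],
                                 data["left"], data["conf"]):
    if word.strip() and float(conf) > 0:       # drop empty and non-word boxes
        rows[top // 15].append((left, word))   # bucket words into ~15px rows

for _, items in sorted(rows.items()):          # left-to-right within each row
    print(" | ".join(w for _, w in sorted(items)))
```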
https://paperswithcode.com/paper/from-large-eddy-simulations-to-deep-learning
|
2507.06533
| null | null |
From large-eddy simulations to deep learning: A U-net model for fast urban canopy flow predictions
|
Accurate prediction of wind flow fields in urban canopies is crucial for ensuring pedestrian comfort, safety, and sustainable urban design. Traditional methods using wind tunnels and Computational Fluid Dynamics, such as Large-Eddy Simulations (LES), are limited by high costs, computational demands, and time requirements. This study presents a deep neural network (DNN) approach for fast and accurate predictions of urban wind flow fields, reducing computation time from an order of 10 hours on 32 CPUs for one LES evaluation to an order of 1 second on a single GPU using the DNN model. We employ a U-Net architecture trained on LES data including 252 synthetic urban configurations at seven wind directions ($0^{o}$ to $90^{o}$ in $15^{o}$ increments). The model predicts two key quantities of interest: mean velocity magnitude and streamwise turbulence intensity, at multiple heights within the urban canopy. The U-net uses 2D building representations augmented with signed distance functions and their gradients as inputs, forming a $256\times256\times9$ tensor. In addition, a Spatial Attention Module is used for feature transfer through skip connections. The loss function combines the root-mean-square error of predictions, their gradient magnitudes, and L2 regularization. Model evaluation on 50 test cases demonstrates high accuracy with an overall mean relative error of 9.3% for velocity magnitude and 5.2% for turbulence intensity. This research shows the potential of deep learning approaches to provide fast, accurate urban wind assessments essential for creating comfortable and safe urban environments. Code is available at https://github.com/tvarg/Urban-FlowUnet.git
| null |
https://arxiv.org/abs/2507.06533v1
|
https://arxiv.org/pdf/2507.06533v1.pdf
| null |
[
"Themistoklis Vargiemezis",
"Catherine Gorlé"
] |
[
"GPU",
"L2 Regularization"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
}
] |
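The input encoding described in the U-Net record above (building masks augmented with signed distance functions and their gradients) can be assembled with SciPy. The grid size and channel layout below are illustrative, not the paper's exact 256x256x9 recipe.

```python
# Assemble SDF-augmented input channels from a binary building mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 80:160] = True               # one rectangular "building"

# Positive distance outside buildings, negative inside -> signed distance.
sdf = distance_transform_edt(~mask) - distance_transform_edt(mask)
gy, gx = np.gradient(sdf)                  # SDF gradients as extra channels

inputs = np.stack([mask.astype(np.float32), sdf, gx, gy], axis=-1)
print(inputs.shape)                        # (256, 256, 4)
```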
https://paperswithcode.com/paper/advanced-financial-reasoning-at-scale-a
|
2507.02954
| null | null |
Advanced Financial Reasoning at Scale: A Comprehensive Evaluation of Large Language Models on CFA Level III
|
As financial institutions increasingly adopt Large Language Models (LLMs), rigorous domain-specific evaluation becomes critical for responsible deployment. This paper presents a comprehensive benchmark evaluating 23 state-of-the-art LLMs on the Chartered Financial Analyst (CFA) Level III exam - the gold standard for advanced financial reasoning. We assess both multiple-choice questions (MCQs) and essay-style responses using multiple prompting strategies including Chain-of-Thought and Self-Discover. Our evaluation reveals that leading models demonstrate strong capabilities, with composite scores such as 79.1% (o4-mini) and 77.3% (Gemini 2.5 Flash) on CFA Level III. These results, achieved under a revised, stricter essay grading methodology, indicate significant progress in LLM capabilities for high-stakes financial applications. Our findings provide crucial guidance for practitioners on model selection and highlight remaining challenges in cost-effective deployment and the need for nuanced interpretation of performance against professional benchmarks.
| null |
https://arxiv.org/abs/2507.02954v1
|
https://arxiv.org/pdf/2507.02954v1.pdf
| null |
[
"Pranam Shetty",
"Abhisek Upadhayaya",
"Parth Mitesh Shah",
"Srikanth Jagabathula",
"Shilpi Nayak",
"Anna Joo Fee"
] |
[
"Model Selection",
"Multiple-choice"
] | 2025-06-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "ADaptive gradient method with the OPTimal convergence rate",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "ADOPT",
"source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate",
"source_url": "https://arxiv.org/abs/2411.02853v3"
}
] |
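To make the Chain-of-Thought prompting strategy used in the CFA evaluation above concrete, here is a hedged sketch; the prompt wording and the answer-extraction regex are illustrative assumptions, not the paper's exact protocol.

```python
import re

def build_cot_prompt(question: str, choices: dict[str, str]) -> str:
    """Wrap a CFA-style multiple-choice question in a chain-of-thought prompt."""
    options = "\n".join(f"{k}. {v}" for k, v in choices.items())
    return (
        f"Question: {question}\n{options}\n"
        "Think step by step about the relevant curriculum concepts, "
        "then finish with a line of the form 'Answer: <letter>'."
    )

def extract_answer(completion: str) -> str | None:
    """Pull the final letter choice out of a model completion."""
    match = re.search(r"Answer:\s*([A-C])", completion)
    return match.group(1) if match else None
```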
https://paperswithcode.com/paper/deepretro-retrosynthetic-pathway-discovery
|
2507.07060
| null | null |
DeepRetro: Retrosynthetic Pathway Discovery using Iterative LLM Reasoning
|
Retrosynthesis, the identification of precursor molecules for a target compound, is pivotal for synthesizing complex molecules, but faces challenges in discovering novel pathways beyond predefined templates. Recent large language model (LLM) approaches to retrosynthesis have shown promise, but how to effectively harness LLM reasoning capabilities for multi-step planning remains an open question. To address this challenge, we introduce DeepRetro, an open-source, iterative, hybrid LLM-based retrosynthetic framework. Our approach integrates the strengths of conventional template-based/Monte Carlo tree search tools with the generative power of LLMs in a step-wise, feedback-driven loop. Initially, synthesis planning is attempted with a template-based engine. If this fails, the LLM subsequently proposes single-step retrosynthetic disconnections. Crucially, these suggestions undergo rigorous validity, stability, and hallucination checks before the resulting precursors are recursively fed back into the pipeline for further evaluation. This iterative refinement allows for dynamic pathway exploration and correction. We demonstrate the potential of this pipeline through benchmark evaluations and case studies, showcasing its ability to identify viable and potentially novel retrosynthetic routes. In particular, we develop an interactive graphical user interface that allows expert human chemists to provide human-in-the-loop feedback to the reasoning algorithm. This approach successfully generates novel pathways for complex natural product compounds, demonstrating the potential of iterative LLM reasoning to advance the state of the art in complex chemical syntheses.
| null |
https://arxiv.org/abs/2507.07060v1
|
https://arxiv.org/pdf/2507.07060v1.pdf
| null |
[
"Shreyas Vinaya Sathyanarayana",
"Rahil Shah",
"Sharanabasava D. Hiremath",
"Rishikesh Panda",
"Rahul Jana",
"Riya Singh",
"Rida Irfan",
"Ashwin Murali",
"Bharath Ramsundar"
] |
[
"Hallucination",
"Large Language Model",
"Retrosynthesis"
] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
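The DeepRetro abstract above describes a template-then-LLM loop gated by validity checks; the following is a schematic Python sketch of that control flow. Every helper here (`is_purchasable`, `template_engine`, `llm_propose_step`, `passes_checks`) is a hypothetical stub standing in for the components the paper names.

```python
def is_purchasable(mol: str) -> bool:
    # Stub stock check; a real system would query a building-block database.
    return len(mol) <= 4

def template_engine(mol: str) -> list[str]:
    # Stand-in for the conventional template-based / MCTS engine.
    return []

def llm_propose_step(mol: str) -> list[str]:
    # Stand-in for a single-step LLM retrosynthetic disconnection.
    return [mol[: len(mol) // 2], mol[len(mol) // 2:]]

def passes_checks(mol: str) -> bool:
    # Stand-in for the validity / stability / hallucination checks.
    return bool(mol)

def retrosynthesize(target: str, max_depth: int = 5) -> list[str]:
    """Recursively expand a target into purchasable precursors (schematic)."""
    if is_purchasable(target) or max_depth == 0:
        return [target]
    precursors = template_engine(target) or llm_propose_step(target)
    precursors = [p for p in precursors if passes_checks(p)]
    route: list[str] = []
    for p in precursors:
        route.extend(retrosynthesize(p, max_depth - 1))
    return route

print(retrosynthesize("CCOC(=O)C1=CC=CC=C1"))  # toy SMILES-like string
```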
https://paperswithcode.com/paper/veritas-verification-and-explanation-of
|
2507.05146
| null | null |
VERITAS: Verification and Explanation of Realness in Images for Transparency in AI Systems
|
The widespread and rapid adoption of AI-generated content, created by models such as Generative Adversarial Networks (GANs) and Diffusion Models, has revolutionized the digital media landscape by allowing efficient and creative content generation. However, these models also blur the distinction between real images and AI-generated synthetic images, raising concerns regarding content authenticity and integrity. While many existing solutions to detect fake images focus solely on classification and higher-resolution images, they often lack transparency in their decision-making, making it difficult for users to understand why an image is classified as fake. In this paper, we present VERITAS, a comprehensive framework that not only accurately detects whether a small (32x32) image is AI-generated but also explains why it was classified that way through artifact localization and semantic reasoning. VERITAS produces human-readable explanations that describe key artifacts in synthetic images. We show that this architecture offers clear explanations for its zero-shot synthetic image detection decisions. Code and relevant prompts can be found at https://github.com/V-i-g-n-e-s-h-N/VERITAS.
|
The widespread and rapid adoption of AI-generated content, created by models such as Generative Adversarial Networks (GANs) and Diffusion Models, has revolutionized the digital media landscape by allowing efficient and creative content generation.
|
https://arxiv.org/abs/2507.05146v1
|
https://arxiv.org/pdf/2507.05146v1.pdf
| null |
[
"Aadi Srivastava",
"Vignesh Natarajkumar",
"Utkarsh Bheemanaboyna",
"Devisree Akashapu",
"Nagraj Gaonkar",
"Archit Joshi"
] |
[
"Decision Making",
"Synthetic Image Detection"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/robust-one-step-speech-enhancement-via-1
|
2507.05688
| null | null |
Robust One-step Speech Enhancement via Consistency Distillation
|
Diffusion models have shown strong performance in speech enhancement, but their real-time applicability has been limited by multi-step iterative sampling. Consistency distillation has recently emerged as a promising alternative by distilling a one-step consistency model from a multi-step diffusion-based teacher model. However, distilled consistency models are inherently biased towards the sampling trajectory of the teacher model, making them less robust to noise and prone to inheriting inaccuracies from the teacher model. To address this limitation, we propose ROSE-CD: Robust One-step Speech Enhancement via Consistency Distillation, a novel approach for distilling a one-step consistency model. Specifically, we introduce a randomized learning trajectory to improve the model's robustness to noise. Furthermore, we jointly optimize the one-step model with two time-domain auxiliary losses, enabling it to recover from teacher-induced errors and surpass the teacher model in overall performance. This is the first pure one-step consistency distillation model for diffusion-based speech enhancement, achieving 54 times faster inference speed and superior performance compared to its 30-step teacher model. Experiments on the VoiceBank-DEMAND dataset demonstrate that the proposed model achieves state-of-the-art performance in terms of speech quality. Moreover, its generalization ability is validated on both an out-of-domain dataset and real-world noisy recordings.
|
To address this limitation, we propose ROSE-CD: Robust One-step Speech Enhancement via Consistency Distillation, a novel approach for distilling a one-step consistency model.
|
https://arxiv.org/abs/2507.05688v1
|
https://arxiv.org/pdf/2507.05688v1.pdf
| null |
[
"Liang Xu",
"Longfei Felix Yan",
"W. Bastiaan Kleijn"
] |
[
"Speech Enhancement"
] | 2025-07-08T00:00:00 |
https://arxiv.org/abs/2507.05688
|
https://arxiv.org/abs/2507.05688
|
robust-one-step-speech-enhancement-via
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "",
"full_name": "Consistency Models",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Diffusion Models",
"parent": null
},
"name": "Consistency Models",
"source_title": "Consistency Models",
"source_url": "https://arxiv.org/abs/2303.01469v2"
}
] |
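As a hedged illustration of the joint objective sketched in the ROSE-CD abstract above (a consistency term plus two time-domain auxiliary losses), here is a minimal PyTorch sketch. The specific auxiliary losses (SI-SDR and waveform L1) and all weights are assumptions, since the abstract only states that two time-domain losses are used.

```python
import torch

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant SDR between waveforms of shape (batch, samples)."""
    ref = ref - ref.mean(dim=-1, keepdim=True)
    est = est - est.mean(dim=-1, keepdim=True)
    scale = (est * ref).sum(-1, keepdim=True) / ((ref ** 2).sum(-1, keepdim=True) + eps)
    proj = scale * ref
    noise = est - proj
    return 10 * torch.log10((proj ** 2).sum(-1) / ((noise ** 2).sum(-1) + eps) + eps)

def distillation_loss(student_out, teacher_out, clean, w_sdr=0.5, w_l1=0.5):
    """Consistency term plus two hypothetical time-domain auxiliary losses."""
    consistency = torch.mean((student_out - teacher_out) ** 2)
    aux_sdr = -si_sdr(student_out, clean).mean()         # maximize SI-SDR
    aux_l1 = torch.mean(torch.abs(student_out - clean))  # waveform L1
    return consistency + w_sdr * aux_sdr + w_l1 * aux_l1
```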
https://paperswithcode.com/paper/constrained-ensemble-exploration-for
|
2405.16030
| null | null |
Constrained Ensemble Exploration for Unsupervised Skill Discovery
|
Unsupervised Reinforcement Learning (RL) provides a promising paradigm for learning useful behaviors via reward-free pre-training. Existing methods for unsupervised RL mainly conduct empowerment-driven skill discovery or entropy-based exploration. However, empowerment often leads to static skills, and pure exploration only maximizes the state coverage rather than learning useful behaviors. In this paper, we propose a novel unsupervised RL framework via an ensemble of skills, where each skill performs partition exploration based on the state prototypes. Thus, each skill can explore the clustered area locally, and the ensemble skills maximize the overall state coverage. We adopt state-distribution constraints for the skill occupancy and the desired cluster for learning distinguishable skills. Theoretical analysis is provided for the state entropy and the resulting skill distributions. Based on extensive experiments on several challenging tasks, we find our method learns well-explored ensemble skills and achieves superior performance in various downstream tasks compared to previous methods.
| null |
https://arxiv.org/abs/2405.16030v1
|
https://arxiv.org/pdf/2405.16030v1.pdf
| null |
[
"Chenjia Bai",
"Rushuai Yang",
"Qiaosheng Zhang",
"Kang Xu",
"Yi Chen",
"Ting Xiao",
"Xuelong Li"
] |
[
"Reinforcement Learning (RL)",
"Unsupervised Reinforcement Learning"
] | 2024-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sas-segment-any-3d-scene-with-integrated-2d
|
2503.08512
| null | null |
SAS: Segment Any 3D Scene with Integrated 2D Priors
|
The open vocabulary capability of 3D models is increasingly valued, as traditional methods with models trained on fixed categories fail to recognize unseen objects in complex dynamic 3D scenes. In this paper, we propose a simple yet effective approach, SAS, to integrate the open vocabulary capability of multiple 2D models and migrate it to the 3D domain. Specifically, we first propose Model Alignment via Text to map different 2D models into the same embedding space using text as a bridge. Then, we propose Annotation-Free Model Capability Construction to explicitly quantify each 2D model's capability of recognizing different categories using diffusion models. Following this, point cloud features from different 2D models are fused under the guidance of the constructed model capabilities. Finally, the integrated 2D open vocabulary capability is transferred to the 3D domain through feature distillation. SAS outperforms previous methods by a large margin across multiple datasets, including ScanNet v2, Matterport3D, and nuScenes, while its generalizability is further validated on downstream tasks, e.g., gaussian segmentation and instance segmentation.
|
In this paper, we propose a simple yet effective approach, SAS, to integrate the open vocabulary capability of multiple 2D models and migrate it to the 3D domain.
|
https://arxiv.org/abs/2503.08512v1
|
https://arxiv.org/pdf/2503.08512v1.pdf
| null |
[
"Zhuoyuan Li",
"Jiahao Lu",
"Jiacheng Deng",
"Hanzhi Chang",
"Lifan Wu",
"Yanzhe Liang",
"Tianzhu Zhang"
] |
[
"Instance Segmentation",
"Semantic Segmentation"
] | 2025-03-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/tabasco-a-fast-simplified-model-for-molecular-1
|
2507.00899
| null | null |
TABASCO: A Fast, Simplified Model for Molecular Generation with Improved Physical Quality
|
State-of-the-art models for 3D molecular generation are built on significant inductive biases, such as SE(3) and permutation equivariance to respect symmetry and graph message-passing networks to capture local chemistry, yet the generated molecules still struggle with physical plausibility. We introduce TABASCO, which relaxes these assumptions: the model has a standard non-equivariant transformer architecture, treats the atoms in a molecule as a sequence and reconstructs bonds deterministically after generation. The absence of equivariant layers and message passing allows us to significantly simplify the model architecture and scale data throughput. On the GEOM-Drugs benchmark TABASCO achieves state-of-the-art PoseBusters validity and delivers inference roughly 10x faster than the strongest baseline, while exhibiting emergent rotational equivariance despite symmetry not being hard-coded. Our work offers a blueprint for training minimalist, high-throughput generative models suited to specialised tasks such as structure- and pharmacophore-based drug design. We provide a link to our implementation at github.com/carlosinator/tabasco.
|
State-of-the-art models for 3D molecular generation are built on significant inductive biases, such as SE(3) and permutation equivariance to respect symmetry and graph message-passing networks to capture local chemistry, yet the generated molecules still struggle with physical plausibility.
|
https://arxiv.org/abs/2507.00899v1
|
https://arxiv.org/pdf/2507.00899v1.pdf
| null |
[
"Carlos Vonessen",
"Charles Harris",
"Miruna Cretu",
"Pietro Liò"
] |
[
"Drug Design"
] | 2025-07-01T00:00:00 |
https://arxiv.org/abs/2507.00899
|
https://arxiv.org/pdf/2507.00899
|
tabasco-a-fast-simplified-model-for-molecular
| null |
[] |
https://paperswithcode.com/paper/l0-reinforcement-learning-to-become-general-1
|
2506.23667
| null | null |
L0: Reinforcement Learning to Become General Agents
|
Training large language models (LLMs) to act as autonomous agents for multi-turn, long-horizon tasks poses significant challenges in scalability and training efficiency. To address this, we introduce L-Zero (L0), a scalable, end-to-end training pipeline for general-purpose agents. Featuring a low-cost, extensible, and sandboxed concurrent agent worker pool, L0 lowers the barrier for applying reinforcement learning in complex environments. We also introduce NB-Agent, the agent scaffold within L0, which operates in a "code-as-action" fashion via a Read-Eval-Print-Loop (REPL). We evaluate L0 on factuality question-answering benchmarks. Our experiments demonstrate that a base model can develop robust problem-solving skills using solely Reinforcement Learning with Verifiable Rewards (RLVR). On the Qwen2.5-7B-Instruct model, our method boosts accuracy on SimpleQA from 30% to 80% and on HotpotQA from 22% to 41%. We have open-sourced the entire L0 system, including our L0 series models, the NB-Agent, a complete training pipeline, and the corresponding training recipes at https://github.com/cmriat/l0.
|
We have open-sourced the entire L0 system, including our L0 series models, the NB-Agent, a complete training pipeline, and the corresponding training recipes at https://github.com/cmriat/l0.
|
https://arxiv.org/abs/2506.23667v1
|
https://arxiv.org/pdf/2506.23667v1.pdf
| null |
[
"Junjie Zhang",
"Jingyi Xi",
"Zhuoyang Song",
"Junyu Lu",
"Yuhua Ke",
"Ting Sun",
"Yukun Yang",
"Jiaxing Zhang",
"Songxin Zhang",
"Zejian Xie"
] |
[
"Question Answering",
"reinforcement-learning",
"Reinforcement Learning"
] | 2025-06-30T00:00:00 |
https://arxiv.org/abs/2506.23667
|
https://arxiv.org/pdf/2506.23667
|
l0-reinforcement-learning-to-become-general
| null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
}
] |
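To illustrate the "code-as-action" REPL scaffold described in the L0 abstract above, here is a minimal sketch of a persistent Python execution loop; it is an assumption-level toy, not the actual NB-Agent implementation.

```python
import contextlib
import io

def run_agent_step(action_code: str, namespace: dict) -> str:
    """Execute one code-as-action step; the namespace persists across steps."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(action_code, namespace)
    except Exception as exc:
        buf.write(f"Error: {exc!r}")
    return buf.getvalue()  # observation fed back to the policy

# Toy episode: the policy emits code, the REPL returns observations.
ns: dict = {}
run_agent_step("x = 21", ns)
print(run_agent_step("print(x * 2)", ns))  # -> 42
```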
https://paperswithcode.com/paper/yolov12-attention-centric-real-time-object
|
2502.12524
| null | null |
YOLOv12: Attention-Centric Real-Time Object Detectors
|
Enhancing the network architecture of the YOLO framework has been crucial for a long time, but has focused on CNN-based improvements despite the proven superiority of attention mechanisms in modeling capabilities. This is because attention-based models cannot match the speed of CNN-based models. This paper proposes an attention-centric YOLO framework, namely YOLOv12, that matches the speed of previous CNN-based ones while harnessing the performance benefits of attention mechanisms. YOLOv12 surpasses all popular real-time object detectors in accuracy with competitive speed. For example, YOLOv12-N achieves 40.6% mAP with an inference latency of 1.64 ms on a T4 GPU, outperforming advanced YOLOv10-N / YOLOv11-N by 2.1%/1.2% mAP with a comparable speed. This advantage extends to other model scales. YOLOv12 also surpasses end-to-end real-time detectors that improve DETR, such as RT-DETR / RT-DETRv2: YOLOv12-S beats RT-DETR-R18 / RT-DETRv2-R18 while running 42% faster, using only 36% of the computation and 45% of the parameters. More comparisons are shown in Figure 1.
|
This paper proposes an attention-centric YOLO framework, namely YOLOv12, that matches the speed of previous CNN-based ones while harnessing the performance benefits of attention mechanisms.
|
https://arxiv.org/abs/2502.12524v1
|
https://arxiv.org/pdf/2502.12524v1.pdf
| null |
[
"Yunjie Tian",
"Qixiang Ye",
"David Doermann"
] |
[
"GPU",
"Object"
] | 2025-02-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/benchmarking-and-analyzing-3d-aware-image-1
|
2306.12423
| null |
MZopld6S22
|
Benchmarking and Analyzing 3D-aware Image Synthesis with a Modularized Codebase
|
Despite the rapid advance of 3D-aware image synthesis, existing studies usually adopt a mixture of techniques and tricks, leaving it unclear how each part contributes to the final performance in terms of generality. Following the most popular and effective paradigm in this field, which incorporates a neural radiance field (NeRF) into the generator of a generative adversarial network (GAN), we build a well-structured codebase, dubbed Carver, through modularizing the generation process. Such a design allows researchers to develop and replace each module independently, and hence offers an opportunity to fairly compare various approaches and recognize their contributions from the module perspective. The reproduction of a range of cutting-edge algorithms demonstrates the availability of our modularized codebase. We also perform a variety of in-depth analyses, such as the comparison across different types of point feature, the necessity of the tailing upsampler in the generator, the reliance on the camera pose prior, etc., which deepen our understanding of existing methods and point out some further directions of the research work. We release code and models at https://github.com/qiuyu96/Carver to facilitate the development and evaluation of this field.
|
Despite the rapid advance of 3D-aware image synthesis, existing studies usually adopt a mixture of techniques and tricks, leaving it unclear how each part contributes to the final performance in terms of generality.
|
https://arxiv.org/abs/2306.12423v1
|
https://arxiv.org/pdf/2306.12423v1.pdf
|
NeurIPS 2023 11
|
[
"Qiuyu Wang",
"Zifan Shi",
"Kecheng Zheng",
"Yinghao Xu",
"Sida Peng",
"Yujun Shen"
] |
[
"3D-Aware Image Synthesis",
"Benchmarking",
"Generative Adversarial Network",
"Image Generation",
"NeRF"
] | 2023-06-21T00:00:00 |
https://openreview.net/forum?id=MZopld6S22
|
https://openreview.net/pdf?id=MZopld6S22
|
benchmarking-and-analyzing-3d-aware-image
| null |
[] |
https://paperswithcode.com/paper/dynamic-mixture-of-curriculum-lora-experts
|
2506.11672
| null | null |
Dynamic Mixture of Curriculum LoRA Experts for Continual Multimodal Instruction Tuning
|
Continual multimodal instruction tuning is crucial for adapting Multimodal Large Language Models (MLLMs) to evolving tasks. However, most existing methods adopt a fixed architecture, struggling to adapt to new tasks due to static model capacity. We propose to evolve the architecture under parameter budgets for dynamic task adaptation, which remains unexplored and imposes two challenges: 1) task architecture conflict, where different tasks require varying layer-wise adaptations, and 2) modality imbalance, where different tasks rely unevenly on modalities, leading to unbalanced updates. To address these challenges, we propose a novel Dynamic Mixture of Curriculum LoRA Experts (D-MoLE) method, which automatically evolves the MLLM's architecture with controlled parameter budgets to continually adapt to new tasks while retaining previously learned knowledge. Specifically, we propose a dynamic layer-wise expert allocator, which automatically allocates LoRA experts across layers to resolve architecture conflicts, and routes instructions layer-wise to facilitate knowledge sharing among experts. Then, we propose a gradient-based inter-modal continual curriculum, which adjusts the update ratio of each module in the MLLM based on the difficulty of each modality within the task to alleviate the modality imbalance problem. Extensive experiments show that D-MoLE significantly outperforms state-of-the-art baselines, achieving a 15% average improvement over the best baseline. To the best of our knowledge, this is the first study of continual learning for MLLMs from an architectural perspective.
| null |
https://arxiv.org/abs/2506.11672v1
|
https://arxiv.org/pdf/2506.11672v1.pdf
| null |
[
"Chendi Ge",
"Xin Wang",
"Zeyang Zhang",
"Hong Chen",
"Jiapei Fan",
"Longtao Huang",
"Hui Xue",
"Wenwu Zhu"
] |
[
"Continual Learning"
] | 2025-06-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "ADaptive gradient method with the OPTimal convergence rate",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "ADOPT",
"source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate",
"source_url": "https://arxiv.org/abs/2411.02853v3"
}
] |
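The D-MoLE method above allocates LoRA experts across layers; for context, here is a minimal PyTorch sketch of the low-rank adaptation primitive itself. The rank `r` and scaling `alpha` are the usual LoRA hyperparameters, not values from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # freeze pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r                 # standard LoRA scaling

    def forward(self, x):
        # y = W x + (alpha / r) * B A x; only A and B receive gradients.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))                 # shape: (2, 768)
```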
https://paperswithcode.com/paper/enhancing-multimodal-continual-instruction
|
2506.02041
| null | null |
Enhancing Multimodal Continual Instruction Tuning with BranchLoRA
|
Multimodal Continual Instruction Tuning (MCIT) aims to finetune Multimodal Large Language Models (MLLMs) to continually align with human intent across sequential tasks. Existing approaches often rely on the Mixture-of-Experts (MoE) LoRA framework to preserve previous instruction alignments. However, these methods are prone to Catastrophic Forgetting (CF), as they aggregate all LoRA blocks via simple summation, which compromises performance over time. In this paper, we identify a critical parameter inefficiency in the MoELoRA framework within the MCIT context. Based on this insight, we propose BranchLoRA, an asymmetric framework to enhance both efficiency and performance. To mitigate CF, we introduce a flexible tuning-freezing mechanism within BranchLoRA, enabling branches to specialize in intra-task knowledge while fostering inter-task collaboration. Moreover, we incrementally incorporate task-specific routers to ensure an optimal branch distribution over time, rather than favoring the most recent task. To streamline inference, we introduce a task selector that automatically routes test inputs to the appropriate router without requiring task identity. Extensive experiments on the latest MCIT benchmark demonstrate that BranchLoRA significantly outperforms MoELoRA and maintains its superiority across various MLLM sizes.
| null |
https://arxiv.org/abs/2506.02041v1
|
https://arxiv.org/pdf/2506.02041v1.pdf
| null |
[
"Duzhen Zhang",
"Yong Ren",
"Zhong-Zhi Li",
"Yahan Yu",
"Jiahua Dong",
"Chenxing Li",
"Zhilong Ji",
"Jinfeng Bai"
] |
[
"Mixture-of-Experts"
] | 2025-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
}
] |
https://paperswithcode.com/paper/vlm-assisted-continual-learning-for-visual
|
2502.00843
| null | null |
VLM-Assisted Continual learning for Visual Question Answering in Self-Driving
|
In this paper, we propose a novel approach for solving the Visual Question Answering (VQA) task in autonomous driving by integrating Vision-Language Models (VLMs) with continual learning. In autonomous driving, VQA plays a vital role in enabling the system to understand and reason about its surroundings. However, traditional models often struggle with catastrophic forgetting when sequentially exposed to new driving tasks, such as perception, prediction, and planning, each requiring different forms of knowledge. To address this challenge, we present a novel continual learning framework that combines VLMs with selective memory replay and knowledge distillation, reinforced by task-specific projection layer regularization. The knowledge distillation allows a previously trained model to act as a "teacher" to guide the model through subsequent tasks, minimizing forgetting. Meanwhile, task-specific projection layers calculate the loss based on the divergence of feature representations, ensuring continuity in learning and reducing the shift between tasks. Evaluated on the DriveLM dataset, our framework shows substantial performance improvements, with gains ranging from 21.40% to 32.28% across various metrics. These results highlight the effectiveness of combining continual learning with VLMs in enhancing the resilience and reliability of VQA systems in autonomous driving. We will release our source code.
| null |
https://arxiv.org/abs/2502.00843v1
|
https://arxiv.org/pdf/2502.00843v1.pdf
| null |
[
"Yuxin Lin",
"Mengshi Qi",
"Liang Liu",
"Huadong Ma"
] |
[
"Autonomous Driving",
"Continual Learning",
"Knowledge Distillation",
"Question Answering",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2025-02-02T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://research.google/blog/auto-generated-summaries-in-google-docs/",
"description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)",
"full_name": "Knowledge Distillation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Knowledge Distillation",
"parent": null
},
"name": "Knowledge Distillation",
"source_title": "Distilling the Knowledge in a Neural Network",
"source_url": "http://arxiv.org/abs/1503.02531v1"
}
] |
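The knowledge-distillation method entry above describes softened teacher targets; the canonical distillation loss from the cited Hinton et al. paper combines a temperature-scaled KL term with the hard-label cross-entropy, as in this minimal PyTorch sketch. The blend weight `alpha` and temperature `T` are typical defaults, not values from the VQA paper.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-target KL (scaled by T^2, per Hinton et al.) plus hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```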
https://paperswithcode.com/paper/oasis-online-sample-selection-for-continual
|
2506.02011
| null | null |
OASIS: Online Sample Selection for Continual Visual Instruction Tuning
|
In continual visual instruction tuning (CVIT) scenarios, where multi-modal data continuously arrive in an online streaming manner, training delays from large-scale data significantly hinder real-time adaptation. While existing data selection strategies reduce training overheads, they rely on pre-trained reference models, which are impractical in CVIT setups due to unknown future data. Recent reference model-free online sample selection methods address this issue but typically select a fixed number of samples per batch (e.g., top-k), causing them to suffer from distribution shifts where informativeness varies across batches. To address these limitations, we propose OASIS, an adaptive online sample selection approach for CVIT that: (1) dynamically adjusts selected samples per batch based on relative inter-batch informativeness, and (2) minimizes redundancy of selected samples through iterative selection score updates. Empirical results across various MLLMs, such as LLaVA-1.5 and Qwen-VL-2.5, show that OASIS achieves comparable performance to full-data training using only 25% of the data and outperforms the state-of-the-art.
| null |
https://arxiv.org/abs/2506.02011v1
|
https://arxiv.org/pdf/2506.02011v1.pdf
| null |
[
"Minjae Lee",
"Minhyuk Seo",
"Tingyu Qu",
"Tinne Tuytelaars",
"Jonghyun Choi"
] |
[
"Informativeness"
] | 2025-05-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/boschresearch/OASIS",
"description": "OASIS is a [GAN](https://paperswithcode.com/method/gan)-based model to translate semantic label maps into realistic-looking images. The model builds on preceding work such as [Pix2Pix](https://paperswithcode.com/method/pix2pix) and SPADE. OASIS introduces the following innovations: \r\n\r\n1. The method is not dependent on the perceptual loss, which is commonly used for the semantic image synthesis task. A [VGG](https://paperswithcode.com/method/vgg) network trained on ImageNet is routinely employed as the perceptual loss to strongly improve the synthesis quality. The authors show that this perceptual loss also has negative effects: First, it reduces the diversity of the generated images. Second, it negatively influences the color distribution to be more biased towards ImageNet. OASIS eliminates the dependence on the perceptual loss by changing the common discriminator design: The OASIS discriminator segments an image into one of the real classes or an additional fake class. In doing so, it makes more efficient use of the label maps that the discriminator normally receives. This distinguishes the discriminator from the commonly used encoder-shaped discriminators, which concatenate the label maps to the input image and predict a single score per image. With the more fine-grained supervision through the loss of the OASIS discriminator, the perceptual loss is shown to become unnecessary.\r\n\r\n2. A user can generate a diverse set of images per label map by simply resampling noise. This is achieved by conditioning the [spatially-adaptive denormalization](https://arxiv.org/abs/1903.07291) module in each layer of the GAN generator directly on spatially replicated input noise. A side effect of this conditioning is that at inference time an image can be resampled either globally or locally (either the complete image changes or a restricted region in the image).",
"full_name": "OASIS",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Conditional Image-to-Image Translation Models",
"parent": null
},
"name": "OASIS",
"source_title": "You Only Need Adversarial Supervision for Semantic Image Synthesis",
"source_url": "https://arxiv.org/abs/2012.04781v3"
}
] |
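The OASIS paper above selects a variable number of samples per batch based on relative inter-batch informativeness; the following is a hedged sketch of that idea using a running-mean threshold. The momentum rule and the scoring values are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

class AdaptiveSelector:
    """Keep samples whose score beats a running inter-batch threshold."""
    def __init__(self, momentum: float = 0.9):
        self.momentum = momentum
        self.threshold = None

    def select(self, scores: np.ndarray) -> np.ndarray:
        batch_mean = float(scores.mean())
        if self.threshold is None:
            self.threshold = batch_mean
        else:
            self.threshold = (self.momentum * self.threshold
                              + (1 - self.momentum) * batch_mean)
        # The number of selected samples varies with batch informativeness.
        return np.where(scores > self.threshold)[0]

sel = AdaptiveSelector()
print(sel.select(np.array([0.2, 0.9, 0.5, 0.1])))  # e.g. [1 2]
```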
https://paperswithcode.com/paper/p-0-5-a-vision-language-action-model-with
|
2504.16054
| null | null |
$π_{0.5}$: a Vision-Language-Action Model with Open-World Generalization
|
In order for robots to be useful, they must perform practically relevant tasks in the real world, outside of the lab. While vision-language-action (VLA) models have demonstrated impressive results for end-to-end robot control, it remains an open question how far such models can generalize in the wild. We describe $\pi_{0.5}$, a new model based on $\pi_{0}$ that uses co-training on heterogeneous tasks to enable broad generalization. $\pi_{0.5}$ uses data from multiple robots, high-level semantic prediction, web data, and other sources to enable broadly generalizable real-world robotic manipulation. Our system uses a combination of co-training and hybrid multi-modal examples that combine image observations, language commands, object detections, semantic subtask prediction, and low-level actions. Our experiments show that this kind of knowledge transfer is essential for effective generalization, and we demonstrate for the first time that an end-to-end learning-enabled robotic system can perform long-horizon and dexterous manipulation skills, such as cleaning a kitchen or bedroom, in entirely new homes.
| null |
https://arxiv.org/abs/2504.16054v1
|
https://arxiv.org/pdf/2504.16054v1.pdf
| null |
[
"Physical Intelligence",
"Kevin Black",
"Noah Brown",
"James Darpinian",
"Karan Dhabalia",
"Danny Driess",
"Adnan Esmail",
"Michael Equi",
"Chelsea Finn",
"Niccolo Fusai",
"Manuel Y. Galliker",
"Dibya Ghosh",
"Lachy Groom",
"Karol Hausman",
"Brian Ichter",
"Szymon Jakubczak",
"Tim Jones",
"Liyiming Ke",
"Devin LeBlanc",
"Sergey Levine",
"Adrian Li-Bell",
"Mohith Mothukuri",
"Suraj Nair",
"Karl Pertsch",
"Allen Z. Ren",
"Lucy Xiaoyang Shi",
"Laura Smith",
"Jost Tobias Springenberg",
"Kyle Stachowicz",
"James Tanner",
"Quan Vuong",
"Homer Walke",
"Anna Walling",
"Haohuan Wang",
"Lili Yu",
"Ury Zhilinsky"
] |
[
"Transfer Learning",
"Vision-Language-Action"
] | 2025-04-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multimodal-deepresearcher-generating-text
|
2506.02454
| null | null |
Multimodal DeepResearcher: Generating Text-Chart Interleaved Reports From Scratch with Agentic Framework
|
Visualizations play a crucial part in effective communication of concepts and information. Recent advances in reasoning and retrieval augmented generation have enabled Large Language Models (LLMs) to perform deep research and generate comprehensive reports. Despite this progress, existing deep research frameworks primarily focus on generating text-only content, leaving the automated generation of interleaved texts and visualizations underexplored. This novel task poses key challenges in designing informative visualizations and effectively integrating them with text reports. To address these challenges, we propose Formal Description of Visualization (FDV), a structured textual representation of charts that enables LLMs to learn from and generate diverse, high-quality visualizations. Building on this representation, we introduce Multimodal DeepResearcher, an agentic framework that decomposes the task into four stages: (1) researching, (2) exemplar report textualization, (3) planning, and (4) multimodal report generation. For the evaluation of generated multimodal reports, we develop MultimodalReportBench, which contains 100 diverse topics that serve as inputs, along with 5 dedicated metrics. Extensive experiments across models and evaluation methods demonstrate the effectiveness of Multimodal DeepResearcher. Notably, utilizing the same Claude 3.7 Sonnet model, Multimodal DeepResearcher achieves an 82% overall win rate over the baseline method.
| null |
https://arxiv.org/abs/2506.02454v1
|
https://arxiv.org/pdf/2506.02454v1.pdf
| null |
[
"Zhaorui Yang",
"Bo Pan",
"Han Wang",
"Yiyao Wang",
"Xingyu Liu",
"Minfeng Zhu",
"Bo Zhang",
"Wei Chen"
] |
[
"Retrieval-augmented Generation"
] | 2025-06-03T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/deepresearch-bench-a-comprehensive-benchmark
|
2506.11763
| null | null |
DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents
|
Deep Research Agents (DRAs) are a prominent category of LLM-based agents. By autonomously orchestrating multi-step web exploration, targeted retrieval, and higher-order synthesis, they transform vast amounts of online information into analyst-grade, citation-rich reports, compressing hours of manual desk research into minutes. However, a comprehensive benchmark for systematically evaluating the capabilities of these agents remains absent. To bridge this gap, we present DeepResearch Bench, a benchmark consisting of 100 PhD-level research tasks, each meticulously crafted by domain experts across 22 distinct fields. Evaluating DRAs is inherently complex and labor-intensive. We therefore propose two novel methodologies that achieve strong alignment with human judgment. The first is a reference-based method with adaptive criteria to assess the quality of generated research reports. The second framework evaluates a DRA's information retrieval and collection capabilities by assessing its effective citation count and overall citation accuracy. We have open-sourced DeepResearch Bench and key components of these frameworks at https://github.com/Ayanami0730/deep_research_bench to accelerate the development of practical LLM-based agents.
| null |
https://arxiv.org/abs/2506.11763v1
|
https://arxiv.org/pdf/2506.11763v1.pdf
| null |
[
"Mingxuan Du",
"Benfeng Xu",
"Chiwei Zhu",
"Xiaorui Wang",
"Zhendong Mao"
] |
[
"Information Retrieval",
"Retrieval"
] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
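As a rough reading of the citation-based evaluation described in the DeepResearch Bench abstract above, a minimal sketch: the effective citation count tallies citations judged to support their claims, and citation accuracy is the supported fraction. The exact definitions are the paper's; this is only an assumed simplification.

```python
def citation_metrics(citations: list[tuple[str, bool]]) -> tuple[int, float]:
    """(url, supports_claim) pairs -> (effective count, overall accuracy)."""
    effective = sum(1 for _, supported in citations if supported)
    accuracy = effective / len(citations) if citations else 0.0
    return effective, accuracy

print(citation_metrics([("https://a.example", True),
                        ("https://b.example", False)]))  # (1, 0.5)
```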
https://paperswithcode.com/paper/target-concrete-score-matching-a-holistic
|
2504.16431
| null | null |
Target Concrete Score Matching: A Holistic Framework for Discrete Diffusion
|
Discrete diffusion is a promising framework for modeling and generating discrete data. In this work, we present Target Concrete Score Matching (TCSM), a novel and versatile objective for training and fine-tuning discrete diffusion models. TCSM provides a general framework with broad applicability. It supports pre-training discrete diffusion models directly from data samples, and many existing discrete diffusion approaches naturally emerge as special cases of our more general TCSM framework. Furthermore, the same TCSM objective extends to post-training of discrete diffusion models, including fine-tuning using reward functions or preference data, and distillation of knowledge from pre-trained autoregressive models. These new capabilities stem from the core idea of TCSM, estimating the concrete score of the target distribution, which resides in the original (clean) data space. This allows seamless integration with reward functions and pre-trained models, which inherently only operate in the clean data space rather than the noisy intermediate spaces of diffusion processes. Our experiments on language modeling tasks demonstrate that TCSM matches or surpasses current methods. Additionally, TCSM is versatile, applicable to both pre-training and post-training scenarios, offering greater flexibility and sample efficiency.
| null |
https://arxiv.org/abs/2504.16431v1
|
https://arxiv.org/pdf/2504.16431v1.pdf
| null |
[
"Ruixiang Zhang",
"Shuangfei Zhai",
"Yizhe Zhang",
"James Thornton",
"Zijing Ou",
"Joshua Susskind",
"Navdeep Jaitly"
] |
[
"Language Modeling",
"Language Modelling"
] | 2025-04-23T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/fast-dllm-training-free-acceleration-of
|
2505.22618
| null | null |
Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding
|
Diffusion-based large language models (Diffusion LLMs) have shown promise for non-autoregressive text generation with parallel decoding capabilities. However, the practical inference speed of open-sourced Diffusion LLMs often lags behind autoregressive models due to the lack of Key-Value (KV) Cache and quality degradation when decoding multiple tokens simultaneously. To bridge this gap, we introduce a novel block-wise approximate KV Cache mechanism tailored for bidirectional diffusion models, enabling cache reuse with negligible performance drop. Additionally, we identify the root cause of generation quality degradation in parallel decoding as the disruption of token dependencies under the conditional independence assumption. To address this, we propose a confidence-aware parallel decoding strategy that selectively decodes tokens exceeding a confidence threshold, mitigating dependency violations and maintaining generation quality. Experimental results on LLaDA and Dream models across multiple LLM benchmarks demonstrate up to 27.6$\times$ throughput improvement with minimal accuracy loss, closing the performance gap with autoregressive models and paving the way for practical deployment of Diffusion LLMs.
| null |
https://arxiv.org/abs/2505.22618v3
|
https://arxiv.org/pdf/2505.22618v3.pdf
| null |
[
"Chengyue Wu",
"Hao Zhang",
"Shuchen Xue",
"Zhijian Liu",
"Shizhe Diao",
"Ligeng Zhu",
"Ping Luo",
"Song Han",
"Enze Xie"
] |
[
"Text Generation"
] | 2025-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
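To make the confidence-aware parallel decoding idea from the Fast-dLLM abstract above concrete, here is a simplified PyTorch sketch: in each denoising step, only masked positions whose top-token probability clears a threshold are committed, and the rest stay masked for later steps. The threshold value and mask-token convention are assumptions.

```python
import torch

def confidence_aware_step(logits, threshold=0.9, mask_id=0):
    """Commit only high-confidence tokens in one parallel decoding step.

    logits: (num_masked, vocab) scores for the currently masked positions.
    Returns (tokens, committed): uncommitted positions keep mask_id.
    """
    probs = torch.softmax(logits, dim=-1)
    conf, tokens = probs.max(dim=-1)
    committed = conf >= threshold
    tokens = torch.where(committed, tokens, torch.full_like(tokens, mask_id))
    return tokens, committed

logits = torch.randn(6, 32000)
tokens, committed = confidence_aware_step(logits)
print(committed.sum().item(), "of 6 positions committed this step")
```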
https://paperswithcode.com/paper/freemorph-tuning-free-generalized-image
|
2507.01953
| null | null |
FreeMorph: Tuning-Free Generalized Image Morphing with Diffusion Model
|
We present FreeMorph, the first tuning-free method for image morphing that accommodates inputs with different semantics or layouts. Unlike existing methods that rely on finetuning pre-trained diffusion models and are limited by time constraints and semantic/layout discrepancies, FreeMorph delivers high-fidelity image morphing without requiring per-instance training. Despite their efficiency and potential, tuning-free methods face challenges in maintaining high-quality results due to the non-linear nature of the multi-step denoising process and biases inherited from the pre-trained diffusion model. In this paper, we introduce FreeMorph to address these challenges by integrating two key innovations. 1) We first propose a guidance-aware spherical interpolation design that incorporates explicit guidance from the input images by modifying the self-attention modules, thereby addressing identity loss and ensuring directional transitions throughout the generated sequence. 2) We further introduce a step-oriented variation trend that blends self-attention modules derived from each input image to achieve controlled and consistent transitions that respect both inputs. Our extensive evaluations demonstrate that FreeMorph outperforms existing methods, running 10x to 50x faster and establishing a new state-of-the-art for image morphing.
| null |
https://arxiv.org/abs/2507.01953v1
|
https://arxiv.org/pdf/2507.01953v1.pdf
| null |
[
"Yukang Cao",
"Chenyang Si",
"Jinghao Wang",
"Ziwei Liu"
] |
[
"Denoising",
"Image Morphing"
] | 2025-07-02T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
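FreeMorph's first component builds on spherical interpolation; for context, here is a minimal NumPy sketch of plain slerp between two latent vectors. The guidance-aware variant in the paper additionally modifies self-attention and is not shown.

```python
import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float, eps: float = 1e-7):
    """Spherical linear interpolation between two latent vectors."""
    a = v0 / np.linalg.norm(v0)
    b = v1 / np.linalg.norm(v1)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # angle between them
    if omega < eps:                     # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    s = np.sin(omega)
    return (np.sin((1 - t) * omega) / s) * v0 + (np.sin(t * omega) / s) * v1

z = slerp(np.random.randn(512), np.random.randn(512), 0.5)
```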
https://paperswithcode.com/paper/re-examining-the-legendre-gauss-lobatto
|
2507.01660
| null | null |
Re-examining the Legendre-Gauss-Lobatto Pseudospectral Methods for Optimal Control
|
Pseudospectral methods represent an efficient approach for solving optimal control problems. While Legendre-Gauss-Lobatto (LGL) collocation points have traditionally been considered inferior to Legendre-Gauss (LG) and Legendre-Gauss-Radau (LGR) points in terms of convergence properties, this paper presents a rigorous re-examination of LGL-based methods. We introduce an augmented formulation that enhances the standard LGL collocation approach by incorporating an additional degree of freedom (DOF) into the interpolation structure. We demonstrate that this augmented formulation is mathematically equivalent to the integral formulation of the LGL collocation method. Through analytical derivation, we establish that the adjoint system in both the augmented differential and integral formulations corresponds to a Lobatto IIIB discontinuous collocation method for the costate vector, thereby resolving the previously reported convergence issues. Our comparative analysis of LG, LGR, and LGL collocation methods reveals significant advantages of the improved LGL approach in terms of discretized problem dimensionality and symplectic integration properties. Numerical examples validate our theoretical findings, demonstrating that the proposed LGL-based method achieves comparable accuracy to LG and LGR methods while offering superior computational performance for long-horizon optimal control problems due to the preservation of symplecticity.
|
Pseudospectral methods represent an efficient approach for solving optimal control problems.
|
https://arxiv.org/abs/2507.01660v1
|
https://arxiv.org/pdf/2507.01660v1.pdf
| null |
[
"Yilin Zou",
"Fanghua Jiang"
] |
[] | 2025-07-02T00:00:00 | null | null | null | null |
[] |
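For readers who want to reproduce the collocation grids discussed in the pseudospectral paper above: Legendre-Gauss-Lobatto points are the endpoints $\pm 1$ together with the roots of $P'_N$. A short NumPy sketch of this standard construction (not code from the paper):

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes(n: int) -> np.ndarray:
    """LGL collocation points: -1, the roots of P'_n on (-1, 1), and +1."""
    c = np.zeros(n + 1)
    c[-1] = 1.0                        # coefficients of P_n in Legendre basis
    interior = legendre.legroots(legendre.legder(c))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(lgl_nodes(4))  # 5 points: [-1, -0.6547, 0, 0.6547, 1]
```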
https://paperswithcode.com/paper/sasep-saliency-aware-structured-separation-of-1
|
2506.13224
| null | null |
SASep: Saliency-Aware Structured Separation of Geometry and Feature for Open Set Learning on Point Clouds
|
Recent advancements in deep learning have greatly enhanced 3D object recognition, but most models are limited to closed-set scenarios, unable to handle unknown samples in real-world applications. Open-set recognition (OSR) addresses this limitation by enabling models to both classify known classes and identify novel classes. However, current OSR methods rely on global features to differentiate known and unknown classes, treating the entire object uniformly and overlooking the varying semantic importance of its different parts. To address this gap, we propose Saliency-Aware Structured Separation (SASep), which includes (i) a tunable semantic decomposition (TSD) module to semantically decompose objects into important and unimportant parts, (ii) a geometric synthesis strategy (GSS) to generate pseudo-unknown objects by combining these unimportant parts, and (iii) a synth-aided margin separation (SMS) module to enhance feature-level separation by expanding the feature distributions between classes. Together, these components improve both geometric and feature representations, enhancing the model's ability to effectively distinguish known and unknown classes. Experimental results show that SASep achieves superior performance in 3D OSR, outperforming existing state-of-the-art methods.
|
Recent advancements in deep learning have greatly enhanced 3D object recognition, but most models are limited to closed-set scenarios, unable to handle unknown samples in real-world applications.
|
https://arxiv.org/abs/2506.13224v1
|
https://arxiv.org/pdf/2506.13224v1.pdf
|
CVPR 2025 1
|
[
"Jinfeng Xu",
"Xianzhi Li",
"Yuan Tang",
"Xu Han",
"Qiao Yu",
"Yixue Hao",
"Long Hu",
"Min Chen"
] |
[
"3D Object Recognition",
"Object Recognition",
"Open Set Learning"
] | 2025-06-16T00:00:00 |
http://openaccess.thecvf.com//content/CVPR2025/html/Xu_SASep_Saliency-Aware_Structured_Separation_of_Geometry_and_Feature_for_Open_CVPR_2025_paper.html
|
http://openaccess.thecvf.com//content/CVPR2025/papers/Xu_SASep_Saliency-Aware_Structured_Separation_of_Geometry_and_Feature_for_Open_CVPR_2025_paper.pdf
|
sasep-saliency-aware-structured-separation-of
| null |
[] |
https://paperswithcode.com/paper/evolutionary-perspectives-on-the-evaluation
|
2506.11102
| null | null |
Evolutionary Perspectives on the Evaluation of LLM-Based AI Agents: A Comprehensive Survey
|
The advent of large language models (LLMs), such as GPT, Gemini, and DeepSeek, has significantly advanced natural language processing, giving rise to sophisticated chatbots capable of diverse language-related tasks. The transition from these traditional LLM chatbots to more advanced AI agents represents a pivotal evolutionary step. However, existing evaluation frameworks often blur the distinctions between LLM chatbots and AI agents, leading to confusion among researchers selecting appropriate benchmarks. To bridge this gap, this paper introduces a systematic analysis of current evaluation approaches, grounded in an evolutionary perspective. We provide a detailed analytical framework that clearly differentiates AI agents from LLM chatbots along five key aspects: complex environment, multi-source instructor, dynamic feedback, multi-modal perception, and advanced capability. Further, we categorize existing evaluation benchmarks based on external environments, driving forces, and resulting advanced internal capabilities. For each category, we delineate relevant evaluation attributes, presented comprehensively in practical reference tables. Finally, we synthesize current trends and outline future evaluation methodologies through four critical lenses: environment, agent, evaluator, and metrics. Our findings offer actionable guidance for researchers, facilitating the informed selection and application of benchmarks in AI agent evaluation, thus fostering continued advancement in this rapidly evolving research domain.
| null |
https://arxiv.org/abs/2506.11102v1
|
https://arxiv.org/pdf/2506.11102v1.pdf
| null |
[
"Jiachen Zhu",
"Menghui Zhu",
"Renting Rui",
"Rong Shan",
"Congmin Zheng",
"Bo Chen",
"Yunjia Xi",
"Jianghao Lin",
"Weiwen Liu",
"Ruiming Tang",
"Yong Yu",
"Weinan Zhang"
] |
[
"AI Agent"
] | 2025-06-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!",
"full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v5"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Cosine Annealing** is a type of learning rate schedule that has the effect of starting with a large learning rate that is relatively rapidly decreased to a minimum value before being increased rapidly again. The resetting of the learning rate acts like a simulated restart of the learning process and the re-use of good weights as the starting point of the restart is referred to as a \"warm restart\" in contrast to a \"cold restart\" where a new set of small random numbers may be used as a starting point.\r\n\r\n$$\\eta\\_{t} = \\eta\\_{min}^{i} + \\frac{1}{2}\\left(\\eta\\_{max}^{i}-\\eta\\_{min}^{i}\\right)\\left(1+\\cos\\left(\\frac{T\\_{cur}}{T\\_{i}}\\pi\\right)\\right)\r\n$$\r\n\r\nWhere where $\\eta\\_{min}^{i}$ and $ \\eta\\_{max}^{i}$ are ranges for the learning rate, and $T\\_{cur}$ account for how many epochs have been performed since the last restart.\r\n\r\nText Source: [Jason Brownlee](https://machinelearningmastery.com/snapshot-ensemble-deep-learning-neural-network/)\r\n\r\nImage Source: [Gao Huang](https://www.researchgate.net/figure/Training-loss-of-100-layer-DenseNet-on-CIFAR10-using-standard-learning-rate-blue-and-M_fig2_315765130)",
"full_name": "Cosine Annealing",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Cosine Annealing",
"source_title": "SGDR: Stochastic Gradient Descent with Warm Restarts",
"source_url": "http://arxiv.org/abs/1608.03983v5"
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Cosine Annealing** is a learning rate schedule where we increase the learning rate linearly for $n$ updates and then anneal according to a cosine schedule afterwards.",
"full_name": "Linear Warmup With Cosine Annealing",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Cosine Annealing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/fastai/fastai/blob/43001e17ba469308e9688dfe99a891018bcf7ad4/courses/dl2/imdb_scripts/finetune_lm.py#L132",
"description": "**Discriminative Fine-Tuning** is a fine-tuning strategy that is used for [ULMFiT](https://paperswithcode.com/method/ulmfit) type models. Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent ([SGD](https://paperswithcode.com/method/sgd)) update of a model’s parameters $\\theta$ at time step $t$ looks like the following (Ruder, 2016):\r\n\r\n$$ \\theta\\_{t} = \\theta\\_{t-1} − \\eta\\cdot\\nabla\\_{\\theta}J\\left(\\theta\\right)$$\r\n\r\nwhere $\\eta$ is the learning rate and $\\nabla\\_{\\theta}J\\left(\\theta\\right)$ is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters $\\theta$ into {$\\theta\\_{1}, \\ldots, \\theta\\_{L}$} where $\\theta\\_{l}$ contains the parameters of the model at the $l$-th layer and $L$ is the number of layers of the model. Similarly, we obtain {$\\eta\\_{1}, \\ldots, \\eta\\_{L}$} where $\\theta\\_{l}$ where $\\eta\\_{l}$ is the learning rate of the $l$-th layer. The SGD update with discriminative finetuning is then:\r\n\r\n$$ \\theta\\_{t}^{l} = \\theta\\_{t-1}^{l} - \\eta^{l}\\cdot\\nabla\\_{\\theta^{l}}J\\left(\\theta\\right) $$\r\n\r\nThe authors find that empirically it worked well to first choose the learning rate $\\eta^{L}$ of the last layer by fine-tuning only the last layer and using $\\eta^{l-1}=\\eta^{l}/2.6$ as the learning rate for lower layers.",
"full_name": "Discriminative Fine-Tuning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Fine-Tuning** methods in deep learning take existing trained networks and 'fine-tune' them to a new task so that information contained in the weights can be repurposed. Below you can find a continuously updating list of fine-tuning methods.",
"name": "Fine-Tuning",
"parent": null
},
"name": "Discriminative Fine-Tuning",
"source_title": "Universal Language Model Fine-tuning for Text Classification",
"source_url": "http://arxiv.org/abs/1801.06146v5"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**GPT** is a [Transformer](https://paperswithcode.com/method/transformer)-based architecture and training procedure for natural language processing tasks. Training follows a two-stage procedure. First, a language modeling objective is used on\r\nthe unlabeled data to learn the initial parameters of a neural network model. Subsequently, these parameters are adapted to a target task using the corresponding supervised objective.",
"full_name": "GPT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "GPT",
"source_title": "Improving Language Understanding by Generative Pre-Training",
"source_url": "https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf"
}
] |
https://paperswithcode.com/paper/thought-augmented-planning-for-llm-powered
|
2506.23485
| null | null |
Thought-Augmented Planning for LLM-Powered Interactive Recommender Agent
|
Interactive recommendation is a typical information-seeking task that allows users to interactively express their needs through natural language and obtain personalized recommendations. Large language model-powered (LLM-powered) agents have become a new paradigm in interactive recommendations, effectively capturing users' real-time needs and enhancing personalized experiences. However, due to limited planning and generalization capabilities, existing formulations of LLM-powered interactive recommender agents struggle to effectively address diverse and complex user intents, such as intuitive, unrefined, or occasionally ambiguous requests. To tackle this challenge, we propose a novel thought-augmented interactive recommender agent system (TAIRA) that addresses complex user intents through distilled thought patterns. Specifically, TAIRA is designed as an LLM-powered multi-agent system featuring a manager agent that orchestrates recommendation tasks by decomposing user needs and planning subtasks, with its planning capacity strengthened through Thought Pattern Distillation (TPD), a thought-augmentation method that extracts high-level thoughts from the agent's and human experts' experiences. Moreover, we designed a set of user simulation schemes to generate personalized queries of different difficulties and evaluate the recommendations based on specific datasets. Through comprehensive experiments conducted across multiple datasets, TAIRA exhibits significantly enhanced performance compared to existing methods. Notably, TAIRA shows a greater advantage on more challenging tasks while generalizing effectively on novel tasks, further validating its superiority in managing complex user intents within interactive recommendation systems. The code is publicly available at: https://github.com/Alcein/TAIRA.
| null |
https://arxiv.org/abs/2506.23485v1
|
https://arxiv.org/pdf/2506.23485v1.pdf
| null |
[
"Haocheng Yu",
"Yaxiong Wu",
"Hao Wang",
"Wei Guo",
"Yong liu",
"Yawen Li",
"Yuyang Ye",
"Junping Du",
"Enhong Chen"
] |
[
"Interactive Recommendation",
"Large Language Model",
"Recommendation Systems",
"User Simulation"
] | 2025-06-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/naviagent-bilevel-planning-on-tool-dependency
|
2506.19500
| null | null |
NaviAgent: Bilevel Planning on Tool Dependency Graphs for Function Calling
|
LLMs' reliance on static knowledge and fragile tool invocation severely hinders the orchestration of complex, heterogeneous toolchains, particularly at large scales. Existing methods typically use rigid single-path execution, resulting in poor error recovery and exponentially growing search spaces. We introduce NaviAgent, a graph-navigated bilevel planning architecture for robust function calling, comprising a Multi-Path Decider and Graph-Encoded Navigator. As an LLM-powered agent, the Multi-Path Decider defines a four-dimensional decision space and continuously perceives environmental states, dynamically selecting the optimal action to fully cover all tool invocation scenarios. The Graph-Encoded Navigator constructs a Tool Dependency Heterogeneous Graph (TDHG), where node embeddings explicitly fuse API schema structure with historical invocation behavior. It also integrates a novel heuristic search strategy that guides the Decider toward efficient and highly successful toolchains, even for unseen tool combinations. Experiments show that NaviAgent consistently achieves the highest task success rate (TSR) across all foundation models and task complexities, outperforming the average baselines (ReAct, ToolLLM, α-UMI) by 13.5%, 16.4%, and 19.0% on Qwen2.5-14B, Qwen2.5-32B, and Deepseek-V3, respectively. Its execution steps are typically within one step of the most efficient baseline, ensuring a strong balance between quality and efficiency. Notably, a fine-tuned Qwen2.5-14B model achieves a TSR of 49.5%, surpassing the much larger 32B model (44.9%) under our architecture. Incorporating the Graph-Encoded Navigator further boosts TSR by an average of 2.4 points, with gains of over 9 points on complex tasks for larger models (Deepseek-V3 and GPT-4o), highlighting its essential role in toolchain orchestration.
| null |
https://arxiv.org/abs/2506.19500v1
|
https://arxiv.org/pdf/2506.19500v1.pdf
| null |
[
"Yan Jiang",
"Hao Zhou",
"LiZhong GU",
"Ai Han",
"Tianlong Li"
] |
[
"Heuristic Search"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/improving-llm-agent-planning-with-in-context
|
2506.09171
| null | null |
Improving LLM Agent Planning with In-Context Learning via Atomic Fact Augmentation and Lookahead Search
|
Large Language Models (LLMs) are increasingly capable but often require significant guidance or extensive interaction history to perform effectively in complex, interactive environments. Existing methods may struggle with adapting to new information or efficiently utilizing past experiences for multi-step reasoning without fine-tuning. We introduce a novel LLM agent framework that enhances planning capabilities through in-context learning, facilitated by atomic fact augmentation and a recursive lookahead search. Our agent learns to extract task-critical "atomic facts" from its interaction trajectories. These facts dynamically augment the prompts provided to LLM-based components responsible for action proposal, latent world model simulation, and state-value estimation. Planning is performed via a depth-limited lookahead search, where the LLM simulates potential trajectories and evaluates their outcomes, guided by the accumulated facts and interaction history. This approach allows the agent to improve its understanding and decision-making online, leveraging its experience to refine its behavior without weight updates. We provide a theoretical motivation linking performance to the quality of fact-based abstraction and LLM simulation accuracy. Empirically, our agent demonstrates improved performance and adaptability on challenging interactive tasks, achieving more optimal behavior as it accumulates experience, showcased in tasks such as TextFrozenLake and ALFWorld.
| null |
https://arxiv.org/abs/2506.09171v1
|
https://arxiv.org/pdf/2506.09171v1.pdf
| null |
[
"Samuel Holt",
"Max Ruiz Luyten",
"Thomas Pouplin",
"Mihaela van der Schaar"
] |
[
"In-Context Learning"
] | 2025-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/maple-multi-agent-adaptive-planning-with-long
|
2506.05813
| null | null |
MAPLE: Multi-Agent Adaptive Planning with Long-Term Memory for Table Reasoning
|
Table-based question answering requires complex reasoning capabilities that current LLMs struggle to achieve with single-pass inference. Existing approaches, such as Chain-of-Thought reasoning and question decomposition, lack error detection mechanisms and discard problem-solving experiences, contrasting sharply with how humans tackle such problems. In this paper, we propose MAPLE (Multi-agent Adaptive Planning with Long-term mEmory), a novel framework that mimics human problem-solving through specialized cognitive agents working in a feedback-driven loop. MAPLE integrates four key components: (1) a Solver using the ReAct paradigm for reasoning, (2) a Checker for answer verification, (3) a Reflector for error diagnosis and strategy correction, and (4) an Archiver managing long-term memory for experience reuse and evolution. Experiments on WikiTQ and TabFact demonstrate significant improvements over existing methods, achieving state-of-the-art performance across multiple LLM backbones.
| null |
https://arxiv.org/abs/2506.05813v1
|
https://arxiv.org/pdf/2506.05813v1.pdf
| null |
[
"Ye Bai",
"Minghan Wang",
"Thuy-Trang Vu"
] |
[
"Question Answering",
"Table-based Question Answering"
] | 2025-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ella-embodied-social-agents-with-lifelong
|
2506.24019
| null | null |
Ella: Embodied Social Agents with Lifelong Memory
|
We introduce Ella, an embodied social agent capable of lifelong learning within a community in a 3D open world, where agents accumulate experiences and acquire knowledge through everyday visual observations and social interactions. At the core of Ella's capabilities is a structured, long-term multimodal memory system that stores, updates, and retrieves information effectively. It consists of a name-centric semantic memory for organizing acquired knowledge and a spatiotemporal episodic memory for capturing multimodal experiences. By integrating this lifelong memory system with foundation models, Ella retrieves relevant information for decision-making, plans daily activities, builds social relationships, and evolves autonomously while coexisting with other intelligent beings in the open world. We conduct capability-oriented evaluations in a dynamic 3D open world where 15 agents engage in social activities for days and are assessed with a suite of unseen controlled evaluations. Experimental results show that Ella can influence, lead, and cooperate with other agents well to achieve goals, showcasing its ability to learn effectively through observation and social interaction. Our findings highlight the transformative potential of combining structured memory systems with foundation models for advancing embodied intelligence. More videos can be found at https://umass-embodied-agi.github.io/Ella/.
| null |
https://arxiv.org/abs/2506.24019v1
|
https://arxiv.org/pdf/2506.24019v1.pdf
| null |
[
"Hongxin Zhang",
"Zheyuan Zhang",
"Zeyuan Wang",
"Zunzhe Zhang",
"Lixing Fang",
"Qinhong Zhou",
"Chuang Gan"
] |
[
"Lifelong learning"
] | 2025-06-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/membench-towards-more-comprehensive
|
2506.21605
| null | null |
MemBench: Towards More Comprehensive Evaluation on the Memory of LLM-based Agents
|
Recent works have highlighted the significance of memory mechanisms in LLM-based agents, which enable them to store observed information and adapt to dynamic environments. However, evaluating their memory capabilities still remains challenging. Previous evaluations are commonly limited by the diversity of memory levels and interactive scenarios. They also lack comprehensive metrics to reflect the memory capabilities from multiple aspects. To address these problems, in this paper, we construct a more comprehensive dataset and benchmark to evaluate the memory capability of LLM-based agents. Our dataset incorporates factual memory and reflective memory as different levels, and proposes participation and observation as various interactive scenarios. Based on our dataset, we present a benchmark, named MemBench, to evaluate the memory capability of LLM-based agents from multiple aspects, including their effectiveness, efficiency, and capacity. To benefit the research community, we release our dataset and project at https://github.com/import-myself/Membench.
| null |
https://arxiv.org/abs/2506.21605v1
|
https://arxiv.org/pdf/2506.21605v1.pdf
| null |
[
"Haoran Tan",
"Zeyu Zhang",
"Chen Ma",
"Xu Chen",
"Quanyu Dai",
"Zhenhua Dong"
] |
[
"Diversity"
] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mem1-learning-to-synergize-memory-and
|
2506.15841
| null | null |
MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents
|
Modern language agents must operate over long-horizon, multi-turn interactions, where they retrieve external information, adapt to observations, and answer interdependent queries. Yet, most LLM systems rely on full-context prompting, appending all past turns regardless of their relevance. This leads to unbounded memory growth, increased computational costs, and degraded reasoning performance on out-of-distribution input lengths. We introduce MEM1, an end-to-end reinforcement learning framework that enables agents to operate with constant memory across long multi-turn tasks. At each turn, MEM1 updates a compact shared internal state that jointly supports memory consolidation and reasoning. This state integrates prior memory with new observations from the environment while strategically discarding irrelevant or redundant information. To support training in more realistic and compositional settings, we propose a simple yet effective and scalable approach to constructing multi-turn environments by composing existing datasets into arbitrarily complex task sequences. Experiments across three domains, including internal retrieval QA, open-domain web QA, and multi-turn web shopping, show that MEM1-7B improves performance by 3.5x while reducing memory usage by 3.7x compared to Qwen2.5-14B-Instruct on a 16-objective multi-hop QA task, and generalizes beyond the training horizon. Our results demonstrate the promise of reasoning-driven memory consolidation as a scalable alternative to existing solutions for training long-horizon interactive agents, where both efficiency and performance are optimized.
| null |
https://arxiv.org/abs/2506.15841v1
|
https://arxiv.org/pdf/2506.15841v1.pdf
| null |
[
"Zijian Zhou",
"Ao Qu",
"Zhaoxuan Wu",
"Sunghwan Kim",
"Alok Prakash",
"Daniela Rus",
"Jinhua Zhao",
"Bryan Kian Hsiang Low",
"Paul Pu Liang"
] |
[] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/contextual-experience-replay-for-self
|
2506.06698
| null | null |
Contextual Experience Replay for Self-Improvement of Language Agents
|
Large language model (LLM) agents have been applied to sequential decision-making tasks such as web navigation, but without any environment-specific experiences, they often fail in these complex tasks. Moreover, current LLM agents are not designed to continually learn from past experiences during inference time, which could be crucial for them to gain these environment-specific experiences. To address this, we propose Contextual Experience Replay (CER), a training-free framework to enable efficient self-improvement for language agents in their context window. Specifically, CER accumulates and synthesizes past experiences into a dynamic memory buffer. These experiences encompass environment dynamics and common decision-making patterns, allowing the agents to retrieve and augment themselves with relevant knowledge in new tasks, enhancing their adaptability in complex environments. We evaluate CER on the challenging WebArena and VisualWebArena benchmarks. On VisualWebArena, CER achieves a competitive performance of 31.9%. On WebArena, CER also achieves a competitive average success rate of 36.7%, relatively improving the success rate of the GPT-4o agent baseline by 51.0%. We also conduct a comprehensive analysis of CER to demonstrate its efficiency and validity and to understand it better.
| null |
https://arxiv.org/abs/2506.06698v1
|
https://arxiv.org/pdf/2506.06698v1.pdf
| null |
[
"Yitao Liu",
"Chenglei Si",
"Karthik Narasimhan",
"Shunyu Yao"
] |
[
"Decision Making",
"Large Language Model",
"Sequential Decision Making"
] | 2025-06-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Experience Replay** is a replay memory technique used in reinforcement learning where we store the agent’s experiences at each time-step, $e\\_{t} = \\left(s\\_{t}, a\\_{t}, r\\_{t}, s\\_{t+1}\\right)$ in a data-set $D = e\\_{1}, \\cdots, e\\_{N}$ , pooled over many episodes into a replay memory. We then usually sample the memory randomly for a minibatch of experience, and use this to learn off-policy, as with Deep Q-Networks. This tackles the problem of autocorrelation leading to unstable training, by making the problem more like a supervised learning problem.\r\n\r\nImage Credit: [Hands-On Reinforcement Learning with Python, Sudharsan Ravichandiran](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788836524)",
"full_name": "Experience Replay",
"introduced_year": 1993,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Replay Memory",
"parent": null
},
"name": "Experience Replay",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/debate-reflect-and-distill-multi-agent
|
2506.03541
| null | null |
Debate, Reflect, and Distill: Multi-Agent Feedback with Tree-Structured Preference Optimization for Efficient Language Model Enhancement
|
Large Language Models (LLMs) continue to set new standards in knowledge-intensive and complex reasoning tasks, yet their high computational demands limit widespread adoption. While distilling large models into smaller ones offers a sustainable solution, current techniques--such as static knowledge distillation, resource-intensive reinforcement learning from human feedback, or limited self-reflection--struggle to yield substantial and lasting performance gains. In this paper, we present a novel Debate and Reflect (D&R) framework that orchestrates multi-turn debates between smaller models and stronger teacher models, eliciting actionable feedback (e.g., error analysis, corrective strategies) to guide student models. Further, we introduce Tree-structured Direct Preference Optimization (T-DPO) to efficiently leverage these debate logs, organizing interactions into a hierarchical format for effective training. Empirical evaluations across diverse NLP benchmarks demonstrate that our approach significantly improves smaller-model accuracy, robustness, and generalization, outperforming conventional baselines by a large margin.
| null |
https://arxiv.org/abs/2506.03541v1
|
https://arxiv.org/pdf/2506.03541v1.pdf
| null |
[
"Xiaofeng Zhou",
"Heyan Huang",
"Lizi Liao"
] |
[
"Knowledge Distillation",
"Language Modeling",
"Language Modelling"
] | 2025-06-04T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/mitigating-manipulation-and-enhancing
|
2506.02992
| null | null |
Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation
|
Large Language Models (LLMs) are increasingly explored for legal argument generation, yet they pose significant risks of manipulation through hallucination and ungrounded persuasion, and often fail to utilize provided factual bases effectively or abstain when arguments are untenable. This paper introduces a novel reflective multi-agent method designed to address these challenges in the context of legally compliant persuasion. Our approach employs specialized agents--a Factor Analyst and an Argument Polisher--in an iterative refinement process to generate 3-ply legal arguments (plaintiff, defendant, rebuttal). We evaluate Reflective Multi-Agent against single-agent, enhanced-prompt single-agent, and non-reflective multi-agent baselines using four diverse LLMs (GPT-4o, GPT-4o-mini, Llama-4-Maverick-17b-128e, Llama-4-Scout-17b-16e) across three legal scenarios: "arguable", "mismatched", and "non-arguable". Results demonstrate Reflective Multi-Agent's significant superiority in successful abstention (preventing generation when arguments cannot be grounded), marked improvements in hallucination accuracy (reducing fabricated and misattributed factors), particularly in "non-arguable" scenarios, and enhanced factor utilization recall (improving the use of provided case facts). These findings suggest that structured reflection within a multi-agent framework offers a robust computable method for fostering ethical persuasion and mitigating manipulation in LLM-based legal argumentation systems, a critical step towards trustworthy AI in law. Project page: https://lizhang-aiandlaw.github.io/A-Reflective-Multi-Agent-Approach-for-Legal-Argument-Generation/
| null |
https://arxiv.org/abs/2506.02992v1
|
https://arxiv.org/pdf/2506.02992v1.pdf
| null |
[
"Li Zhang",
"Kevin D. Ashley"
] |
[
"Hallucination"
] | 2025-06-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/knowledge-augmented-finetuning-matters-in
|
2506.22852
| null | null |
Knowledge Augmented Finetuning Matters in both RAG and Agent Based Dialog Systems
|
Large language models (LLMs) have recently been applied to dialog systems. Despite making progress, LLMs are prone to errors in knowledge-intensive scenarios. Recently, approaches based on retrieval augmented generation (RAG) and agent have emerged to improve the factual accuracy by enhancing the LLMs with knowledge retrieved from external knowledge bases (KBs). This is mostly implemented by prompting the LLMs with instructions, examples and the retrieved knowledge. However, LLMs may have difficulty using the retrieved knowledge effectively for response generation, because they are not well trained to do such generation for specific domains. To mitigate this problem, we propose to finetune the LLMs in the RAG-based and agent-based systems with domain-specific data, together with domain-specific external knowledge, which is called knowledge augmented finetuning (KAFT). We base our study on the MobileCS2 dataset, a real-life customer service dialog dataset that features intensive knowledge interactions, to systematically compare the prompting and KAFT techniques in the RAG-based and agent-based systems. Experiment results show that KAFT substantially surpasses prompting in both RAG and agent systems, particularly in terms of factual accuracy. To the best of our knowledge, this paper represents the first solid empirical work to investigate the KAFT idea.
| null |
https://arxiv.org/abs/2506.22852v1
|
https://arxiv.org/pdf/2506.22852v1.pdf
| null |
[
"Yucheng Cai",
"Yuxuan Wu",
"Yi Huang",
"Junlan Feng",
"Zhijian Ou"
] |
[
"RAG",
"Response Generation",
"Retrieval-augmented Generation"
] | 2025-06-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
},
{
"code_snippet_url": null,
"description": "**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture. It uses a standard seq2seq/NMT architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).",
"full_name": "BART",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "BART",
"source_title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",
"source_url": "https://arxiv.org/abs/1910.13461v1"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
},
{
"code_snippet_url": "",
"description": "**Retriever-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For query $x$, Maximum Inner Product Search (MIPS) is used to find the top-K documents $z\\_{i}$. For final prediction $y$, we treat $z$ as a latent variable and marginalize over seq2seq predictions given different documents.",
"full_name": "RAG",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "RAG",
"source_title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks",
"source_url": "https://arxiv.org/abs/2005.11401v4"
}
] |
https://paperswithcode.com/paper/arag-agentic-retrieval-augmented-generation
|
2506.21931
| null | null |
ARAG: Agentic Retrieval Augmented Generation for Personalized Recommendation
|
Retrieval-Augmented Generation (RAG) has shown promise in enhancing recommendation systems by incorporating external context into large language model prompts. However, existing RAG-based approaches often rely on static retrieval heuristics and fail to capture nuanced user preferences in dynamic recommendation scenarios. In this work, we introduce ARAG, an Agentic Retrieval-Augmented Generation framework for Personalized Recommendation, which integrates a multi-agent collaboration mechanism into the RAG pipeline. To better understand the long-term and session behavior of the user, ARAG leverages four specialized LLM-based agents: a User Understanding Agent that summarizes user preferences from long-term and session contexts, a Natural Language Inference (NLI) Agent that evaluates semantic alignment between candidate items retrieved by RAG and inferred intent, a Context Summary Agent that summarizes the findings of the NLI agent, and an Item Ranker Agent that generates a ranked list of recommendations based on contextual fit. We evaluate ARAG across three datasets. Experimental results demonstrate that ARAG significantly outperforms standard RAG and recency-based baselines, achieving up to 42.1% improvement in NDCG@5 and 35.5% in Hit@5. We also conduct an ablation study to analyse the effect of the different components of ARAG. Our findings highlight the effectiveness of integrating agentic reasoning into retrieval-augmented recommendation and provide new directions for LLM-based personalization.
| null |
https://arxiv.org/abs/2506.21931v1
|
https://arxiv.org/pdf/2506.21931v1.pdf
| null |
[
"Reza Yousefi Maragheh",
"Pratheek Vadla",
"Priyank Gupta",
"Kai Zhao",
"Aysenur Inan",
"Kehui Yao",
"Jianpeng Xu",
"Praveen Kanumala",
"Jason Cho",
"Sushant Kumar"
] |
[
"Large Language Model",
"Natural Language Inference",
"RAG",
"Recommendation Systems",
"Retrieval",
"Retrieval-augmented Generation"
] | 2025-06-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
},
{
"code_snippet_url": null,
"description": "**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture. It uses a standard seq2seq/NMT architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).",
"full_name": "BART",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "BART",
"source_title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",
"source_url": "https://arxiv.org/abs/1910.13461v1"
},
{
"code_snippet_url": "",
"description": "**Retriever-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For query $x$, Maximum Inner Product Search (MIPS) is used to find the top-K documents $z\\_{i}$. For final prediction $y$, we treat $z$ as a latent variable and marginalize over seq2seq predictions given different documents.",
"full_name": "RAG",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "RAG",
"source_title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks",
"source_url": "https://arxiv.org/abs/2005.11401v4"
}
] |
https://paperswithcode.com/paper/agentswift-efficient-llm-agent-design-via
|
2506.06017
| null | null |
AgentSwift: Efficient LLM Agent Design via Value-guided Hierarchical Search
|
Large language model (LLM) agents have demonstrated strong capabilities across diverse domains. However, designing high-performing agentic systems remains challenging. Existing agent search methods suffer from three major limitations: (1) an emphasis on optimizing agentic workflows while under-utilizing proven human-designed components such as memory, planning, and tool use; (2) high evaluation costs, as each newly generated agent must be fully evaluated on benchmarks; and (3) inefficient search in a large search space. In this work, we introduce a comprehensive framework to address these challenges. First, we propose a hierarchical search space that jointly models agentic workflow and composable functional components, enabling richer agentic system designs. Building on this structured design space, we introduce a predictive value model that estimates agent performance given an agentic system and task description, allowing for efficient, low-cost evaluation during the search process. Finally, we present a hierarchical Monte Carlo Tree Search (MCTS) strategy informed by uncertainty to guide the search. Experiments on seven benchmarks, covering embodied, math, web, tool, and game tasks, show that our method achieves an average performance gain of 8.34% over state-of-the-art baselines and exhibits faster search progress with steeper improvement trajectories. Code repo is available at https://github.com/Ericccc02/AgentSwift.
| null |
https://arxiv.org/abs/2506.06017v1
|
https://arxiv.org/pdf/2506.06017v1.pdf
| null |
[
"Yu Li",
"Lehui Li",
"Zhihao Wu",
"Qingmin Liao",
"Jianye Hao",
"Kun Shao",
"Fengli Xu",
"Yong Li"
] |
[
"Large Language Model",
"Math"
] | 2025-06-06T00:00:00 | null | null | null | null |
[] |
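The AgentSwift abstract above couples a predictive value model with uncertainty-guided MCTS. The toy selection rule below is one hedged way to picture that interaction; the `children` dict layout, the `value` field standing in for the value model's estimate, and the UCB-style exploration term are all assumptions for illustration, not the paper's algorithm.

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing a UCB-style score. Visited children use their
    empirical mean reward; unvisited ones fall back to the value model's cheap
    estimate, so not every candidate needs a full benchmark evaluation."""
    total_visits = sum(ch["n"] for ch in children) + 1
    def score(ch):
        if ch["n"] == 0:
            return ch["value"] + c * math.sqrt(math.log(total_visits))
        return ch["w"] / ch["n"] + c * math.sqrt(math.log(total_visits) / ch["n"])
    return max(children, key=score)

# Hypothetical candidates: value-model estimate, visit count n, total reward w.
candidates = [
    {"name": "workflow+memory", "value": 0.62, "n": 3, "w": 1.9},
    {"name": "workflow+planning", "value": 0.71, "n": 0, "w": 0.0},
]
print(uct_select(candidates)["name"])
```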
https://paperswithcode.com/paper/agent-to-agent-theory-of-mind-testing
|
2506.22957
| null | null |
Agent-to-Agent Theory of Mind: Testing Interlocutor Awareness among Large Language Models
|
As large language models (LLMs) are increasingly integrated into multi-agent and human-AI systems, understanding their awareness of both self-context and conversational partners is essential for ensuring reliable performance and robust safety. While prior work has extensively studied situational awareness, which refers to an LLM's ability to recognize its operating phase and constraints, it has largely overlooked the complementary capacity to identify and adapt to the identity and characteristics of a dialogue partner. In this paper, we formalize this latter capability as interlocutor awareness and present the first systematic evaluation of its emergence in contemporary LLMs. We examine interlocutor inference across three dimensions - reasoning patterns, linguistic style, and alignment preferences - and show that LLMs reliably identify same-family peers and certain prominent model families, such as GPT and Claude. To demonstrate its practical significance, we develop three case studies in which interlocutor awareness both enhances multi-LLM collaboration through prompt adaptation and introduces new alignment and safety vulnerabilities, including reward-hacking behaviors and increased jailbreak susceptibility. Our findings highlight the dual promise and peril of identity-sensitive behavior in LLMs, underscoring the need for further understanding of interlocutor awareness and new safeguards in multi-agent deployments. Our code is open-sourced at https://github.com/younwoochoi/InterlocutorAwarenessLLM.
| null |
https://arxiv.org/abs/2506.22957v1
|
https://arxiv.org/pdf/2506.22957v1.pdf
| null |
[
"Younwoo Choi",
"Changling Li",
"Yongjin Yang",
"Zhijing Jin"
] |
[] | 2025-06-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
    {
        "code_snippet_url": "",
        "description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function defined as $x\\Phi(x)$, where $\\Phi(x)$ is the standard Gaussian cumulative distribution function. Unlike the ReLU, which gates its input by sign, the GELU weights inputs by their value, and it is the activation used in models such as BERT and GPT.",
        "full_name": "Gaussian Error Linear Unit",
        "introduced_year": 2000,
        "main_collection": {
            "area": "General",
            "description": "**Activation functions** are functions, usually applied after an affine transformation, that introduce non-linearity into neural networks. Below you can find a continuously updating list of activation functions.",
            "name": "Activation Functions",
            "parent": null
        },
        "name": "GELU",
        "source_title": "Gaussian Error Linear Units (GELUs)",
        "source_url": "https://arxiv.org/abs/1606.08415v5"
    },
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Cosine Annealing** is a type of learning rate schedule that has the effect of starting with a large learning rate that is relatively rapidly decreased to a minimum value before being increased rapidly again. The resetting of the learning rate acts like a simulated restart of the learning process and the re-use of good weights as the starting point of the restart is referred to as a \"warm restart\" in contrast to a \"cold restart\" where a new set of small random numbers may be used as a starting point.\r\n\r\n$$\\eta\\_{t} = \\eta\\_{min}^{i} + \\frac{1}{2}\\left(\\eta\\_{max}^{i}-\\eta\\_{min}^{i}\\right)\\left(1+\\cos\\left(\\frac{T\\_{cur}}{T\\_{i}}\\pi\\right)\\right)\r\n$$\r\n\r\nWhere where $\\eta\\_{min}^{i}$ and $ \\eta\\_{max}^{i}$ are ranges for the learning rate, and $T\\_{cur}$ account for how many epochs have been performed since the last restart.\r\n\r\nText Source: [Jason Brownlee](https://machinelearningmastery.com/snapshot-ensemble-deep-learning-neural-network/)\r\n\r\nImage Source: [Gao Huang](https://www.researchgate.net/figure/Training-loss-of-100-layer-DenseNet-on-CIFAR10-using-standard-learning-rate-blue-and-M_fig2_315765130)",
"full_name": "Cosine Annealing",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Cosine Annealing",
"source_title": "SGDR: Stochastic Gradient Descent with Warm Restarts",
"source_url": "http://arxiv.org/abs/1608.03983v5"
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Cosine Annealing** is a learning rate schedule where we increase the learning rate linearly for $n$ updates and then anneal according to a cosine schedule afterwards.",
"full_name": "Linear Warmup With Cosine Annealing",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Cosine Annealing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/fastai/fastai/blob/43001e17ba469308e9688dfe99a891018bcf7ad4/courses/dl2/imdb_scripts/finetune_lm.py#L132",
"description": "**Discriminative Fine-Tuning** is a fine-tuning strategy that is used for [ULMFiT](https://paperswithcode.com/method/ulmfit) type models. Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent ([SGD](https://paperswithcode.com/method/sgd)) update of a model’s parameters $\\theta$ at time step $t$ looks like the following (Ruder, 2016):\r\n\r\n$$ \\theta\\_{t} = \\theta\\_{t-1} − \\eta\\cdot\\nabla\\_{\\theta}J\\left(\\theta\\right)$$\r\n\r\nwhere $\\eta$ is the learning rate and $\\nabla\\_{\\theta}J\\left(\\theta\\right)$ is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters $\\theta$ into {$\\theta\\_{1}, \\ldots, \\theta\\_{L}$} where $\\theta\\_{l}$ contains the parameters of the model at the $l$-th layer and $L$ is the number of layers of the model. Similarly, we obtain {$\\eta\\_{1}, \\ldots, \\eta\\_{L}$} where $\\theta\\_{l}$ where $\\eta\\_{l}$ is the learning rate of the $l$-th layer. The SGD update with discriminative finetuning is then:\r\n\r\n$$ \\theta\\_{t}^{l} = \\theta\\_{t-1}^{l} - \\eta^{l}\\cdot\\nabla\\_{\\theta^{l}}J\\left(\\theta\\right) $$\r\n\r\nThe authors find that empirically it worked well to first choose the learning rate $\\eta^{L}$ of the last layer by fine-tuning only the last layer and using $\\eta^{l-1}=\\eta^{l}/2.6$ as the learning rate for lower layers.",
"full_name": "Discriminative Fine-Tuning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Fine-Tuning** methods in deep learning take existing trained networks and 'fine-tune' them to a new task so that information contained in the weights can be repurposed. Below you can find a continuously updating list of fine-tuning methods.",
"name": "Fine-Tuning",
"parent": null
},
"name": "Discriminative Fine-Tuning",
"source_title": "Universal Language Model Fine-tuning for Text Classification",
"source_url": "http://arxiv.org/abs/1801.06146v5"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**GPT** is a [Transformer](https://paperswithcode.com/method/transformer)-based architecture and training procedure for natural language processing tasks. Training follows a two-stage procedure. First, a language modeling objective is used on\r\nthe unlabeled data to learn the initial parameters of a neural network model. Subsequently, these parameters are adapted to a target task using the corresponding supervised objective.",
"full_name": "GPT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "GPT",
"source_title": "Improving Language Understanding by Generative Pre-Training",
"source_url": "https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf"
}
] |
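The Cosine Annealing and Linear Warmup With Cosine Annealing records above give the schedule in closed form, so a short sketch can evaluate it directly. This is a minimal single-cycle version (no warm restarts); the function name and the specific step counts are illustrative assumptions, and `t_cur`/`t_i` mirror $T_{cur}$ and $T_{i}$ from the formula.

```python
import math

def warmup_cosine_lr(step, warmup_steps, total_steps, eta_min, eta_max):
    """Linear warmup for `warmup_steps` updates, then cosine annealing:
    eta_t = eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * T_cur / T_i))."""
    if step < warmup_steps:
        return eta_max * (step + 1) / warmup_steps       # linear ramp up
    t_cur = step - warmup_steps                          # steps since warmup ended
    t_i = total_steps - warmup_steps                     # length of the cosine phase
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))

# Example: 10 warmup steps, then one cosine decay from 0.1 down to 0.001.
for step in range(0, 100, 10):
    print(step, round(warmup_cosine_lr(step, 10, 100, 1e-3, 1e-1), 5))
```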
https://paperswithcode.com/paper/mam-modular-multi-agent-framework-for-multi
|
2506.19835
| null | null |
MAM: Modular Multi-Agent Framework for Multi-Modal Medical Diagnosis via Role-Specialized Collaboration
|
Recent advancements in medical Large Language Models (LLMs) have showcased their powerful reasoning and diagnostic capabilities. Despite their success, current unified multimodal medical LLMs face limitations in knowledge update costs, comprehensiveness, and flexibility. To address these challenges, we introduce the Modular Multi-Agent Framework for Multi-Modal Medical Diagnosis (MAM). Inspired by our empirical findings highlighting the benefits of role assignment and diagnostic discernment in LLMs, MAM decomposes the medical diagnostic process into specialized roles: a General Practitioner, Specialist Team, Radiologist, Medical Assistant, and Director, each embodied by an LLM-based agent. This modular and collaborative framework enables efficient knowledge updates and leverages existing medical LLMs and knowledge bases. Extensive experimental evaluations conducted on a wide range of publicly accessible multimodal medical datasets, incorporating text, image, audio, and video modalities, demonstrate that MAM consistently surpasses the performance of modality-specific LLMs. Notably, MAM achieves significant performance improvements ranging from 18% to 365% compared to baseline models. Our code is released at https://github.com/yczhou001/MAM.
| null |
https://arxiv.org/abs/2506.19835v1
|
https://arxiv.org/pdf/2506.19835v1.pdf
| null |
[
"Yucheng Zhou",
"Lingran Song",
"Jianbing Shen"
] |
[
"Diagnostic",
"Medical Diagnosis"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/language-informed-synthesis-of-rational-agent
|
2506.16755
| null | null |
Language-Informed Synthesis of Rational Agent Models for Grounded Theory-of-Mind Reasoning On-The-Fly
|
Drawing real-world social inferences usually requires taking into account information from multiple modalities. Language is a particularly powerful source of information in social settings, especially in novel situations where language can provide both abstract information about the environment dynamics and concrete specifics about an agent that cannot be easily visually observed. In this paper, we propose Language-Informed Rational Agent Synthesis (LIRAS), a framework for drawing context-specific social inferences that integrate linguistic and visual inputs. LIRAS frames multimodal social reasoning as a process of constructing structured but situation-specific agent and environment representations - leveraging multimodal language models to parse language and visual inputs into unified symbolic representations, over which a Bayesian inverse planning engine can be run to produce granular probabilistic judgments. On a range of existing and new social reasoning tasks derived from cognitive science experiments, we find that our model (instantiated with a comparatively lightweight VLM) outperforms ablations and state-of-the-art models in capturing human judgments across all domains.
| null |
https://arxiv.org/abs/2506.16755v1
|
https://arxiv.org/pdf/2506.16755v1.pdf
| null |
[
"Lance Ying",
"Ryan Truong",
"Katherine M. Collins",
"Cedegao E. Zhang",
"Megan Wei",
"Tyler Brooke-Wilson",
"Tan Zhi-Xuan",
"Lionel Wong",
"Joshua B. Tenenbaum"
] |
[] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/personaagent-when-large-language-model-agents
|
2506.06254
| null | null |
PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time
|
Large Language Model (LLM) empowered agents have recently emerged as advanced paradigms that exhibit impressive capabilities in a wide range of domains and tasks. Despite their potential, current LLM agents often adopt a one-size-fits-all approach, lacking the flexibility to respond to users' varying needs and preferences. This limitation motivates us to develop PersonaAgent, the first personalized LLM agent framework designed to address versatile personalization tasks. Specifically, PersonaAgent integrates two complementary components: a personalized memory module that includes episodic and semantic memory mechanisms, and a personalized action module that enables the agent to perform tool actions tailored to the user. At the core, the persona (defined as a unique system prompt for each user) functions as an intermediary: it leverages insights from personalized memory to control agent actions, while the outcomes of these actions in turn refine the memory. Based on the framework, we propose a test-time user-preference alignment strategy that simulates the latest n interactions to optimize the persona prompt, ensuring real-time user preference alignment through textual loss feedback between simulated and ground-truth responses. Experimental evaluations demonstrate that PersonaAgent significantly outperforms other baseline methods by not only personalizing the action space effectively but also scaling at test time in real-world applications. These results underscore the feasibility and potential of our approach in delivering tailored, dynamic user experiences.
| null |
https://arxiv.org/abs/2506.06254v1
|
https://arxiv.org/pdf/2506.06254v1.pdf
| null |
[
"Weizhi Zhang",
"Xinyang Zhang",
"Chenwei Zhang",
"Liangwei Yang",
"Jingbo Shang",
"Zhepei Wei",
"Henry Peng Zou",
"Zijie Huang",
"Zhengyang Wang",
"Yifan Gao",
"Xiaoman Pan",
"Lian Xiong",
"Jingguo Liu",
"Philip S. Yu",
"Xian Li"
] |
[
"Language Modeling",
"Language Modelling",
"Large Language Model"
] | 2025-06-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "ADaptive gradient method with the OPTimal convergence rate",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "ADOPT",
"source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate",
"source_url": "https://arxiv.org/abs/2411.02853v3"
}
] |
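Based on the ADOPT record above, here is a simplified single-step sketch of the update order it describes: normalize the current gradient by the previous second-moment estimate, apply momentum, then refresh the second moment. Initialization details, bias handling, and the clipping used in the paper are omitted; treat this as an assumption-laden illustration, not a reference implementation.

```python
import numpy as np

def adopt_step(theta, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.9999, eps=1e-6):
    """One simplified ADOPT update (hyperparameter defaults are illustrative)."""
    # Normalize the current gradient by the *previous* second-moment estimate v.
    m = beta1 * m + (1 - beta1) * grad / np.maximum(np.sqrt(v), eps)
    theta = theta - lr * m                   # parameter step uses the momentum term
    v = beta2 * v + (1 - beta2) * grad**2    # second moment is updated last
    return theta, m, v

# Toy usage on a quadratic loss f(theta) = 0.5 * theta^2, so grad = theta.
theta, m, v = np.array([1.0]), np.zeros(1), np.ones(1)
for _ in range(100):
    theta, m, v = adopt_step(theta, theta.copy(), m, v)
print(theta)   # should move toward 0
```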
https://paperswithcode.com/paper/thinking-in-character-advancing-role-playing
|
2506.01748
| null | null |
Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning
|
The advancement of Large Language Models (LLMs) has spurred significant interest in Role-Playing Agents (RPAs) for applications such as emotional companionship and virtual interaction. However, recent RPAs are often built on explicit dialogue data, lacking deep, human-like internal thought processes, resulting in superficial expression of knowledge and style. While Large Reasoning Models (LRMs) can be employed to simulate character thought, their direct application is hindered by attention diversion (i.e., RPAs forget their role) and style drift (i.e., overly formal and rigid reasoning rather than character-consistent reasoning). To address these challenges, this paper introduces a novel Role-Aware Reasoning (RAR) method, which consists of two important stages: Role Identity Activation (RIA) and Reasoning Style Optimization (RSO). RIA explicitly guides the model with character profiles during reasoning to counteract attention diversion, and then RSO aligns reasoning style with the character and scene via LRM distillation to mitigate style drift. Extensive experiments demonstrate that the proposed RAR significantly enhances the performance of RPAs by effectively addressing attention diversion and style drift.
| null |
https://arxiv.org/abs/2506.01748v1
|
https://arxiv.org/pdf/2506.01748v1.pdf
| null |
[
"Yihong Tang",
"Kehai Chen",
"Muyun Yang",
"ZhengYu Niu",
"Jing Li",
"Tiejun Zhao",
"Min Zhang"
] |
[] | 2025-06-02T00:00:00 | null | null | null | null |
[] |