Dataset schema (each record below lists these fields in order, one field per line):

Column              | Type                  | Observed range
--------------------|-----------------------|----------------------------------
paper_url           | string                | length 35–81
arxiv_id            | string                | length 6–35
nips_id             | float64               | n/a
openreview_id       | string                | length 9–93
title               | string                | length 1–1.02k
abstract            | string                | length 0–56.5k
short_abstract      | string                | length 0–1.95k
url_abs             | string                | length 16–996
url_pdf             | string                | length 16–996
proceeding          | string                | length 7–1.03k
authors             | list                  | length 0–3.31k
tasks               | list                  | length 0–147
date                | timestamp[ns]         | 1951-09-01 to 2222-12-22
conference_url_abs  | string                | length 16–199
conference_url_pdf  | string                | length 21–200
conference          | string                | length 2–47
reproduces_paper    | string (categorical)  | 22 distinct values
methods             | list                  | length 0–7.5k
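For orientation, here is a minimal sketch of loading and filtering a dump with this schema. It assumes the data is published as a Hugging Face dataset; the repository path below is a placeholder, not the actual dataset name.

```python
from datasets import load_dataset

# Placeholder repository path -- substitute the actual dataset name.
ds = load_dataset("some-org/paperswithcode-papers", split="train")

# Column names should mirror the schema table above.
print(ds.column_names)

# Example: keep records dated July 2025 or later that list at least one task.
recent = ds.filter(
    lambda r: r["date"] is not None
    and str(r["date"]) >= "2025-07"
    and len(r["tasks"]) > 0
)
print(len(recent), "matching records")
```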
https://paperswithcode.com/paper/getting-dynamic-line-ratings-into-markets
2507.00826
null
null
Getting Dynamic Line Ratings into Markets
Static transmission line ratings may lead to underutilization of line capacity due to overly conservative (worst-case) assumptions. Grid-enhancing technologies (GETs) such as dynamic line ratings (DLRs), which adjust line capacity based on real-time conditions, are a techno-economically viable alternative to increase the utilization of existing power lines. Nonetheless, their adoption has been slow, partly due to the absence of operational tools that effectively account for simultaneous impacts on dispatch and pricing. In this paper, we represent transmission capacity with DLRs as a stock-like resource with time-variant interdependency, which is modeled via an approximation of the line temperature evolution process, decoupling the impacts of ambient weather conditions and power flow on transmission line temperature and thus capacity. We integrate DLRs into a multi-period DC optimal power flow problem, with chance constraints addressing correlated uncertainty in DLRs and renewable generation. This yields non-convex problems that we transform into a tractable convex form by linearization. We derive locational marginal energy and ancillary services prices consistent with a competitive equilibrium. Numerical experiments on the 11-zone and 1814-node NYISO systems demonstrate the performance of the proposed approach, including impacts on dispatch, pricing, and marginal carbon emissions.
null
https://arxiv.org/abs/2507.00826v1
https://arxiv.org/pdf/2507.00826v1.pdf
null
[ "Zhiyi Zhou", "Christoph Graf", "Yury Dvorkin" ]
[]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/theoretical-modeling-of-llm-self-improvement
2507.00075
null
null
Theoretical Modeling of LLM Self-Improvement Training Dynamics Through Solver-Verifier Gap
Self-improvement is among the most prominent techniques within the realm of large language models (LLMs), aiming to enhance LLM performance without relying on external data. Despite its significance, how LLM performance evolves during the self-improvement process remains underexplored. In this paper, we theoretically model the training dynamics of self-improvement via the concept of the solver-verifier gap. This is inspired by the conjecture that the performance enhancement of self-improvement stems from the gap between the LLM's solver capability and its verifier capability. Based on the theoretical framework, we further show how to predict the ultimate power of self-improvement using only information from the first few training epochs. We empirically validate the effectiveness of the theoretical model on various LLMs and datasets. Beyond self-improvement, we extend our analysis to investigate how external data influences these dynamics within the framework. Notably, we find that under limited external data regimes, such external data can be utilized at any stage without significantly affecting final performance, which accords with the empirical observations.
null
https://arxiv.org/abs/2507.00075v1
https://arxiv.org/pdf/2507.00075v1.pdf
null
[ "Yifan Sun", "Yushan Liang", "Zhen Zhang", "Jiaye Teng" ]
[]
2025-06-29T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/uniglyph-unified-segmentation-conditioned
2507.00992
null
null
UniGlyph: Unified Segmentation-Conditioned Diffusion for Precise Visual Text Synthesis
Text-to-image generation has greatly advanced content creation, yet accurately rendering visual text remains a key challenge due to blurred glyphs, semantic drift, and limited style control. Existing methods often rely on pre-rendered glyph images as conditions, but these struggle to retain original font styles and color cues, necessitating complex multi-branch designs that increase model overhead and reduce flexibility. To address these issues, we propose a segmentation-guided framework that uses pixel-level visual text masks -- rich in glyph shape, color, and spatial detail -- as unified conditional inputs. Our method introduces two core components: (1) a fine-tuned bilingual segmentation model for precise text mask extraction, and (2) a streamlined diffusion model augmented with adaptive glyph conditioning and a region-specific loss to preserve textual fidelity in both content and style. Our approach achieves state-of-the-art performance on the AnyText benchmark, significantly surpassing prior methods in both Chinese and English settings. To enable more rigorous evaluation, we also introduce two new benchmarks: GlyphMM-benchmark for testing layout and glyph consistency in complex typesetting, and MiniText-benchmark for assessing generation quality in small-scale text regions. Experimental results show that our model outperforms existing methods by a large margin in both scenarios, particularly excelling at small text rendering and complex layout preservation, validating its strong generalization and deployment readiness.
null
https://arxiv.org/abs/2507.00992v1
https://arxiv.org/pdf/2507.00992v1.pdf
null
[ "Yuanrui Wang", "Cong Han", "YafeiLi", "Zhipeng Jin", "Xiawei Li", "Sinan Du", "Wen Tao", "Yi Yang", "Shuanglong Li", "Chun Yuan", "Liu Lin" ]
[ "Image Generation", "Text to Image Generation", "Text-to-Image Generation" ]
2025-07-01T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/m-2-tokenizer-differentiable-multi-scale
2507.00316
null
null
$μ^2$Tokenizer: Differentiable Multi-Scale Multi-Modal Tokenizer for Radiology Report Generation
Automated radiology report generation (RRG) aims to produce detailed textual reports from clinical imaging, such as computed tomography (CT) scans, to improve the accuracy and efficiency of diagnosis and provision of management advice. RRG is complicated by two key challenges: (1) inherent complexity in extracting relevant information from imaging data under resource constraints, and (2) difficulty in objectively evaluating discrepancies between model-generated and expert-written reports. To address these challenges, we propose $\mu^2$LLM, a $\underline{\textbf{mu}}$ltiscale $\underline{\textbf{mu}}$ltimodal large language model for RRG tasks. The novel ${\mu}^2$Tokenizer, as an intermediate layer, integrates multi-modal features from the multiscale visual tokenizer and the text tokenizer, then enhances report generation quality through direct preference optimization (DPO), guided by GREEN-RedLlama. Experimental results on four large CT image-report medical datasets demonstrate that our method outperforms existing approaches, highlighting the potential of our fine-tuned $\mu^2$LLMs on limited data for RRG tasks.
null
https://arxiv.org/abs/2507.00316v1
https://arxiv.org/pdf/2507.00316v1.pdf
null
[ "Siyou Li", "Pengyao Qin", "Huanan Wu", "Dong Nie", "Arun J. Thirunavukarasu", "Juntao Yu", "Le Zhang" ]
[ "Computed Tomography (CT)" ]
2025-06-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/llava-sp-enhancing-visual-representation-with
2507.00505
null
null
LLaVA-SP: Enhancing Visual Representation with Visual Spatial Tokens for MLLMs
The architecture of multimodal large language models (MLLMs) commonly connects a vision encoder, often based on CLIP-ViT, to a large language model. While CLIP-ViT works well for capturing global image features, it struggles to model local relationships between adjacent patches, leading to weaker visual representation, which in turn affects the detailed understanding ability of MLLMs. To solve this, we propose LLaVA-SP, which \textbf{only adds six spatial visual tokens} to the original visual tokens to enhance the visual representation. Our approach offers three key advantages: 1) We propose a novel Projector, which uses convolutional kernels to derive visual spatial tokens from ViT patch features, simulating two visual spatial ordering approaches: ``from central region to global'' and ``from abstract to specific''. Then, a cross-attention mechanism is applied to fuse fine-grained visual information, enriching the overall visual representation. 2) We present two model variants: LLaVA-SP-Cropping, which focuses on detail features through progressive cropping, and LLaVA-SP-Pooling, which captures global semantics through adaptive pooling, enabling the model to handle diverse visual understanding tasks. 3) Extensive experiments show that LLaVA-SP, fine-tuned with LoRA, achieves significant performance improvements across various multimodal benchmarks, outperforming the state-of-the-art LLaVA-1.5 model in multiple tasks with nearly identical inference latency. The code and models are available at \href{https://github.com/CnFaker/LLaVA-SP}{\texttt{https://github.com/CnFaker/LLaVA-SP}}.
The architecture of multimodal large language models (MLLMs) commonly connects a vision encoder, often based on CLIP-ViT, to a large language model.
https://arxiv.org/abs/2507.00505v1
https://arxiv.org/pdf/2507.00505v1.pdf
null
[ "Haoran Lou", "Chunxiao Fan", "Ziyan Liu", "Yuexin Wu", "Xinxiang Wang" ]
[ "Large Language Model" ]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/zero-shot-skeleton-based-action-recognition-2
2507.00566
null
null
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment
Zero-shot skeleton-based action recognition aims to classify unseen skeleton-based human actions without prior exposure to such categories during training. This task is extremely challenging due to the difficulty in generalizing from known to unknown actions. Previous studies typically use two-stage training: pre-training skeleton encoders on seen action categories using cross-entropy loss and then aligning pre-extracted skeleton and text features, enabling knowledge transfer to unseen classes through skeleton-text alignment and language models' generalization. However, their efficacy is hindered by 1) insufficient discrimination for skeleton features, as the fixed skeleton encoder fails to capture necessary alignment information for effective skeleton-text alignment; 2) the neglect of alignment bias between skeleton and unseen text features during testing. To this end, we propose a prototype-guided feature alignment paradigm for zero-shot skeleton-based action recognition, termed PGFA. Specifically, we develop an end-to-end cross-modal contrastive training framework to improve skeleton-text alignment, ensuring sufficient discrimination for skeleton features. Additionally, we introduce a prototype-guided text feature alignment strategy to mitigate the adverse impact of the distribution discrepancy during testing. We provide a theoretical analysis to support our prototype-guided text feature alignment strategy and empirically evaluate our overall PGFA on three well-known datasets. Compared with the top competitor SMIE method, our PGFA achieves absolute accuracy improvements of 22.96%, 12.53%, and 18.54% on the NTU-60, NTU-120, and PKU-MMD datasets, respectively.
However, their efficacy is hindered by 1) insufficient discrimination for skeleton features, as the fixed skeleton encoder fails to capture necessary alignment information for effective skeleton-text alignment; 2) the neglect of alignment bias between skeleton and unseen text features during testing.
https://arxiv.org/abs/2507.00566v1
https://arxiv.org/pdf/2507.00566v1.pdf
null
[ "Kai Zhou", "Shuhai Zhang", "Zeng You", "Jinwu Hu", "Mingkui Tan", "Fei Liu" ]
[ "Action Recognition", "One-Shot 3D Action Recognition", "Skeleton Based Action Recognition", "Transfer Learning", "Zero Shot Skeletal Action Recognition", "Zero-shot skeleton-based action recognition" ]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/risk-averse-best-arm-set-identification-with
2506.22253
null
null
Risk-Averse Best Arm Set Identification with Fixed Budget and Fixed Confidence
Decision making in uncertain environments that maximizes expected reward while minimizing its risk is a ubiquitous problem across many fields. Here, we introduce a novel problem setting in stochastic bandit optimization that jointly addresses two critical aspects of decision-making: maximizing expected reward and minimizing associated uncertainty, quantified via the mean-variance (MV) criterion. Unlike traditional bandit formulations that focus solely on expected returns, our objective is to efficiently and accurately identify the Pareto-optimal set of arms that strikes the best trade-off between expected performance and risk. We propose a unified meta-algorithmic framework capable of operating under both fixed-confidence and fixed-budget regimes, achieved through adaptive design of confidence intervals tailored to each scenario using the same sample exploration strategy. We provide theoretical guarantees on the correctness of the returned solutions in both settings. To complement this theoretical analysis, we conduct extensive empirical evaluations across synthetic benchmarks, demonstrating that our approach outperforms existing methods in terms of both accuracy and sample efficiency, highlighting its broad applicability to risk-aware decision-making tasks in uncertain environments.
null
https://arxiv.org/abs/2506.22253v1
https://arxiv.org/pdf/2506.22253v1.pdf
null
[ "Shunta Nonaga", "Koji Tabata", "Yuta Mizuno", "Tamiki Komatsuzaki" ]
[ "Decision Making" ]
2025-06-27T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/a-unified-transformer-based-framework-with
2507.00676
null
null
A Unified Transformer-Based Framework with Pretraining For Whole Body Grasping Motion Generation
Accepted at ICIP 2025. We present a novel transformer-based framework for whole-body grasping that addresses both pose generation and motion infilling, enabling realistic and stable object interactions. Our pipeline comprises three stages: Grasp Pose Generation for full-body grasp generation, Temporal Infilling for smooth motion continuity, and a LiftUp Transformer that refines downsampled joints back to high-resolution markers. To overcome the scarcity of hand-object interaction data, we introduce a data-efficient Generalized Pretraining stage on large, diverse motion datasets, yielding robust spatio-temporal representations transferable to grasping tasks. Experiments on the GRAB dataset show that our method outperforms state-of-the-art baselines in terms of coherence, stability, and visual realism. The modular design also supports easy adaptation to other human-motion applications.
Accepted at ICIP 2025. We present a novel transformer-based framework for whole-body grasping that addresses both pose generation and motion infilling, enabling realistic and stable object interactions.
https://arxiv.org/abs/2507.00676v1
https://arxiv.org/pdf/2507.00676v1.pdf
null
[ "Edward Effendy", "Kuan-Wei Tseng", "Rei Kawakami" ]
[ "Grasp Generation", "Motion Generation" ]
2025-07-01T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" } ]
https://paperswithcode.com/paper/process-aware-and-high-fidelity
2507.00459
null
null
Process-aware and high-fidelity microstructure generation using stable diffusion
Synthesizing realistic microstructure images conditioned on processing parameters is crucial for understanding process-structure relationships in materials design. However, this task remains challenging due to limited training micrographs and the continuous nature of processing variables. To overcome these challenges, we present a novel process-aware generative modeling approach based on Stable Diffusion 3.5 Large (SD3.5-Large), a state-of-the-art text-to-image diffusion model adapted for microstructure generation. Our method introduces numeric-aware embeddings that encode continuous variables (annealing temperature, time, and magnification) directly into the model's conditioning, enabling controlled image generation under specified process conditions and capturing process-driven microstructural variations. To address data scarcity and computational constraints, we fine-tune only a small fraction of the model's weights via DreamBooth and Low-Rank Adaptation (LoRA), efficiently transferring the pre-trained model to the materials domain. We validate realism using a semantic segmentation model based on a fine-tuned U-Net with a VGG16 encoder on 24 labeled micrographs. It achieves 97.1% accuracy and 85.7% mean IoU, outperforming previous methods. Quantitative analyses using physical descriptors and spatial statistics show strong agreement between synthetic and real microstructures. Specifically, two-point correlation and lineal-path errors remain below 2.1% and 0.6%, respectively. Our method represents the first adaptation of SD3.5-Large for process-aware microstructure generation, offering a scalable approach for data-driven materials design.
null
https://arxiv.org/abs/2507.00459v1
https://arxiv.org/pdf/2507.00459v1.pdf
null
[ "Hoang Cuong Phan", "Minh Tien Tran", "Chihun Lee", "Hoheok Kim", "Sehyok Oh", "Dong-Kyu Kim", "Ho Won Lee" ]
[ "Image Generation", "Semantic Segmentation" ]
2025-07-01T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/lora-mixer-coordinate-modular-lora-experts
2507.00029
null
null
LoRA-Mixer: Coordinate Modular LoRA Experts Through Serial Attention Routing
Recent efforts to combine low-rank adaptation (LoRA) with mixture-of-experts (MoE) for adapting large language models (LLMs) to multiple tasks still exhibit prevailing limitations: they either swap entire attention/feed-forward layers for switch experts or bolt on parallel expert branches, diluting parameter efficiency and task fidelity. We propose the LoRA-Mixer, a modular and lightweight MoE framework that integrates LoRA experts. Our core innovation lies in replacing the projection matrices of the attention module's input/output linear layers with dynamically routed, task-specific LoRA experts. This design ensures seamless compatibility with diverse foundation models, including transformers and state space models (SSMs), by leveraging their inherent linear projection structures. The framework supports two operational paradigms: (1) joint optimization of LoRA experts and routing mechanisms via a novel hard-soft routing strategy, or (2) direct deployment of pre-trained, frozen LoRA modules sourced from external repositories. To enable robust router training with limited data while ensuring stable routing decisions and maximizing expert reuse, we introduce an adaptive Specialization Balance Loss (SBL) that jointly optimizes expert balance and task-specific alignment. Extensive experiments on seven benchmark datasets, including MedQA, CoLA, SST-2, GSM8K, ARC-E, ARC-C, and HumanEval, demonstrate the effectiveness of LoRA-Mixer. On datasets such as GSM8K, HumanEval, and MedQA, LoRA-Mixer achieves significant improvements of 7.61%, 4.88%, and 3.08% over the base models, respectively. Compared with state-of-the-art methods, LoRA-Mixer achieves additional improvements of 1.09%, 1.45%, and 1.68%, respectively, using only 48% of the parameters, demonstrating its efficiency and strong performance.
null
https://arxiv.org/abs/2507.00029v1
https://arxiv.org/pdf/2507.00029v1.pdf
null
[ "Wenbing Li", "Zikai Song", "Hang Zhou", "Yunyao Zhang", "Junqing Yu", "Wei Yang" ]
[ "ARC", "CoLA", "GSM8K", "HumanEval", "MedQA", "Mixture-of-Experts", "SST-2", "State Space Models" ]
2025-06-17T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Mixture of Experts", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Ensembling", "parent": null }, "name": "MoE", "source_title": "Equipping Computational Pathology Systems with Artifact Processing Pipelines: A Showcase for Computation and Performance Trade-offs", "source_url": "https://arxiv.org/abs/2403.07743v3" }, { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" } ]
https://paperswithcode.com/paper/understanding-generalization-in-node-and-link
2507.00927
null
null
Understanding Generalization in Node and Link Prediction
Using message-passing graph neural networks (MPNNs) for node and link prediction is crucial in various scientific and industrial domains, which has led to the development of diverse MPNN architectures. Although they work well in practical settings, their ability to generalize beyond the training set remains poorly understood. While some studies have explored MPNNs' generalization in graph-level prediction tasks, much less attention has been given to node- and link-level predictions. Existing works often rely on unrealistic i.i.d. assumptions, overlooking possible correlations between nodes or links, and assuming fixed aggregation and impractical loss functions while neglecting the influence of graph structure. In this work, we introduce a unified framework to analyze the generalization properties of MPNNs in inductive and transductive node and link prediction settings, incorporating diverse architectural parameters and loss functions and quantifying the influence of graph structure. Additionally, our proposed generalization framework can be applied beyond graphs to any classification task under the inductive or transductive setting. Our empirical study supports our theoretical insights, deepening our understanding of MPNNs' generalization capabilities in these tasks.
null
https://arxiv.org/abs/2507.00927v1
https://arxiv.org/pdf/2507.00927v1.pdf
null
[ "Antonis Vasileiou", "Timo Stoll", "Christopher Morris" ]
[ "Link Prediction", "Prediction" ]
2025-07-01T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "There are at least eight notable examples of models from the literature that can be described using the **Message Passing Neural Networks** (**MPNN**) framework. For simplicity we describe MPNNs which operate on undirected graphs $G$ with node features $x_{v}$ and edge features $e_{vw}$. It is trivial to extend the formalism to directed multigraphs. The forward pass has two phases, a message passing phase and a readout phase. The message passing phase runs for $T$ time steps and is defined in terms of message functions $M_{t}$ and vertex update functions $U_{t}$. During the message passing phase, hidden states $h_{v}^{t}$ at each node in the graph are updated based on messages $m_{v}^{t+1}$ according to\r\n$$\r\nm_{v}^{t+1} = \\sum_{w \\in N(v)} M_{t}(h_{v}^{t}, h_{w}^{t}, e_{vw})\r\n$$\r\n$$\r\nh_{v}^{t+1} = U_{t}(h_{v}^{t}, m_{v}^{t+1})\r\n$$\r\nwhere in the sum, $N(v)$ denotes the neighbors of $v$ in graph $G$. The readout phase computes a feature vector for the whole graph using some readout function $R$ according to\r\n$$\r\n\\hat{y} = R(\\\\{ h_{v}^{T} | v \\in G \\\\})\r\n$$\r\nThe message functions $M_{t}$, vertex update functions $U_{t}$, and readout function $R$ are all learned differentiable functions. $R$ operates on the set of node states and must be invariant to permutations of the node states in order for the MPNN to be invariant to graph isomorphism.", "full_name": "Message Passing Neural Network", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)", "name": "Graph Models", "parent": null }, "name": "MPNN", "source_title": "Neural Message Passing for Quantum Chemistry", "source_url": "http://arxiv.org/abs/1704.01212v2" }, { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/ai-approach-for-predicting-superhyrophobicity
null
null
null
AI Approach for Predicting Superhyrophobicity of Thermal Sprayed Copper Coated Aluminum Surfaces
Wettability, characterized by the contact angle of a liquid on a surface, is a critical property that influences numerous natural and industrial applications. In this study, I have developed a CNN-based model to predict the hydrophobicity or super-hydrophobicity of copper-coated aluminum surfaces treated with various reagents or etchants. The data set has been created by analyzing copper-coated aluminum samples with a 3D non-contact profilometer, and contact angle measurements were done to correlate surface properties with the resultant contact angle values. After reagent treatments, the approach was to preprocess 3D profilometer images to extract surface morphology and structure features. These images and associated contact angle measurements were used as inputs to train the CNN model to classify whether the treated surfaces are hydrophobic or super-hydrophobic. Although the model may initially have limited training accuracy, this study demonstrates the potential of deep learning to predict wettability based on surface characteristics. The results also highlight the need for improvements, such as data set expansion, the inclusion of more varied reagent treatments, and the exploration of hybrid modeling approaches to enhance the model
In this study, I have developed a CNN-based model to predict the hydrophobicity or super-hydrophobicity of copper-coated aluminum surfaces treated with various reagents or etchants.
https://www.onlinescientificresearch.com/articles/ai-approach-for-predicting-superhyrophobicity-of-thermal-sprayed-copper-coated-aluminum-surfaces.pdf
https://www.onlinescientificresearch.com/articles/ai-approach-for-predicting-superhyrophobicity-of-thermal-sprayed-copper-coated-aluminum-surfaces.pdf
Journal of Diagnosis & Case Reports 2025 5
[ "Mahule Roy" ]
[]
2025-05-27T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/seed-enhancing-text-to-sql-performance-and
2506.07423
null
null
SEED: Enhancing Text-to-SQL Performance and Practical Usability Through Automatic Evidence Generation
Text-to-SQL enables non-experts to retrieve data from databases by converting natural language queries into SQL. However, state-of-the-art text-to-SQL studies rely on the BIRD dataset, which assumes that evidence is provided along with questions. Although BIRD facilitates research advancements, it assumes that users have expertise and domain knowledge, contradicting the fundamental goal of text-to-SQL. In addition, human-generated evidence in BIRD contains defects, including missing or erroneous evidence, which affects model performance. To address this issue, we propose SEED (System for Evidence Extraction and Domain knowledge generation), an approach that automatically generates evidence to improve performance and practical usability in real-world scenarios. SEED systematically analyzes database schema, description files, and values to extract relevant information. We evaluated SEED on BIRD and Spider, demonstrating that it significantly improves SQL generation accuracy in the no-evidence scenario, and in some cases, even outperforms the setting where BIRD evidence is provided. Our results highlight that SEED-generated evidence not only bridges the gap between research and real-world deployment but also improves the adaptability and robustness of text-to-SQL models. Our code is available at https://github.com/felix01189/SEED
We evaluated SEED on BIRD and Spider, demonstrating that it significantly improves SQL generation accuracy in the no-evidence scenario, and in some cases, even outperforms the setting where BIRD evidence is provided.
https://arxiv.org/abs/2506.07423v1
https://arxiv.org/pdf/2506.07423v1.pdf
null
[ "Janghyeon Yun", "Sang-goo Lee" ]
[ "Natural Language Queries", "Text to SQL", "Text-To-SQL" ]
2025-06-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/improving-deep-learning-in-arrhythmia
null
null
null
Improving deep learning in arrhythmia Detection: The application of modular quality and quantity controllers in data augmentation
Among the most prevalent diseases with significant fatality rates are cardiac disorders. In recent years, the application of deep learning in diagnosing various cardiac conditions, namely arrhythmia, has gained widespread attention. Nevertheless, deep neural networks struggle to detect arrhythmia due to skewed datasets and a lack of data in different classes. If used effectively, data augmentation can address this gap by adding further synthetic samples in the correct distribution of the corresponding skewed classes. To achieve this, we have instituted the Modular Distribution and Volume Controller method, abbreviated as MDVC. Our method concentrates on both qualitative and quantitative aspects of data augmentation to elevate efficiency and create a significant and varied amount of synthetic samples. Seven distinct methods of data augmentation are utilized in fusion to synthesize samples. Subsequently, the distribution controller determines the most advantageous distribution of artificial samples for each data augmentation technique, emphasizing the dispersion and collision of different classes. The maximum overall data augmentation volume, the volume of each class, and the volume of each data augmentation technique are defined by the volume controller through the novel x, α, and β parameters. Classifying the 17 classes of the MIT-BIH dataset using the MDVC yielded an accuracy of 98.9 % using a 10-fold cross-validation strategy; thus, we have outperformed state-of-the-art data augmentation techniques such as RandAugment and α-trim by 1.3 % and 0.8 %, respectively.
To achieve this, we have instituted the Modular Distribution and Volume Controller method, abbreviated as MDVC.
https://www.sciencedirect.com/science/article/abs/pii/S1746809423013733
https://www.sciencedirect.com/getaccess/pii/S1746809423013733/purchase
Biomedical Signal Processing and Control 2024 5
[ "Mohammad Usef Khosravi Khaliran", "Iman Zabbah", "Mehrbod Faraji", "Reza Ebrahimpour" ]
[ "Arrhythmia Detection", "Data Augmentation" ]
2024-05-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/self-supervised-multiview-xray-matching
2507.00287
null
null
Self-Supervised Multiview Xray Matching
Accurate interpretation of multi-view radiographs is crucial for diagnosing fractures, muscular injuries, and other anomalies. While significant advances have been made in AI-based analysis of single images, current methods often struggle to establish robust correspondences between different X-ray views, an essential capability for precise clinical evaluations. In this work, we present a novel self-supervised pipeline that eliminates the need for manual annotation by automatically generating a many-to-many correspondence matrix between synthetic X-ray views. This is achieved using digitally reconstructed radiographs (DRR), which are automatically derived from unannotated CT volumes. Our approach incorporates a transformer-based training phase to accurately predict correspondences across two or more X-ray views. Furthermore, we demonstrate that learning correspondences among synthetic X-ray views can be leveraged as a pretraining strategy to enhance automatic multi-view fracture detection on real data. Extensive evaluations on both synthetic and real X-ray datasets show that incorporating correspondences improves performance in multi-view fracture classification.
Accurate interpretation of multi-view radiographs is crucial for diagnosing fractures, muscular injuries, and other anomalies.
https://arxiv.org/abs/2507.00287v1
https://arxiv.org/pdf/2507.00287v1.pdf
null
[ "Mohamad Dabboussi", "Malo Huard", "Yann Gousseau", "Pietro Gori" ]
[ "Fracture detection" ]
2025-06-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/large-language-models-don-t-make-sense-of
2506.24006
null
null
Large Language Models Don't Make Sense of Word Problems. A Scoping Review from a Mathematics Education Perspective
The progress of Large Language Models (LLMs) like ChatGPT raises the question of how they can be integrated into education. One hope is that they can support mathematics learning, including word-problem solving. Since LLMs can handle textual input with ease, they appear well-suited for solving mathematical word problems. Yet their real competence, whether they can make sense of the real-world context, and the implications for classrooms remain unclear. We conducted a scoping review from a mathematics-education perspective, including three parts: a technical overview, a systematic review of word problems used in research, and a state-of-the-art empirical evaluation of LLMs on mathematical word problems. First, in the technical overview, we contrast the conceptualization of word problems and their solution processes between LLMs and students. In computer-science research this is typically labeled mathematical reasoning, a term that does not align with usage in mathematics education. Second, our literature review of 213 studies shows that the most popular word-problem corpora are dominated by s-problems, which do not require consideration of the realities of their real-world context. Finally, our evaluation of GPT-3.5-turbo, GPT-4o-mini, GPT-4.1, and o3 on 287 word problems shows that most recent LLMs solve these s-problems with near-perfect accuracy, including a perfect score on 20 problems from PISA. LLMs still showed weaknesses in tackling problems where the real-world context is problematic or nonsensical. In sum, we argue based on all three aspects that LLMs have mastered a superficial solution process but do not make sense of word problems, which potentially limits their value as instructional tools in mathematics classrooms.
null
https://arxiv.org/abs/2506.24006v1
https://arxiv.org/pdf/2506.24006v1.pdf
null
[ "Anselm R. Strohmaier", "Wim Van Dooren", "Kathrin Seßler", "Brian Greer", "Lieven Verschaffel" ]
[ "Mathematical Reasoning" ]
2025-06-30T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "", "description": "**PrIme Sample Attention (PISA)** directs the training of object detection frameworks towards prime samples. These are samples that play a key role in driving the detection performance. The authors define Hierarchical Local Rank (HLR) as a metric of importance. Specifically, they use IoU-HLR to rank positive samples and ScoreHLR to rank negative samples in each mini-batch. This ranking strategy places the positive samples with highest IoUs around each object and the negative samples with highest scores in each cluster to the top of the ranked list and directs the focus of the training process to them via a simple re-weighting scheme. The authors also devise a classification-aware regression loss to jointly optimize the classification and regression branches. Particularly, this loss would suppress those samples with large regression loss, thus reinforcing the attention to prime samples.", "full_name": "PrIme Sample Attention", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Prioritized Sampling** methods are used in tasks like object detection to prioritize examples (e.g. hard examples) to induce better detection performance. Below you can find a continuously updating list of prioritized sampling methods.", "name": "Prioritized Sampling", "parent": "Optimization" }, "name": "PISA", "source_title": "Prime Sample Attention in Object Detection", "source_url": "https://arxiv.org/abs/1904.04821v2" }, { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. 
Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" }, { "code_snippet_url": "", "description": "**GPT-4** is a transformer based model pre-trained to predict the next token in a document.", "full_name": "GPT-4", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "GPT-4", "source_title": "GPT-4 Technical Report", "source_url": "https://arxiv.org/abs/2303.08774v5" } ]
https://paperswithcode.com/paper/geometric-gaussian-approximations-of
2507.00616
null
null
Geometric Gaussian Approximations of Probability Distributions
Approximating complex probability distributions, such as Bayesian posterior distributions, is of central interest in many applications. We study the expressivity of geometric Gaussian approximations. These consist of approximations by Gaussian pushforwards through diffeomorphisms or Riemannian exponential maps. We first review these two different kinds of geometric Gaussian approximations. Then we explore their relationship to one another. We further provide a constructive proof that such geometric Gaussian approximations are universal, in that they can capture any probability distribution. Finally, we discuss whether, given a family of probability distributions, a common diffeomorphism can be found to obtain uniformly high-quality geometric Gaussian approximations for that family.
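As a rough illustration of the pushforward idea described above (a Gaussian mapped through a diffeomorphism), the following sketch samples from a standard normal and transforms the samples through a hand-picked smooth, strictly increasing map. It is not the paper's construction; the map `diffeomorphism` and its coefficients are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffeomorphism(z, a=0.8):
    # Smooth, strictly increasing map R -> R, hence a 1-D diffeomorphism.
    return z + a * np.tanh(z) + 0.3 * z**3 / (1.0 + z**2)

# Pushforward sampling: draw Gaussian samples, then map them through the diffeomorphism.
z = rng.standard_normal(10_000)
x = diffeomorphism(z)

# The pushforward density follows the change-of-variables formula
# p_X(x) = p_Z(phi^{-1}(x)) / |phi'(phi^{-1}(x))|; here we only inspect samples.
print("pushforward mean/std:", x.mean(), x.std())
```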
null
https://arxiv.org/abs/2507.00616v1
https://arxiv.org/pdf/2507.00616v1.pdf
null
[ "Nathaël Da Costa", "Bálint Mucsányi", "Philipp Hennig" ]
[]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/llava-scissor-token-compression-with-semantic
2506.21862
null
null
LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs
In this paper, we present LLaVA-Scissor, a training-free token compression strategy designed for video multimodal large language models. Previous methods mostly attempt to compress tokens based on attention scores, but fail to effectively capture all semantic regions and often lead to token redundancy. Differently, we propose to leverage the Semantic Connected Components (SCC) approach that assigns tokens to distinct semantic regions within the token set, ensuring comprehensive semantic coverage. The outcome is a two-step spatio-temporal token compression strategy that utilizes SCC in both spatial and temporal domains. This strategy can effectively compress tokens by representing the entire video with a set of non-overlapping semantic tokens. We conduct extensive evaluations of the token compression capabilities of LLaVA-Scissor across diverse video understanding benchmarks, including video question answering, long video understanding, and comprehensive multi-choices benchmarks. Experimental results show that the proposed LLaVA-Scissor outperforms other token compression methods, achieving superior performance in various video understanding benchmarks, particularly at low token retention ratios. Project page: https://github.com/HumanMLLM/LLaVA-Scissor.
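The core step, as described, is grouping tokens into connected components of a semantic-similarity graph and keeping one representative per component. A hedged sketch of that general mechanism (not the authors' implementation; the function name `compress_tokens` and the threshold `tau` are assumptions) might look like:

```python
import torch
import torch.nn.functional as F

def compress_tokens(tokens: torch.Tensor, tau: float = 0.8) -> torch.Tensor:
    # tokens: (N, D) visual token features for one frame or one temporal group.
    normed = F.normalize(tokens, dim=-1)
    adj = (normed @ normed.T) >= tau          # cosine-similarity graph, thresholded

    # Label connected components with a simple BFS over the adjacency matrix.
    n = tokens.shape[0]
    labels = torch.full((n,), -1, dtype=torch.long)
    current = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        labels[i] = current
        frontier = [i]
        while frontier:
            j = frontier.pop()
            neighbors = torch.nonzero(adj[j] & (labels < 0)).flatten()
            labels[neighbors] = current
            frontier.extend(neighbors.tolist())
        current += 1

    # Keep one mean token per semantic connected component.
    return torch.stack([tokens[labels == c].mean(dim=0) for c in range(current)])

print(compress_tokens(torch.randn(196, 768)).shape)  # (num_components, 768)
```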
This strategy can effectively compress tokens by representing the entire video with a set of non-overlapping semantic tokens.
https://arxiv.org/abs/2506.21862v1
https://arxiv.org/pdf/2506.21862v1.pdf
null
[ "Boyuan Sun", "Jiaxing Zhao", "Xihan Wei", "Qibin Hou" ]
[ "Question Answering", "Video Question Answering", "Video Understanding" ]
2025-06-27T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/sez-harn-self-explainable-zero-shot-human
2507.00050
null
null
SEZ-HARN: Self-Explainable Zero-shot Human Activity Recognition Network
Human Activity Recognition (HAR), which uses data from Inertial Measurement Unit (IMU) sensors, has many practical applications in healthcare and assisted living environments. However, its use in real-world scenarios has been limited by the lack of comprehensive IMU-based HAR datasets that cover a wide range of activities and the lack of transparency in existing HAR models. Zero-shot HAR (ZS-HAR) overcomes the data limitations, but current models struggle to explain their decisions, making them less transparent. This paper introduces a novel IMU-based ZS-HAR model called the Self-Explainable Zero-shot Human Activity Recognition Network (SEZ-HARN). It can recognize activities not encountered during training and provide skeleton videos to explain its decision-making process. We evaluate the effectiveness of the proposed SEZ-HARN on four benchmark datasets (PAMAP2, DaLiAc, HTD-MHAD, and MHealth) and compare its performance against three state-of-the-art black-box ZS-HAR models. The experimental results demonstrate that SEZ-HARN produces realistic and understandable explanations while achieving competitive zero-shot recognition accuracy. SEZ-HARN achieves a zero-shot prediction accuracy within 3\% of the best-performing black-box model on PAMAP2 while maintaining comparable performance on the other three datasets.
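The generic zero-shot recognition step implied here is matching an IMU-window embedding against semantic embeddings of unseen activities; SEZ-HARN additionally decodes a skeleton video as its explanation. A minimal sketch of only the matching step (the encoder output, embedding size, and activity names below are placeholders, not SEZ-HARN's design):

```python
import torch
import torch.nn.functional as F

imu_embedding = torch.randn(1, 128)           # output of some IMU encoder (assumed)
activity_semantics = {                        # semantic embeddings of unseen classes
    "vacuuming": torch.randn(128),
    "rope_jumping": torch.randn(128),
    "stair_climbing": torch.randn(128),
}
names = list(activity_semantics)
sims = F.cosine_similarity(imu_embedding,
                           torch.stack([activity_semantics[n] for n in names]))
print("predicted unseen activity:", names[sims.argmax()])
```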
Human Activity Recognition (HAR), which uses data from Inertial Measurement Unit (IMU) sensors, has many practical applications in healthcare and assisted living environments.
https://arxiv.org/abs/2507.00050v1
https://arxiv.org/pdf/2507.00050v1.pdf
null
[ "Devin Y. De Silva", "Sandareka Wickramanayake", "Dulani Meedeniya", "Sanka Rasnayaka" ]
[ "Activity Recognition", "Human Activity Recognition", "Zero-Shot Learning" ]
2025-06-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/contrast-compress-learning-lightweight
2506.02571
null
null
Contrast & Compress: Learning Lightweight Embeddings for Short Trajectories
The ability to retrieve semantically and directionally similar short-range trajectories with both accuracy and efficiency is foundational for downstream applications such as motion forecasting and autonomous navigation. However, prevailing approaches often depend on computationally intensive heuristics or latent anchor representations that lack interpretability and controllability. In this work, we propose a novel framework for learning fixed-dimensional embeddings for short trajectories by leveraging a Transformer encoder trained with a contrastive triplet loss that emphasizes the importance of discriminative feature spaces for trajectory data. We analyze the influence of Cosine and FFT-based similarity metrics within the contrastive learning paradigm, with a focus on capturing the nuanced directional intent that characterizes short-term maneuvers. Our empirical evaluation on the Argoverse 2 dataset demonstrates that embeddings shaped by Cosine similarity objectives yield superior clustering of trajectories by both semantic and directional attributes, outperforming FFT-based baselines in retrieval tasks. Notably, we show that compact Transformer architectures, even with low-dimensional embeddings (e.g., 16 dimensions, but qualitatively down to 4), achieve a compelling balance between retrieval performance (minADE, minFDE) and computational overhead, aligning with the growing demand for scalable and interpretable motion priors in real-time systems. The resulting embeddings provide a compact, semantically meaningful, and efficient representation of trajectory data, offering a robust alternative to heuristic similarity measures and paving the way for more transparent and controllable motion forecasting pipelines.
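A contrastive triplet objective built on cosine similarity, as used above, can be sketched in a few lines; the margin value and batch shapes are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def cosine_triplet_loss(anchor, positive, negative, margin: float = 0.2):
    # Encourage sim(anchor, positive) to exceed sim(anchor, negative) by `margin`.
    sim_pos = F.cosine_similarity(anchor, positive, dim=-1)
    sim_neg = F.cosine_similarity(anchor, negative, dim=-1)
    return F.relu(margin + sim_neg - sim_pos).mean()

a, p, n = (torch.randn(32, 16) for _ in range(3))   # 16-dim trajectory embeddings
print(cosine_triplet_loss(a, p, n))
```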
null
https://arxiv.org/abs/2506.02571v1
https://arxiv.org/pdf/2506.02571v1.pdf
null
[ "Abhishek Vivekanandan", "Christian Hubschneider", "J. Marius Zöllner" ]
[ "Autonomous Navigation", "Contrastive Learning", "Motion Forecasting", "Retrieval", "Triplet" ]
2025-06-03T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "", "full_name": null, "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Contrastive Learning", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "The goal of **Triplet loss**, in the context of Siamese Networks, is to maximize the joint probability among all score-pairs i.e. the product of all probabilities. By using its negative logarithm, we can get the loss formulation as follows:\r\n\r\n$$\r\nL\\_{t}\\left(\\mathcal{V}\\_{p}, \\mathcal{V}\\_{n}\\right)=-\\frac{1}{M N} \\sum\\_{i}^{M} \\sum\\_{j}^{N} \\log \\operatorname{prob}\\left(v p\\_{i}, v n\\_{j}\\right)\r\n$$\r\n\r\nwhere the balance weight $1/MN$ is used to keep the loss with the same scale for different number of instance sets.", "full_name": "Triplet Loss", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.", "name": "Loss Functions", "parent": null }, "name": "Triplet Loss", "source_title": "Triplet Loss in Siamese Network for Object Tracking", "source_url": "http://openaccess.thecvf.com/content_ECCV_2018/html/Xingping_Dong_Triplet_Loss_with_ECCV_2018_paper.html" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/mambattention-mamba-with-multi-head-attention
2507.00966
null
null
MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement
With the advent of new sequence models like Mamba and xLSTM, several studies have shown that these models match or outperform state-of-the-art models in single-channel speech enhancement, automatic speech recognition, and self-supervised audio representation learning. However, prior research has demonstrated that sequence models like LSTM and Mamba tend to overfit to the training set. To address this issue, previous works have shown that adding self-attention to LSTMs substantially improves generalization performance for single-channel speech enhancement. Nevertheless, neither the concept of hybrid Mamba and time-frequency attention models nor their generalization performance have been explored for speech enhancement. In this paper, we propose a novel hybrid architecture, MambAttention, which combines Mamba and shared time- and frequency-multi-head attention modules for generalizable single-channel speech enhancement. To train our model, we introduce VoiceBank+Demand Extended (VB-DemandEx), a dataset inspired by VoiceBank+Demand but with more challenging noise types and lower signal-to-noise ratios. Trained on VB-DemandEx, our proposed MambAttention model significantly outperforms existing state-of-the-art LSTM-, xLSTM-, Mamba-, and Conformer-based systems of similar complexity across all reported metrics on two out-of-domain datasets: DNS 2020 and EARS-WHAM_v2, while matching their performance on the in-domain dataset VB-DemandEx. Ablation studies highlight the role of weight sharing between the time- and frequency-multi-head attention modules for generalization performance. Finally, we explore integrating the shared time- and frequency-multi-head attention modules with LSTM and xLSTM, which yields a notable performance improvement on the out-of-domain datasets. However, our MambAttention model remains superior on both out-of-domain datasets across all reported evaluation metrics.
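The shared time- and frequency-attention idea can be illustrated with a single multi-head attention module whose weights are reused along both axes of a (batch, channels, time, frequency) feature map. This is a hedged sketch, not the MambAttention architecture; the residual fusion at the end is an assumption.

```python
import torch
import torch.nn as nn

class SharedTFAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # One attention module, shared between the time and frequency passes.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def _attend(self, x):                       # x: (batch*, length, channels)
        out, _ = self.attn(x, x, x)
        return out

    def forward(self, x):                       # x: (B, C, T, F)
        b, c, t, f = x.shape
        # Attention over time: one sequence per frequency bin.
        xt = x.permute(0, 3, 2, 1).reshape(b * f, t, c)
        xt = self._attend(xt).reshape(b, f, t, c).permute(0, 3, 2, 1)
        # Attention over frequency: one sequence per time frame, same weights.
        xf = x.permute(0, 2, 3, 1).reshape(b * t, f, c)
        xf = self._attend(xf).reshape(b, t, f, c).permute(0, 3, 1, 2)
        return x + xt + xf                      # residual fusion (assumed)

print(SharedTFAttention(64)(torch.randn(2, 64, 100, 257)).shape)
```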
Nevertheless, neither the concept of hybrid Mamba and time-frequency attention models nor their generalization performance have been explored for speech enhancement.
https://arxiv.org/abs/2507.00966v1
https://arxiv.org/pdf/2507.00966v1.pdf
null
[ "Nikolai Lund Kühne", "Jesper Jensen", "Jan Østergaard", "Zheng-Hua Tan" ]
[ "Automatic Speech Recognition", "Mamba", "Speech Enhancement", "speech-recognition", "Speech Recognition" ]
2025-07-01T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)", "full_name": "Long Short-Term Memory", "introduced_year": 1997, "main_collection": { "area": "Sequential", "description": "", "name": "Recurrent Neural Networks", "parent": null }, "name": "LSTM", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/state-spaces/mamba", "description": "Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers’ computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pre-training and downstream evaluation.", "full_name": "Mamba: Linear-Time Sequence Modeling with Selective State Spaces", "introduced_year": 2000, "main_collection": null, "name": "Mamba", "source_title": "Mamba: Linear-Time Sequence Modeling with Selective State Spaces", "source_url": "https://arxiv.org/abs/2312.00752v2" } ]
https://paperswithcode.com/paper/interact2vec-an-efficient-neural-network
2506.22648
null
null
Interact2Vec -- An efficient neural network-based model for simultaneously learning users and items embeddings in recommender systems
Over the past decade, recommender systems have experienced a surge in popularity. Despite notable progress, they grapple with challenging issues, such as high data dimensionality and sparseness. Representing users and items as low-dimensional embeddings learned via neural networks has become a leading solution. However, while recent studies show promising results, many approaches rely on complex architectures or require content data, which may not always be available. This paper presents Interact2Vec, a novel neural network-based model that simultaneously learns distributed embeddings for users and items while demanding only implicit feedback. The model employs state-of-the-art strategies that natural language processing models commonly use to optimize the training phase and enhance the final embeddings. Two types of experiments were conducted regarding the extrinsic and intrinsic quality of the model. In the former, we benchmarked the recommendations generated by Interact2Vec's embeddings in a top-$N$ ranking problem, comparing them with six other recommender algorithms. The model achieved the second or third-best results in 30\% of the datasets, being competitive with other recommenders, and has proven to be very efficient with an average training time reduction of 274\% compared to other embedding-based models. Later, we analyzed the intrinsic quality of the embeddings through similarity tables. Our findings suggest that Interact2Vec can achieve promising results, especially on the extrinsic task, and is an excellent embedding-generator model for scenarios of scarce computing resources, enabling the learning of item and user embeddings simultaneously and efficiently.
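A word2vec-style user-item objective with implicit feedback and negative sampling, in the spirit described above, can be sketched as follows; the class name, dimensions, and number of negatives are illustrative, not Interact2Vec's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UserItemEmbedder(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        self.users = nn.Embedding(n_users, dim)
        self.items = nn.Embedding(n_items, dim)

    def forward(self, u, pos_i, neg_i):
        u_vec = self.users(u)                                    # (B, D)
        pos = (u_vec * self.items(pos_i)).sum(-1)                # (B,)
        neg = (u_vec.unsqueeze(1) * self.items(neg_i)).sum(-1)   # (B, K)
        # Negative sampling: score observed pairs high, sampled pairs low.
        return -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg).mean())

model = UserItemEmbedder(1000, 5000)
loss = model(torch.randint(0, 1000, (32,)),       # users
             torch.randint(0, 5000, (32,)),       # interacted items
             torch.randint(0, 5000, (32, 5)))     # 5 sampled negatives each
loss.backward()
print(loss.item())
```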
null
https://arxiv.org/abs/2506.22648v1
https://arxiv.org/pdf/2506.22648v1.pdf
null
[ "Pedro R. Pires", "Tiago A. Almeida" ]
[ "Efficient Neural Network", "Recommendation Systems" ]
2025-06-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/calohadronic-a-diffusion-model-for-the
2506.21720
null
null
CaloHadronic: a diffusion model for the generation of hadronic showers
Simulating showers of particles in highly-granular calorimeters is a key frontier in the application of machine learning to particle physics. Achieving high accuracy and speed with generative machine learning models can enable them to augment traditional simulations and alleviate a major computing constraint. Recent developments have shown how diffusion based generative shower simulation approaches that do not rely on a fixed structure, but instead generate geometry-independent point clouds, are very efficient. We present a transformer-based extension to previous architectures which were developed for simulating electromagnetic showers in the highly granular electromagnetic calorimeter of the International Large Detector, ILD. The attention mechanism now allows us to generate complex hadronic showers with more pronounced substructure across both the electromagnetic and hadronic calorimeters. This is the first time that machine learning methods are used to holistically generate showers across the electromagnetic and hadronic calorimeter in highly granular imaging calorimeter systems.
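A generic denoising-diffusion training step on a point cloud of calorimeter hits looks roughly like the sketch below; the toy noise schedule, the tiny transformer denoiser, and the 4-feature hit representation are placeholders, not CaloHadronic's actual model.

```python
import torch
import torch.nn as nn

class PointDenoiser(nn.Module):
    def __init__(self, point_dim: int = 4, hidden: int = 64):
        super().__init__()
        self.inp = nn.Linear(point_dim + 1, hidden)       # +1 for the timestep feature
        self.enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True), 2)
        self.out = nn.Linear(hidden, point_dim)

    def forward(self, xt, t_norm):
        t_feat = t_norm.view(-1, 1, 1).expand(-1, xt.shape[1], 1)
        return self.out(self.enc(self.inp(torch.cat([xt, t_feat], dim=-1))))

denoiser = PointDenoiser()

def diffusion_step(x0, T: int = 1000):
    t = torch.randint(1, T + 1, (x0.shape[0],)).float()
    abar = torch.cos(0.5 * torch.pi * t / T).view(-1, 1, 1) ** 2   # toy schedule
    noise = torch.randn_like(x0)
    xt = abar.sqrt() * x0 + (1 - abar).sqrt() * noise
    return nn.functional.mse_loss(denoiser(xt, t / T), noise)      # predict the noise

print(diffusion_step(torch.randn(8, 500, 4)))   # 8 showers, 500 hits, 4 features each
```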
This is the first time that machine learning methods are used to holistically generate showers across the electromagnetic and hadronic calorimeter in highly granular imaging calorimeter systems.
https://arxiv.org/abs/2506.21720v1
https://arxiv.org/pdf/2506.21720v1.pdf
null
[ "Thorsten Buss", "Frank Gaede", "Gregor Kasieczka", "Anatolii Korol", "Katja Krüger", "Peter McKeown", "Martina Mozzanica" ]
[]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/potential-customer-lifetime-value-in
2506.22711
null
null
Potential Customer Lifetime Value in Financial Institutions: The Usage Of Open Banking Data to Improve CLV Estimation
Financial institutions increasingly adopt customer-centric strategies to enhance profitability and build long-term relationships. While Customer Lifetime Value (CLV) is a core metric, its calculations often rely solely on single-entity data, missing insights from customer activities across multiple firms. This study introduces the Potential Customer Lifetime Value (PCLV) framework, leveraging Open Banking (OB) data to estimate customer value comprehensively. We predict retention probability and estimate Potential Contribution Margins (PCM) from competitor data, enabling PCLV calculation. Results show that OB data can be used to estimate PCLV per competitor, indicating a potential upside of 21.06% over the Actual CLV. PCLV offers a strategic tool for managers to strengthen competitiveness by leveraging OB data and boost profitability by driving marketing efforts at the individual customer level to increase the Actual CLV.
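The kind of calculation this framework implies can be illustrated with a retention-discounted sum of contribution margins; every number below (horizon, margins, retention, discount rate) is invented for illustration and does not reproduce the paper's 21.06% figure.

```python
def clv(margins, retention, discount_rate):
    # Sum of margins weighted by survival probability and discounted to present value.
    return sum(m * (retention ** t) / ((1 + discount_rate) ** t)
               for t, m in enumerate(margins, start=1))

actual_cm    = [120.0, 125.0, 130.0]   # margins projected at the focal institution
potential_cm = [40.0, 42.0, 45.0]      # margins estimated from Open Banking competitor data

actual_clv = clv(actual_cm, retention=0.9, discount_rate=0.05)
pclv = actual_clv + clv(potential_cm, retention=0.9, discount_rate=0.05)
print(round(actual_clv, 2), round(pclv, 2), f"upside {pclv / actual_clv - 1:.1%}")
```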
null
https://arxiv.org/abs/2506.22711v1
https://arxiv.org/pdf/2506.22711v1.pdf
null
[ "João B. G. de Brito", "Rodrigo Heldt", "Cleo S. Silveira", "Matthias Bogaert", "Guilherme B. Bucco", "Fernando B. Luce", "João L. Becker", "Filipe J. Zabala", "Michel J. Anzanello" ]
[ "Marketing" ]
2025-06-28T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "ADaptive gradient method with the OPTimal convergence rate", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "ADOPT", "source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate", "source_url": "https://arxiv.org/abs/2411.02853v3" } ]
https://paperswithcode.com/paper/integrating-traditional-and-deep-learning
2507.01502
null
null
Integrating Traditional and Deep Learning Methods to Detect Tree Crowns in Satellite Images
Global warming, loss of biodiversity, and air pollution are among the most significant problems facing Earth. One of the primary challenges in addressing these issues is the lack of forest monitoring needed to protect them. To tackle this problem, it is important to leverage remote sensing and computer vision methods to automate monitoring applications. Hence, automatic tree crown detection algorithms based on traditional and deep learning methods have emerged. In this study, we first introduce two different tree crown detection methods based on these approaches. Then, we form a novel rule-based approach that integrates these two methods to enhance the robustness and accuracy of tree crown detection results. While traditional methods are employed for feature extraction and segmentation of forested areas, deep learning methods are used to detect tree crowns in our method. With the proposed rule-based approach, we post-process these results, aiming to increase the number of detected tree crowns through neighboring trees and localized operations. We compare the obtained results with the proposed method in terms of the number of detected tree crowns and report the advantages, disadvantages, and areas for improvement of the obtained outcomes.
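One way such a rule-based merge could work (the abstract does not spell out the exact rules, so the IoU threshold and forest-mask check below are assumptions) is to start from the deep-learning detections and add non-overlapping traditional-method detections inside forested areas:

```python
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def merge_detections(dl_boxes, trad_boxes, in_forest, iou_thresh=0.3):
    merged = list(dl_boxes)                       # trust deep-learning crowns first
    for box in trad_boxes:
        if in_forest(box) and all(iou(box, kept) < iou_thresh for kept in merged):
            merged.append(box)                    # add crowns only the traditional method found
    return merged

dl = [(10, 10, 30, 30)]
trad = [(12, 12, 28, 28), (50, 50, 70, 70)]
print(merge_detections(dl, trad, in_forest=lambda b: True))   # keeps the non-overlapping crown
```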
We compare the obtained results with the proposed method in terms of the number of detected tree crowns and report the advantages, disadvantages, and areas for improvement of the obtained outcomes.
https://arxiv.org/abs/2507.01502v1
https://arxiv.org/pdf/2507.01502v1.pdf
null
[ "Ozan Durgut", "Beril Kallfelz-Sirmacek", "Cem Unsalan" ]
[]
2025-07-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ac-dit-adaptive-coordination-diffusion
2507.01961
null
null
AC-DiT: Adaptive Coordination Diffusion Transformer for Mobile Manipulation
Recently, mobile manipulation has attracted increasing attention for enabling language-conditioned robotic control in household tasks. However, existing methods still face challenges in coordinating mobile base and manipulator, primarily due to two limitations. On the one hand, they fail to explicitly model the influence of the mobile base on manipulator control, which easily leads to error accumulation under high degrees of freedom. On the other hand, they treat the entire mobile manipulation process with the same visual observation modality (e.g., either all 2D or all 3D), overlooking the distinct multimodal perception requirements at different stages during mobile manipulation. To address this, we propose the Adaptive Coordination Diffusion Transformer (AC-DiT), which enhances mobile base and manipulator coordination for end-to-end mobile manipulation. First, since the motion of the mobile base directly influences the manipulator's actions, we introduce a mobility-to-body conditioning mechanism that guides the model to first extract base motion representations, which are then used as context prior for predicting whole-body actions. This enables whole-body control that accounts for the potential impact of the mobile base's motion. Second, to meet the perception requirements at different stages of mobile manipulation, we design a perception-aware multimodal conditioning strategy that dynamically adjusts the fusion weights between various 2D visual images and 3D point clouds, yielding visual features tailored to the current perceptual needs. This allows the model to, for example, adaptively rely more on 2D inputs when semantic information is crucial for action prediction, while placing greater emphasis on 3D geometric information when precise spatial understanding is required. We validate AC-DiT through extensive experiments on both simulated and real-world mobile manipulation tasks.
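The perception-aware conditioning described above amounts to predicting fusion weights over the 2D and 3D feature streams; a hedged sketch of such a gate (not AC-DiT's actual module, and the state embedding it conditions on is an assumption) is:

```python
import torch
import torch.nn as nn

class PerceptionAwareFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 2)      # one logit per modality (2D image, 3D points)

    def forward(self, feat_2d, feat_3d, state):
        w = torch.softmax(self.gate(state), dim=-1)        # (B, 2) dynamic fusion weights
        return w[:, :1] * feat_2d + w[:, 1:] * feat_3d     # (B, D) fused feature

fusion = PerceptionAwareFusion(256)
out = fusion(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 256))
print(out.shape)
```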
null
https://arxiv.org/abs/2507.01961v1
https://arxiv.org/pdf/2507.01961v1.pdf
null
[ "Sixiang Chen", "Jiaming Liu", "Siyuan Qian", "Han Jiang", "Lily Li", "Renrui Zhang", "Zhuoyang Liu", "Chenyang Gu", "Chengkai Hou", "Pengwei Wang", "Zhongyuan Wang", "Shanghang Zhang" ]
[]
2025-07-02T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" }, { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" } ]
https://paperswithcode.com/paper/hitchhiking-rides-dataset-two-decades-of
2506.21946
null
null
Hitchhiking Rides Dataset: Two decades of crowd-sourced records on stochastic traveling
Hitchhiking, a spontaneous and decentralized mode of travel, has long eluded systematic study due to its informal nature. This paper presents and analyzes the largest known structured dataset of hitchhiking rides, comprising over 63,000 entries collected over nearly two decades through platforms associated with hitchwiki.org and lately on hitchmap.com. By leveraging crowd-sourced contributions, the dataset captures key spatiotemporal and strategic aspects of hitchhiking. This work documents the dataset's origins, evolution, and community-driven maintenance, highlighting its Europe-centric distribution, seasonal patterns, and reliance on a small number of highly active contributors. Through exploratory analyses, I examine waiting times, user behavior, and comment metadata, shedding light on the lived realities of hitchhikers. While the dataset has inherent biases and limitations - such as demographic skew and unverifiable entries - it offers a rare and valuable window into an alternative form of mobility. I conclude by outlining future directions for enriching the dataset and advancing research on hitchhiking as both a transportation practice and cultural phenomenon.
Hitchhiking, a spontaneous and decentralized mode of travel, has long eluded systematic study due to its informal nature.
https://arxiv.org/abs/2506.21946v1
https://arxiv.org/pdf/2506.21946v1.pdf
null
[ "Till Wenke" ]
[]
2025-06-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-condition-number-as-a-scale-invariant
null
null
null
The Condition Number as a Scale-Invariant Proxy for Information Encoding in Neural Units
This paper explores the relationship between the condition number of a neural network's weight tensor and the extent of information encoded by the associated processing unit, viewed through the lens of information theory. We argue that a high condition number, though not sufficient for effective knowledge encoding, may indicate that the unit has learned to selectively amplify and compress information. We formalize this intuition, particularly for linear units with Gaussian inputs, linking the condition number and the transformation's log-volume scaling factor to the characteristics of the output entropy and the geometric properties of the learned transformation. Our analysis demonstrates that for a fixed weight norm, a concentrated distribution of singular values (high condition number) corresponds to reduced overall information transfer, indicating a specialized and efficient encoding strategy. Furthermore, we present a practical case study where these principles are applied to guide selective fine-tuning of a multimodal Large Language Model, aiming to mitigate catastrophic forgetting during cross-modal adaptation. Unlike many existing catastrophic forgetting mitigation methods that rely on access to pre-training statistics, which are often unavailable, our selective fine-tuning approach offers a way to bypass this common requirement.
This paper explores the relationship between the condition number of a neural network's weight tensor and the extent of information encoded by the associated processing unit, viewed through the lens of information theory.
https://arxiv.org/abs/2506.16289
https://arxiv.org/pdf/2506.16289
null
[ "Oswaldo Ludwig" ]
[ "Large Language Model", "Multimodal Large Language Model" ]
2025-06-19T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Singular Value Decomposition Parameterization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "SVD Parameterization", "source_title": "Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization", "source_url": "http://arxiv.org/abs/1803.09327v1" } ]
https://paperswithcode.com/paper/rlhgnn-reinforcement-learning-driven
2507.02690
null
null
RLHGNN: Reinforcement Learning-driven Heterogeneous Graph Neural Network for Next Activity Prediction in Business Processes
Next activity prediction represents a fundamental challenge for optimizing business processes in service-oriented architectures such as microservices environments, distributed enterprise systems, and cloud-native platforms, which enables proactive resource allocation and dynamic service composition. Despite the prevalence of sequence-based methods, these approaches fail to capture non-sequential relationships that arise from parallel executions and conditional dependencies. Even though graph-based approaches address structural preservation, they suffer from homogeneous representations and static structures that apply uniform modeling strategies regardless of individual process complexity characteristics. To address these limitations, we introduce RLHGNN, a novel framework that transforms event logs into heterogeneous process graphs with three distinct edge types grounded in established process mining theory. Our approach creates four flexible graph structures by selectively combining these edges to accommodate different process complexities, and employs reinforcement learning formulated as a Markov Decision Process to automatically determine the optimal graph structure for each specific process instance. RLHGNN then applies heterogeneous graph convolution with relation-specific aggregation strategies to effectively predict the next activity. This adaptive methodology enables precise modeling of both sequential and non-sequential relationships in service interactions. Comprehensive evaluation on six real-world datasets demonstrates that RLHGNN consistently outperforms state-of-the-art approaches. Furthermore, it maintains an inference latency of approximately 1 ms per prediction, representing a highly practical solution suitable for real-time business process monitoring applications. The source code is available at https://github.com/Joker3993/RLHGNN.
Next activity prediction represents a fundamental challenge for optimizing business processes in service-oriented architectures such as microservices environments, distributed enterprise systems, and cloud-native platforms, which enables proactive resource allocation and dynamic service composition.
https://arxiv.org/abs/2507.02690v1
https://arxiv.org/pdf/2507.02690v1.pdf
null
[ "Jiaxing Wang", "Yifeng Yu", "Jiahan Song", "Bin Cao", "Jing Fan", "Ji Zhang" ]
[ "Activity Prediction", "Graph Neural Network", "Service Composition" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/finai-bert-a-transformer-based-model-for
2507.01991
null
null
FinAI-BERT: A Transformer-Based Model for Sentence-Level Detection of AI Disclosures in Financial Reports
The proliferation of artificial intelligence (AI) in financial services has prompted growing demand for tools that can systematically detect AI-related disclosures in corporate filings. While prior approaches often rely on keyword expansion or document-level classification, they fall short in granularity, interpretability, and robustness. This study introduces FinAI-BERT, a domain-adapted transformer-based language model designed to classify AI-related content at the sentence level within financial texts. The model was fine-tuned on a manually curated and balanced dataset of 1,586 sentences drawn from 669 annual reports of U.S. banks (2015 to 2023). FinAI-BERT achieved near-perfect classification performance (accuracy of 99.37 percent, F1 score of 0.993), outperforming traditional baselines such as Logistic Regression, Naive Bayes, Random Forest, and XGBoost. Interpretability was ensured through SHAP-based token attribution, while bias analysis and robustness checks confirmed the model's stability across sentence lengths, adversarial inputs, and temporal samples. Theoretically, the study advances financial NLP by operationalizing fine-grained, theme-specific classification using transformer architectures. Practically, it offers a scalable, transparent solution for analysts, regulators, and scholars seeking to monitor the diffusion and framing of AI across financial institutions.
The proliferation of artificial intelligence (AI) in financial services has prompted growing demand for tools that can systematically detect AI-related disclosures in corporate filings.
https://arxiv.org/abs/2507.01991v1
https://arxiv.org/pdf/2507.01991v1.pdf
null
[ "Muhammad Bilal Zafar" ]
[ "Sentence" ]
2025-06-29T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" }, { "code_snippet_url": null, "description": "**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.\r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)\r\n\r\nImage: [Michaelg2015](https://commons.wikimedia.org/wiki/User:Michaelg2015)", "full_name": "Logistic Regression", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.", "name": "Generalized Linear Models", "parent": null }, "name": "Logistic Regression", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/mac-lookup-multi-axis-conditional-lookup
2507.02270
null
null
MAC-Lookup: Multi-Axis Conditional Lookup Model for Underwater Image Enhancement
Enhancing underwater images is crucial for exploration. These images face visibility and color issues due to light changes, water turbidity, and bubbles. Traditional prior-based methods and pixel-based methods often fail, while deep learning lacks sufficient high-quality datasets. We introduce the Multi-Axis Conditional Lookup (MAC-Lookup) model, which enhances visual quality by improving color accuracy, sharpness, and contrast. It includes Conditional 3D Lookup Table Color Correction (CLTCC) for preliminary color and quality correction and Multi-Axis Adaptive Enhancement (MAAE) for detail refinement. This model prevents over-enhancement and saturation while handling underwater challenges. Extensive experiments show that MAC-Lookup excels in enhancing underwater images by restoring details and colors better than existing methods. The code is https://github.com/onlycatdoraemon/MAC-Lookup.
Enhancing underwater images is crucial for exploration.
https://arxiv.org/abs/2507.02270v1
https://arxiv.org/pdf/2507.02270v1.pdf
null
[ "Fanghai Yi", "Zehong Zheng", "Zexiao Liang", "Yihang Dong", "Xiyang Fang", "Wangyu Wu", "Xuhang Chen" ]
[ "Image Enhancement" ]
2025-07-03T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dnn-based-precoding-in-ris-aided-mmwave-mimo
2507.02824
null
null
DNN-Based Precoding in RIS-Aided mmWave MIMO Systems With Practical Phase Shift
In this paper, the precoding design is investigated for maximizing the throughput of millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems with obstructed direct communication paths. In particular, a reconfigurable intelligent surface (RIS) is employed to enhance MIMO transmissions, considering mmWave characteristics related to line-of-sight (LoS) and multipath effects. The traditional exhaustive search (ES) for optimal codewords in the continuous phase shift is computationally intensive and time-consuming. To reduce computational complexity, permuted discrete Fourier transform (DFT) vectors are used for finding codebook design, incorporating amplitude responses for practical or ideal RIS systems. However, even if the discrete phase shift is adopted in the ES, it results in significant computation and is time-consuming. Instead, the trained deep neural network (DNN) is developed to facilitate faster codeword selection. Simulation results show that the DNN maintains sub-optimal spectral efficiency even as the distance between the end-user and the RIS has variations in the testing phase. These results highlight the potential of DNN in advancing RIS-aided systems.
null
https://arxiv.org/abs/2507.02824v1
https://arxiv.org/pdf/2507.02824v1.pdf
null
[ "Po-Heng Chou", "Ching-Wen Chen", "Wan-Jen Huang", "Walid Saad", "Yu Tsao", "Ronald Y. Chang" ]
[]
2025-07-03T00:00:00
null
null
null
null
[]
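The precoding paper above replaces an exhaustive search over permuted DFT codewords with a trained DNN. A small sketch of generating a unit-modulus DFT codebook and scoring its columns is given below; the scoring function is only a placeholder for the actual throughput evaluation:

```python
import numpy as np

def dft_codebook(n):
    # Column k is the k-th DFT beam; entries have unit modulus before scaling.
    idx = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)

F = dft_codebook(16)                                  # 16 candidate RIS phase-shift vectors
scores = [np.abs(F[:, k].sum()) for k in range(16)]   # placeholder metric, not actual throughput
print("selected codeword:", int(np.argmax(scores)))
```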
https://paperswithcode.com/paper/knowledge-protocol-engineering-a-new-paradigm
2507.02760
null
null
Knowledge Protocol Engineering: A New Paradigm for AI in Domain-Specific Knowledge Work
The capabilities of Large Language Models (LLMs) have opened new frontiers for interacting with complex, domain-specific knowledge. However, prevailing methods like Retrieval-Augmented Generation (RAG) and general-purpose Agentic AI, while powerful, often struggle with tasks that demand deep, procedural, and methodological reasoning inherent to expert domains. RAG provides factual context but fails to convey logical frameworks; autonomous agents can be inefficient and unpredictable without domain-specific heuristics. To bridge this gap, we introduce Knowledge Protocol Engineering (KPE), a new paradigm focused on systematically translating human expert knowledge, often expressed in natural language documents, into a machine-executable Knowledge Protocol (KP). KPE shifts the focus from merely augmenting LLMs with fragmented information to endowing them with a domain's intrinsic logic, operational strategies, and methodological principles. We argue that a well-engineered Knowledge Protocol allows a generalist LLM to function as a specialist, capable of decomposing abstract queries and executing complex, multi-step tasks. This position paper defines the core principles of KPE, differentiates it from related concepts, and illustrates its potential applicability across diverse fields such as law and bioinformatics, positing it as a foundational methodology for the future of human-AI collaboration.
null
https://arxiv.org/abs/2507.02760v1
https://arxiv.org/pdf/2507.02760v1.pdf
null
[ "Guangwei Zhang" ]
[ "RAG", "Retrieval-augmented Generation" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.", "full_name": "Linear Warmup With Linear Decay", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.", "name": "Learning Rate Schedules", "parent": null }, "name": "Linear Warmup With Linear Decay", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!", "full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. 
For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Refunds@Expedia|||How do I get a full refund from Expedia?", "source_title": "Gaussian Error Linear Units (GELUs)", "source_url": "https://arxiv.org/abs/1606.08415v5" }, { "code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271", "description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$", "full_name": "Attention Dropout", "introduced_year": 2018, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Attention Dropout", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. 
Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. 
$w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "https://github.com/google-research/bert", "description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.", "full_name": "BERT", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "BERT", "source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "source_url": "https://arxiv.org/abs/1810.04805v2" }, { "code_snippet_url": null, "description": "**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture. It uses a standard seq2seq/NMT architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). 
This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).", "full_name": "BART", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "BART", "source_title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", "source_url": "https://arxiv.org/abs/1910.13461v1" }, { "code_snippet_url": null, "description": "", "full_name": "Keypoint Pose Encoding", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Pose Estimation Blocks", "parent": null }, "name": "KPE", "source_title": "KPE: Keypoint Pose Encoding for Transformer-based Image Generation", "source_url": "https://arxiv.org/abs/2203.04907v2" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" }, { "code_snippet_url": "", "description": "**Retriever-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For query $x$, Maximum Inner Product Search (MIPS) is used to find the top-K documents $z\\_{i}$. For final prediction $y$, we treat $z$ as a latent variable and marginalize over seq2seq predictions given different documents.", "full_name": "RAG", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "RAG", "source_title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "source_url": "https://arxiv.org/abs/2005.11401v4" } ]
https://paperswithcode.com/paper/perception-oriented-latent-coding-for-high
2507.01608
null
null
Perception-Oriented Latent Coding for High-Performance Compressed Domain Semantic Inference
In recent years, compressed domain semantic inference has primarily relied on learned image coding models optimized for mean squared error (MSE). However, MSE-oriented optimization tends to yield latent spaces with limited semantic richness, which hinders effective semantic inference in downstream tasks. Moreover, achieving high performance with these models often requires fine-tuning the entire vision model, which is computationally intensive, especially for large models. To address these problems, we introduce Perception-Oriented Latent Coding (POLC), an approach that enriches the semantic content of latent features for high-performance compressed domain semantic inference. With the semantically rich latent space, POLC requires only a plug-and-play adapter for fine-tuning, significantly reducing the parameter count compared to previous MSE-oriented methods. Experimental results demonstrate that POLC achieves rate-perception performance comparable to state-of-the-art generative image coding methods while markedly enhancing performance in vision tasks, with minimal fine-tuning overhead. Code is available at https://github.com/NJUVISION/POLC.
To address these problems, we introduce Perception-Oriented Latent Coding (POLC), an approach that enriches the semantic content of latent features for high-performance compressed domain semantic inference.
https://arxiv.org/abs/2507.01608v1
https://arxiv.org/pdf/2507.01608v1.pdf
null
[ "Xu Zhang", "Ming Lu", "Yan Chen", "Zhan Ma" ]
[ "Image Classification", "Image Compression", "Semantic Segmentation" ]
2025-07-02T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Adapter", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Adapter", "source_title": "Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing", "source_url": "https://arxiv.org/abs/2101.03289v5" } ]
https://paperswithcode.com/paper/anyi2v-animating-any-conditional-image-with
2507.02857
null
null
AnyI2V: Animating Any Conditional Image with Motion Control
Recent advancements in video generation, particularly in diffusion models, have driven notable progress in text-to-video (T2V) and image-to-video (I2V) synthesis. However, challenges remain in effectively integrating dynamic motion signals and flexible spatial constraints. Existing T2V methods typically rely on text prompts, which inherently lack precise control over the spatial layout of generated content. In contrast, I2V methods are limited by their dependence on real images, which restricts the editability of the synthesized content. Although some methods incorporate ControlNet to introduce image-based conditioning, they often lack explicit motion control and require computationally expensive training. To address these limitations, we propose AnyI2V, a training-free framework that animates any conditional images with user-defined motion trajectories. AnyI2V supports a broader range of modalities as the conditional image, including data types such as meshes and point clouds that are not supported by ControlNet, enabling more flexible and versatile video generation. Additionally, it supports mixed conditional inputs and enables style transfer and editing via LoRA and text prompts. Extensive experiments demonstrate that the proposed AnyI2V achieves superior performance and provides a new perspective in spatial- and motion-controlled video generation. Code is available at https://henghuiding.com/AnyI2V/.
null
https://arxiv.org/abs/2507.02857v1
https://arxiv.org/pdf/2507.02857v1.pdf
null
[ "Ziye Li", "Hao Luo", "Xincheng Shuai", "Henghui Ding" ]
[ "Style Transfer", "Video Generation" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/continual-gradient-low-rank-projection-fine
2507.02503
null
null
Continual Gradient Low-Rank Projection Fine-Tuning for LLMs
Continual fine-tuning of Large Language Models (LLMs) is hampered by the trade-off between efficiency and expressiveness. Low-Rank Adaptation (LoRA) offers efficiency but constrains the model's ability to learn new tasks and transfer knowledge due to its low-rank nature and reliance on explicit parameter constraints. We propose GORP (Gradient LOw Rank Projection) for Continual Learning, a novel training strategy that overcomes these limitations by synergistically combining full and low-rank parameters and jointly updating within a unified low-rank gradient subspace. GORP expands the optimization space while preserving efficiency and mitigating catastrophic forgetting. Extensive experiments on continual learning benchmarks demonstrate GORP's superior performance compared to existing state-of-the-art approaches. Code is available at https://github.com/Wcxwcxw/GORP.
Continual fine-tuning of Large Language Models (LLMs) is hampered by the trade-off between efficiency and expressiveness.
https://arxiv.org/abs/2507.02503v1
https://arxiv.org/pdf/2507.02503v1.pdf
null
[ "Chenxu Wang", "Yilin Lyu", "Zicheng Sun", "Liping Jing" ]
[ "Continual Learning" ]
2025-07-03T00:00:00
null
null
null
null
[]
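GORP's abstract describes jointly updating full and low-rank parameters within a unified low-rank gradient subspace. The sketch below illustrates only the basic building block of projecting a gradient matrix onto its top singular directions; the method's actual subspace construction and continual-learning machinery are not reproduced:

```python
import torch

def project_to_low_rank_subspace(grad, rank=8):
    # Orthonormal basis from the top-`rank` left singular vectors of the gradient.
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]
    return P @ (P.T @ grad)     # component of the gradient inside the subspace

g = torch.randn(512, 512)
g_proj = project_to_low_rank_subspace(g)
print(torch.linalg.matrix_rank(g_proj).item())   # at most 8
```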
https://paperswithcode.com/paper/early-signs-of-steganographic-capabilities-in
2507.02737
null
null
Early Signs of Steganographic Capabilities in Frontier LLMs
Monitoring Large Language Model (LLM) outputs is crucial for mitigating risks from misuse and misalignment. However, LLMs could evade monitoring through steganography: Encoding hidden information within seemingly benign generations. In this paper, we evaluate the steganography capabilities in frontier LLMs to better understand the risk they pose. We focus on two types of steganography: passing encoded messages and performing encoded reasoning. We find that current models are unable to encode short messages in their outputs without a monitor noticing under standard affordances. They can succeed, however, if given additional affordances such as using an unmonitored scratchpad and coordinating on what encoding scheme to use. We additionally find early signs that models can perform basic encoded reasoning in a simple state-tracking problem. This includes some ability to reason with their own and pre-defined schemes, including encoding schemes such as Hexadecimal. Despite this, they can rarely hide reasoning subtly within a cover task to fool a monitor. Overall, our results indicate that current LLMs exhibit nascent steganographic capabilities. While these capabilities are likely insufficient to bypass well-designed monitors at present, this could change in the future.
We additionally find early signs that models can perform basic encoded reasoning in a simple state-tracking problem.
https://arxiv.org/abs/2507.02737v1
https://arxiv.org/pdf/2507.02737v1.pdf
null
[ "Artur Zolkowski", "Kei Nishimura-Gasparian", "Robert McCarthy", "Roland S. Zimmermann", "David Lindner" ]
[ "Large Language Model" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/instant-particle-size-distribution
2507.00822
null
null
Instant Particle Size Distribution Measurement Using CNNs Trained on Synthetic Data
Accurate particle size distribution (PSD) measurement is important in industries such as mining, pharmaceuticals, and fertilizer manufacturing, significantly influencing product quality and operational efficiency. Traditional PSD methods like sieve analysis and laser diffraction are manual, time-consuming, and limited by particle overlap. Recent developments in convolutional neural networks (CNNs) enable automated, real-time PSD estimation directly from particle images. In this work, we present a CNN-based methodology trained on realistic synthetic particle imagery generated using Blender's advanced rendering capabilities. Synthetic data sets using this method can replicate various industrial scenarios by systematically varying particle shapes, textures, lighting, and spatial arrangements that closely resemble the actual configurations. We evaluated three CNN-based architectures, ResNet-50, InceptionV3, and EfficientNet-B0, for predicting critical PSD parameters (d10, d50, d90). Results demonstrated comparable accuracy across models, with EfficientNet-B0 achieving the best computational efficiency suitable for real-time industrial deployment. This approach shows the effectiveness of realistic synthetic data for robust CNN training, which offers significant potential for automated industrial PSD monitoring. The code is released at : https://github.com/YasserElj/Synthetic-Granular-Gen
Accurate particle size distribution (PSD) measurement is important in industries such as mining, pharmaceuticals, and fertilizer manufacturing, significantly influencing product quality and operational efficiency.
https://arxiv.org/abs/2507.00822v1
https://arxiv.org/pdf/2507.00822v1.pdf
null
[ "Yasser El Jarida", "Youssef Iraqi", "Loubna Mekouar" ]
[ "Computational Efficiency" ]
2025-07-01T00:00:00
null
null
null
null
[]
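The particle size distribution paper predicts the scalar parameters d10, d50, and d90 from images with CNN backbones such as EfficientNet-B0. A sketch of swapping the torchvision classifier for a three-output regression head is given below, with untrained weights and random tensors standing in for the synthetic renders:

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.efficientnet_b0(weights=None)        # randomly initialized for the sketch
in_features = backbone.classifier[1].in_features
backbone.classifier[1] = nn.Linear(in_features, 3)     # outputs: d10, d50, d90

images = torch.randn(4, 3, 224, 224)                   # stand-in for rendered particle images
psd = backbone(images)
print(psd.shape)                                       # torch.Size([4, 3])
```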
https://paperswithcode.com/paper/core-reid-v2-advancing-the-domain-adaptation
null
null
null
CORE-ReID V2: Advancing the Domain Adaptation for Object Re-Identification with Optimized Training and Ensemble Fusion
This study presents CORE-ReID V2, an enhanced framework built upon CORE-ReID V1. The new framework extends its predecessor by addressing unsupervised domain adaptation (UDA) challenges in person ReID and vehicle ReID, with further applicability to object ReID. During pre-training, CycleGAN is employed to synthesize diverse data, bridging image characteristic gaps across different domains. In the fine-tuning, an advanced ensemble fusion mechanism, consisting of the Efficient Channel Attention Block (ECAB) and the Simplified Efficient Channel Attention Block (SECAB), enhances both local and global feature representations while reducing ambiguity in pseudo-labels for target samples. Experimental results on widely used UDA person ReID and vehicle ReID datasets demonstrate that the proposed framework outperforms state-of-the-art methods, achieving top performance in mean average precision (mAP) and Rank-k Accuracy (Top-1, Top-5, Top-10). Moreover, the framework supports lightweight backbones such as ResNet18 and ResNet34, ensuring both scalability and efficiency. Our work not only pushes the boundaries of UDA-based object ReID but also provides a solid foundation for further research and advancements in this domain.
The new framework extends its predecessor by addressing unsupervised domain adaptation (UDA) challenges in person ReID and vehicle ReID, with further applicability to object ReID.
https://www.mdpi.com/3042-5999/1/1/4
https://www.mdpi.com/3042-5999/1/1/4/pdf
AI Sensors 2025 7
[ "Nguyen T.Q.", "Prima O.D.A.", "Irfan S.A.", "Purnomo H.D.", "Tanone R." ]
[ "Domain Adaptation", "Person Re-Identification", "Unsupervised Domain Adaptation", "Vehicle Re-Identification" ]
2025-07-04T00:00:00
null
null
null
null
[]
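CORE-ReID V2's ensemble fusion uses an Efficient Channel Attention Block (ECAB). The paper's exact block is not reproduced here; the sketch below shows a generic ECA-style channel attention layer (pooled channel descriptor, small 1-D convolution, sigmoid gating) as one plausible reading of that component:

```python
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                     # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]

feat = torch.randn(2, 256, 16, 8)                  # e.g. a ReID feature map
print(EfficientChannelAttention()(feat).shape)
```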
https://paperswithcode.com/paper/when-does-pruning-benefit-vision
2507.01722
null
null
When Does Pruning Benefit Vision Representations?
Pruning is widely used to reduce the complexity of deep learning models, but its effects on interpretability and representation learning remain poorly understood. This paper investigates how pruning influences vision models across three key dimensions: (i) interpretability, (ii) unsupervised object discovery, and (iii) alignment with human perception. We first analyze different vision network architectures to examine how varying sparsity levels affect feature attribution interpretability methods. Additionally, we explore whether pruning promotes more succinct and structured representations, potentially improving unsupervised object discovery by discarding redundant information while preserving essential features. Finally, we assess whether pruning enhances the alignment between model representations and human perception, investigating whether sparser models focus on more discriminative features similarly to humans. Our findings also reveal the presence of sweet spots, where sparse models exhibit higher interpretability, downstream generalization and human alignment. However, these spots highly depend on the network architectures and their size in terms of trainable parameters. Our results suggest a complex interplay between these three dimensions, highlighting the importance of investigating when and how pruning benefits vision representations.
Pruning is widely used to reduce the complexity of deep learning models, but its effects on interpretability and representation learning remain poorly understood.
https://arxiv.org/abs/2507.01722v1
https://arxiv.org/pdf/2507.01722v1.pdf
null
[ "Enrico Cassano", "Riccardo Renzulli", "Andrea Bragagnolo", "Marco Grangetto" ]
[ "Object Discovery", "Representation Learning" ]
2025-07-02T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Pruning", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Model Compression", "parent": null }, "name": "Pruning", "source_title": "Pruning Filters for Efficient ConvNets", "source_url": "http://arxiv.org/abs/1608.08710v3" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/automatic-room-light-controller-management
null
null
null
AUTOMATIC ROOM LIGHT CONTROLLER MANAGEMENT SYSTEM.
The AT89S51 is a low-power, high-performance CMOS 8-bit microcontroller with 4K bytes of In-System Programmable Flash memory. The device is manufactured using Atmel’s high-density non-volatile memory technology and is compatible with the industry-standard 80C51 instruction set and pin-out. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional non-volatile memory programmer. By combining a versatile 8-bit CPU with In-System Programmable Flash on a monolithic chip, the Atmel AT89S51 is a powerful microcontroller which provides a highly-flexible and cost-effective solution to many embedded control applications.
null
https://doi.org/10.5281/zenodo.15738085
https://doi.org/10.5281/zenodo.15738085
Zenodo 2025 6
[ "Kamal Acharya" ]
[ "4k", "CPU", "Management" ]
2025-06-25T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/voice-control-robot-using-arduino-management
null
null
null
VOICE CONTROL ROBOT USING ARDUINO MANAGEMENT SYSTEM PROJECT.
This robot is designed to control a vehicle using human voice commands sent over a Bluetooth module. The Voice Control Robot executes specific commands such as Forward, Backward, Stop, Left, Right, and rotation (dancing). It is based on speech recognition, and commands are given to the robot through an Android application. The Android application (AMR-Voice) connects to a Bluetooth module (HC-05), which is wired directly to an Arduino Uno R3. The robot performs actions according to the given command. The Voice Control Robot is useful in areas that humans cannot reach; it can operate in situations such as toxic, fire-affected, or polluted areas and on hills. It is also helpful for people who are physically handicapped. Because the robot is very small, it can be used for spying or surveillance, and the project can be extended to military, agricultural, and industrial applications.
null
https://doi.org/10.5281/zenodo.15738547
https://doi.org/10.5281/zenodo.15738547
Zenodo 2025 6
[ "Kamal Acharya" ]
[ "Management", "speech-recognition", "Speech Recognition" ]
2025-06-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/digital-watermarking-system-project-report
null
null
null
Digital watermarking system project report.
Data security is essential in today's world of internet and networking. In any organization, information is critical. People are ready to spend thousands, even lakhs, of money to ensure a high level of information security. In spite of spending such huge amounts, the objective of securing information is still not achieved, as the data somehow gets into the hands of hackers. As the technology for securing data advances, hackers keep pace with it, using algorithms and other techniques to decode the data encoded by senders. One way to ensure security is to make the data invisible to the hacker by hiding the message itself behind some other object. Here, this data security concept is achieved through the technique of steganography.
null
https://doi.org/10.5281/zenodo.15783157
https://doi.org/10.5281/zenodo.15783157
Zenodo 2025 6
[ "Kamal Acharya" ]
[]
2025-06-25T00:00:00
null
null
null
null
[]
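The watermarking report above motivates hiding a message inside another object via steganography. As a concrete, minimal illustration of that idea (not the project's implementation), the sketch below embeds message bits into the least-significant bit of each cover byte:

```python
def embed_lsb(cover: bytes, message: str) -> bytes:
    bits = [(b >> i) & 1 for b in message.encode("utf-8") for i in range(7, -1, -1)]
    stego = bytearray(cover)
    for i, bit in enumerate(bits):            # assumes len(bits) <= len(cover)
        stego[i] = (stego[i] & 0xFE) | bit    # overwrite only the least-significant bit
    return bytes(stego)

cover = bytes(range(256))                     # stand-in for image or audio data
print(embed_lsb(cover, "hi")[:16].hex())
```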
https://paperswithcode.com/paper/cakeshop-management-system-project-report
null
null
null
Cakeshop management system project report.
A cake shop management system is a computerized management system. This system maintains records of the organization's hardware assets as well as its software. The proposed system keeps track of the different types of cakes available. The main objective of the cake shop management system is to provide a solution for customers to manage their work using a computerized process. This software application helps handle customer information, product details, payment information, etc. A detailed explanation of the modules and design is provided in the documentation.
null
https://doi.org/10.5281/zenodo.15795036
https://doi.org/10.5281/zenodo.15795036
Zenodo 2025 7
[ "Kamal Acharya" ]
[ "Management" ]
2025-07-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/online-discussion-project-management-system
null
null
null
Online discussion project management system report.
The project titled “Online Discuss-forum” is designed using Active Server Pages .NET with Microsoft Visual Studio .NET 2008 as the front end and Microsoft SQL Server 2000 as the back end, running on .NET Framework version 3.5. The coding language used is C#. This project aims to develop an online forum for group discussion. It is a web-based tool: any user can post doubts or topics and reply to other users' doubts, and a user can invite others to a discussion and submit queries. This is useful for a small office, school, department, or any group interested in organizing discussions effectively. It also provides a facility to share resources and post articles that can be viewed by registered users.
null
https://doi.org/10.5281/zenodo.15795041
https://doi.org/10.5281/zenodo.15795041
Zenodo 2025 7
[ "Kamal Acharya" ]
[ "Articles", "Management" ]
2025-07-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/detection-of-cyber-attack-in-network-using
null
null
null
Detection of Cyber Attack in Network using Machine Learning Techniques.
Compared with the past, developments in computer and communication technologies have brought extensive and advanced changes. The use of new technologies provides great benefits to individuals, companies, and governments, but it also creates problems for them, such as the protection of important information, the security of stored data platforms, and the availability of data. Owing to these issues, cyber terrorism is one of the most important problems in today's world. Cyber terror, which has caused many problems for individuals and institutions, has reached a level that can threaten public and national security through various groups such as criminal organizations, professional individuals, and cyber activists. Therefore, Intrusion Detection Systems (IDS) have been developed to avoid cyber attacks. In this work, support vector machine (SVM) algorithms were used to detect port scan attempts based on the new CICIDS2017 dataset, and rates of 97.80% and 69.79% were achieved, respectively.
null
https://doi.org/10.5281/zenodo.15795049
https://doi.org/10.5281/zenodo.15795049
Zenodo 2025 7
[ "Kamal Acharya" ]
[ "Intrusion Detection" ]
2025-07-02T00:00:00
null
null
null
null
[]
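The intrusion detection report above trains SVM classifiers on CICIDS2017 flow features to recognize port scan attempts. A generic scikit-learn sketch of that kind of pipeline is shown below, with synthetic data standing in for the CICIDS2017 records:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for CICIDS2017 flow features; class 1 plays the role of port-scan traffic.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```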
https://paperswithcode.com/paper/face-mask-detection-project-report
null
null
null
Face mask detection project report.
Face recognition has become a popular and significant technology in recent years. In the real world, when a person is uncooperative with the system, such as in video surveillance, masked faces are a common scenario, and current face recognition performance degrades on such masks. Still, the difficulties created by masks are usually disregarded. Face recognition is a promising area of applied computer vision, used to recognize a face or identify a person automatically from given images. In daily-life activities such as passport checking, smart doors, access control, voter verification, and criminal investigation, face recognition is widely used to authenticate a person correctly and automatically. It has gained much attention as a unique, reliable biometric recognition technology, making it more popular than other biometric techniques such as passwords, PINs, or fingerprints. The primary concern of this work is facial masks, and especially enhancing the recognition accuracy of different masked faces. A feasible approach is proposed that consists of first detecting the facial regions. The occluded face detection problem is approached using a Cascaded Convolutional Neural Network (CNN). Its performance is also evaluated with excessive facial masks, with attractive outcomes. Finally, a comparative study is included for better understanding.
null
https://doi.org/10.5281/zenodo.15795053
https://doi.org/10.5281/zenodo.15795053
Zenodo 2025 7
[ "Kamal Acharya" ]
[ "Face Detection", "Face Recognition", "Occluded Face Detection" ]
2025-07-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/entrance-seat-allotment-system-project-report
null
null
null
Entrance seat allotment system project report.
This project, Entrance Seat Allotment System, is a Windows application in which students can register with their rank number for the entrance examination and the administrator can allot seats to the students. The administrator can add college details and batch details. Using this software, entrance seat allotment becomes easier and can be carried out through the system. The main advantage of the project is the computerization of the entrance seat allotment process. The administrator controls the allotment: allotted seats can be added to a file, and the details are saved in the system. The total time for entrance allotment is reduced and the allotment process becomes faster.
null
https://doi.org/10.5281/zenodo.15803273
https://doi.org/10.5281/zenodo.15803273
Zenodo 2025 7
[ "Kamal Acharya" ]
[]
2025-07-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/house-tax-billing-management-system-report
null
null
null
HOUSE TAX BILLING MANAGEMENT SYSTEM REPORT.
The main aim of this project is to implement an application which deals with maintaining house tax activities such as generating house tax bills, keeping customer personal records, and other administration activities. Initially, all the information about the residence and its address will be entered and maintained, which in turn helps to generate a house tax bill. This system will reduce the manual work of maintaining records in files and provides a new mechanism for maintaining house tax records effectively. Regular transactions, which include bill generation, payment, etc., and exceptional transactions, such as a change of the customer's address or bills not cleared within the due date, also have to be handled by the system.
null
https://doi.org/10.5281/zenodo.15803277
https://doi.org/10.5281/zenodo.15803277
Zenodo 2025 7
[ "Kamal Acharya" ]
[ "Management" ]
2025-07-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/design-and-development-of-an-smart
null
null
null
DESIGN AND DEVELOPMENT OF A SMART ELECTRICITY BILL PAYMENT MANAGEMENT SYSTEM.
The project is a web-based application where users can get their electricity bill instantly and pay it online via credit card. The system automates the conventional process of paying the electricity bill in person, where users have to stand in a queue and wait for their turn, which is tiresome and time consuming. They may also have to wait for the bill to be delivered to their place, which can sometimes arrive late. Hence, the system is developed to automate electricity bill calculation and payment for user convenience. The system has two logins, an admin login and a user login. The admin can view user account details, can add or update items in their accounts, and has to feed the system with each user's electricity usage data. The system then calculates the electricity bill for every user and updates the information in their account every month. Users can view their electricity bill, see how much power is consumed by individual home appliances, estimate the cost attributable to each appliance, and pay on the spot before the end of the month. If a user does not pay the bill before the end of the month, the system calculates a fine for each additional day.
null
https://doi.org/10.5281/zenodo.15803281
https://doi.org/10.5281/zenodo.15803281
Zenodo 2025 7
[ "Kamal Acharya" ]
[ "Management" ]
2025-07-02T00:00:00
null
null
null
null
[]
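As a rough illustration of the bill-and-fine calculation described in the record above, the function below computes a monthly charge from units consumed plus a per-day late fine. The tariff slabs and fine rate are hypothetical placeholders, not values from the report.

```python
# Hypothetical tariff slabs and fine rate -- placeholders, not values from the report.
def electricity_bill(units_consumed: float, days_late: int = 0) -> float:
    """Compute a monthly electricity bill with a simple slab tariff and a per-day late fine."""
    slabs = [(100, 3.0), (100, 5.0), (float("inf"), 7.0)]  # (units in slab, rate per unit)
    remaining, amount = units_consumed, 0.0
    for slab_units, rate in slabs:
        used = min(remaining, slab_units)
        amount += used * rate
        remaining -= used
        if remaining <= 0:
            break
    fine = 10.0 * days_late  # assumed flat fine per day past the due date
    return round(amount + fine, 2)

print(electricity_bill(250, days_late=3))  # 100*3 + 100*5 + 50*7 + 3*10 = 1180.0
```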
https://paperswithcode.com/paper/indianbailjudgments-1200-a-multi-attribute
2507.02506
null
null
IndianBailJudgments-1200: A Multi-Attribute Dataset for Legal NLP on Indian Bail Orders
Legal NLP remains underdeveloped in regions like India due to the scarcity of structured datasets. We introduce IndianBailJudgments-1200, a new benchmark dataset comprising 1200 Indian court judgments on bail decisions, annotated across 20+ attributes including bail outcome, IPC sections, crime type, and legal reasoning. Annotations were generated using a prompt-engineered GPT-4o pipeline and verified for consistency. This resource supports a wide range of legal NLP tasks such as outcome prediction, summarization, and fairness analysis, and is the first publicly available dataset focused specifically on Indian bail jurisprudence.
null
https://arxiv.org/abs/2507.02506v1
https://arxiv.org/pdf/2507.02506v1.pdf
null
[ "Sneha Deshmukh", "Prathmesh Kamble" ]
[ "Attribute", "Fairness", "Jurisprudence", "Legal Reasoning" ]
2025-07-03T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/adaptive-action-duration-with-contextual
2507.00030
null
null
Adaptive Action Duration with Contextual Bandits for Deep Reinforcement Learning in Dynamic Environments
Deep Reinforcement Learning (DRL) has achieved remarkable success in complex sequential decision-making tasks, such as playing Atari 2600 games and mastering board games. A critical yet underexplored aspect of DRL is the temporal scale of action execution. We propose a novel paradigm that integrates contextual bandits with DRL to adaptively select action durations, enhancing policy flexibility and computational efficiency. Our approach augments a Deep Q-Network (DQN) with a contextual bandit module that learns to choose optimal action repetition rates based on state contexts. Experiments on Atari 2600 games demonstrate significant performance improvements over static duration baselines, highlighting the efficacy of adaptive temporal abstractions in DRL. This paradigm offers a scalable solution for real-time applications like gaming and robotics, where dynamic action durations are critical.
Deep Reinforcement Learning (DRL) has achieved remarkable success in complex sequential decision-making tasks, such as playing Atari 2600 games and mastering board games.
https://arxiv.org/abs/2507.00030v1
https://arxiv.org/pdf/2507.00030v1.pdf
null
[ "Abhishek Verma", "Nallarasan V", "Balaraman Ravindran" ]
[ "Atari Games", "Board Games", "Computational Efficiency", "Decision Making", "Deep Learning", "Deep Reinforcement Learning", "Multi-Armed Bandits", "Reinforcement Learning", "Reinforcement Learning (Atari Games)", "Sequential Decision Making" ]
2025-06-17T00:00:00
null
null
null
null
[]
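A minimal sketch of the adaptive action-duration idea from the record above: an epsilon-greedy linear contextual bandit (a simple stand-in for the module described in the abstract) picks an action-repeat rate from a small set, and the agent's chosen action is held for that many steps. The environment stub, feature size, repeat set, and update rule are assumptions for illustration, not the paper's implementation.

```python
# Sketch: adaptive action repetition via a contextual bandit on top of a DQN-style agent.
import numpy as np

rng = np.random.default_rng(0)
REPEATS = [1, 2, 4]          # candidate action-repeat rates (assumed)
STATE_DIM = 8

class RepeatBandit:
    """Epsilon-greedy linear bandit: one value estimate per repeat rate, conditioned on state."""
    def __init__(self, dim, n_arms, lr=0.05, eps=0.1):
        self.W = np.zeros((n_arms, dim))
        self.lr, self.eps = lr, eps

    def select(self, state):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.W)))
        return int(np.argmax(self.W @ state))

    def update(self, state, arm, reward):
        # Move the chosen arm's value estimate toward the observed return.
        error = reward - self.W[arm] @ state
        self.W[arm] += self.lr * error * state

def dqn_action(state):
    # Placeholder for the DQN policy; a real agent would return argmax_a Q(state, a).
    return int(rng.integers(4))

def env_step(state, action):
    # Toy environment stub standing in for an Atari emulator; returns (next_state, reward, done).
    return rng.normal(size=STATE_DIM), float(rng.normal()), False

bandit = RepeatBandit(STATE_DIM, len(REPEATS))
state = rng.normal(size=STATE_DIM)
for _ in range(1000):
    s0 = state                                  # context at decision time
    arm = bandit.select(s0)
    action, total_reward = dqn_action(s0), 0.0
    for _ in range(REPEATS[arm]):               # hold the same action for the chosen duration
        state, r, done = env_step(state, action)
        total_reward += r
        if done:
            break
    bandit.update(s0, arm, total_reward)
```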
https://paperswithcode.com/paper/ann-based-grid-impedance-estimation-for
2506.23304
null
null
ANN-Based Grid Impedance Estimation for Adaptive Gain Scheduling in VSG Under Dynamic Grid Conditions
In contrast to grid-following inverters, Virtual Synchronous Generators (VSGs) perform well under weak grid conditions but may become unstable when the grid is strong. Grid strength depends on grid impedance, which unfortunately varies over time. In this paper, we propose a novel adaptive gain-scheduling control scheme for VSGs. First, an Artificial Neural Network (ANN) estimates the fundamental-frequency grid impedance; then these estimates are fed into an adaptive gain-scheduling function to recalculate controller parameters under varying grid conditions. The proposed method is validated in Simulink and compared with a conventional VSG employing fixed controller gains. The results demonstrate that settling times and overshoot percentages remain consistent across different grid conditions. Additionally, previously unseen grid impedance values are estimated with high accuracy and minimal time delay, making the approach well suited for real-time gain-scheduling control.
In contrast to grid-following inverters, Virtual Synchronous Generators (VSGs) perform well under weak grid conditions but may become unstable when the grid is strong.
https://arxiv.org/abs/2506.23304v1
https://arxiv.org/pdf/2506.23304v1.pdf
null
[ "Quang-Manh Hoang", "Van Nam Nguyen", "Taehyung Kim", "Guilherme Vieira Hollweg", "Wencong Su", "Van-Hai Bui" ]
[ "Scheduling" ]
2025-06-29T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/can-consciousness-be-observed-from-large
2506.22516
null
null
Can "consciousness" be observed from large language model (LLM) internal states? Dissecting LLM representations obtained from Theory of Mind test with Integrated Information Theory and Span Representation analysis
Integrated Information Theory (IIT) provides a quantitative framework for explaining consciousness phenomenon, positing that conscious systems comprise elements integrated through causal properties. We apply IIT 3.0 and 4.0 -- the latest iterations of this framework -- to sequences of Large Language Model (LLM) representations, analyzing data derived from existing Theory of Mind (ToM) test results. Our study systematically investigates whether the differences of ToM test performances, when presented in the LLM representations, can be revealed by IIT estimates, i.e., $\Phi^{\max}$ (IIT 3.0), $\Phi$ (IIT 4.0), Conceptual Information (IIT 3.0), and $\Phi$-structure (IIT 4.0). Furthermore, we compare these metrics with the Span Representations independent of any estimate for consciousness. This additional effort aims to differentiate between potential "consciousness" phenomena and inherent separations within LLM representational space. We conduct comprehensive experiments examining variations across LLM transformer layers and linguistic spans from stimuli. Our results suggest that sequences of contemporary Transformer-based LLM representations lack statistically significant indicators of observed "consciousness" phenomena but exhibit intriguing patterns under $\textit{spatio}$-permutational analyses. The Appendix and code are available as Supplementary Materials at: https://doi.org/10.1016/j.nlp.2025.100163.
null
https://arxiv.org/abs/2506.22516v1
https://arxiv.org/pdf/2506.22516v1.pdf
null
[ "Jingkai Li" ]
[ "Explainable Artificial Intelligence (XAI)", "Interpretable Machine Learning", "Language Modeling", "Language Modelling", "Large Language Model" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/no-time-to-train-training-free-reference
2507.02798
null
null
No time to train! Training-Free Reference-Based Instance Segmentation
The performance of image segmentation models has historically been constrained by the high cost of collecting large-scale annotated data. The Segment Anything Model (SAM) alleviates this original problem through a promptable, semantics-agnostic, segmentation paradigm and yet still requires manual visual-prompts or complex domain-dependent prompt-generation rules to process a new image. Towards reducing this new burden, our work investigates the task of object segmentation when provided with, alternatively, only a small set of reference images. Our key insight is to leverage strong semantic priors, as learned by foundation models, to identify corresponding regions between a reference and a target image. We find that correspondences enable automatic generation of instance-level segmentation masks for downstream tasks and instantiate our ideas via a multi-stage, training-free method incorporating (1) memory bank construction; (2) representation aggregation and (3) semantic-aware feature matching. Our experiments show significant improvements on segmentation metrics, leading to state-of-the-art performance on COCO FSOD (36.8% nAP), PASCAL VOC Few-Shot (71.2% nAP50) and outperforming existing training-free approaches on the Cross-Domain FSOD benchmark (22.4% nAP).
The performance of image segmentation models has historically been constrained by the high cost of collecting large-scale annotated data.
https://arxiv.org/abs/2507.02798v1
https://arxiv.org/pdf/2507.02798v1.pdf
null
[ "Miguel Espinosa", "Chenhongyi Yang", "Linus Ericsson", "Steven McDonagh", "Elliot J. Crowley" ]
[ "Cross-Domain Few-Shot Object Detection", "Few-Shot Object Detection", "Image Segmentation", "Instance Segmentation", "Segmentation", "Semantic Segmentation" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/seeding-neural-network-quantum-states-with
2506.23550
null
null
Seeding neural network quantum states with tensor network states
We find an efficient approach to approximately convert matrix product states (MPSs) into restricted Boltzmann machine wave functions consisting of a multinomial hidden unit through a canonical polyadic (CP) decomposition of the MPSs. This method allows us to generate well-behaved initial neural network quantum states for quantum many-body ground-state calculations in polynomial time of the number of variational parameters and systematically shorten the distance between the initial states and the ground states with increasing the rank of the CP decomposition. We demonstrate the efficiency of our method by taking the transverse-field Ising model as an example and discuss possible applications of our method to more general quantum many-body systems in which the ground-state wave functions possess complex nodal structures.
We find an efficient approach to approximately convert matrix product states (MPSs) into restricted Boltzmann machine wave functions consisting of a multinomial hidden unit through a canonical polyadic (CP) decomposition of the MPSs.
https://arxiv.org/abs/2506.23550v1
https://arxiv.org/pdf/2506.23550v1.pdf
null
[ "Ryui Kaneko", "Shimpei Goto" ]
[]
2025-06-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/sentence-fusion-for-multidocument-news
null
null
null
Sentence Fusion for Multidocument News Summarization
A system that can produce informative summaries, highlighting common information found in many online documents, will help Web users to pinpoint information that they need without extensive reading. In this article, we introduce sentence fusion, a novel text-to-text generation technique for synthesizing common information across documents. Sentence fusion involves bottom-up local multisequence alignment to identify phrases conveying similar information and statistical generation to combine common phrases into a sentence. Sentence fusion moves the summarization field from the use of purely extractive methods to the generation of abstracts that contain sentences not found in any of the input documents and can synthesize information across sources.
null
https://aclanthology.org/J05-3002.pdf
https://aclanthology.org/J05-3002.pdf
Computational Linguistics 2005 1
[ "Regina Barzilay", "Kathleen R. McKeown" ]
[ "News Summarization", "Sentence", "Sentence Fusion", "Text Generation", "Text Summarization" ]
2005-01-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/lost-in-latent-space-an-empirical-study-of
2507.02608
null
null
Lost in Latent Space: An Empirical Study of Latent Diffusion Models for Physics Emulation
The steep computational cost of diffusion models at inference hinders their use as fast physics emulators. In the context of image and video generation, this computational drawback has been addressed by generating in the latent space of an autoencoder instead of the pixel space. In this work, we investigate whether a similar strategy can be effectively applied to the emulation of dynamical systems and at what cost. We find that the accuracy of latent-space emulation is surprisingly robust to a wide range of compression rates (up to 1000x). We also show that diffusion-based emulators are consistently more accurate than non-generative counterparts and compensate for uncertainty in their predictions with greater diversity. Finally, we cover practical design choices, spanning from architectures to optimizers, that we found critical to train latent-space emulators.
The steep computational cost of diffusion models at inference hinders their use as fast physics emulators.
https://arxiv.org/abs/2507.02608v1
https://arxiv.org/pdf/2507.02608v1.pdf
null
[ "François Rozet", "Ruben Ohana", "Michael McCabe", "Gilles Louppe", "François Lanusse", "Shirley Ho" ]
[ "Diversity", "Video Generation" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/a-diagrammatic-calculus-for-a-functional
2507.00782
null
null
A Diagrammatic Calculus for a Functional Model of Natural Language Semantics
In this paper, we study a functional programming approach to natural language semantics, allowing us to increase the expressivity of a more traditional denotation style. We will formalize a category based type and effect system, and construct a diagrammatic calculus to model parsing and handling of effects, and use it to efficiently compute the denotations for sentences.
null
https://arxiv.org/abs/2507.00782v1
https://arxiv.org/pdf/2507.00782v1.pdf
null
[ "Matthieu Pierre Boyer" ]
[]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/amped-adaptive-multi-objective-projection-for
2506.05980
null
null
AMPED: Adaptive Multi-objective Projection for balancing Exploration and skill Diversification
Skill-based reinforcement learning (SBRL) enables rapid adaptation in environments with sparse rewards by pretraining a skill-conditioned policy. Effective skill learning requires jointly maximizing both exploration and skill diversity. However, existing methods often face challenges in simultaneously optimizing for these two conflicting objectives. In this work, we propose a new method, Adaptive Multi-objective Projection for balancing Exploration and skill Diversification (AMPED), which explicitly addresses both exploration and skill diversification. We begin by conducting extensive ablation studies to identify and define a set of objectives that effectively capture the aspects of exploration and skill diversity, respectively. During the skill pretraining phase, AMPED introduces a gradient surgery technique to balance the objectives of exploration and skill diversity, mitigating conflicts and reducing reliance on heuristic tuning. In the subsequent fine-tuning phase, AMPED incorporates a skill selector module that dynamically selects suitable skills for downstream tasks, based on task-specific performance signals. Our approach achieves performance that surpasses SBRL baselines across various benchmarks. These results highlight the importance of explicitly harmonizing exploration and diversity and demonstrate the effectiveness of AMPED in enabling robust and generalizable skill learning. Project Page: https://geonwoo.me/amped/
These results highlight the importance of explicitly harmonizing exploration and diversity and demonstrate the effectiveness of AMPED in enabling robust and generalizable skill learning.
https://arxiv.org/abs/2506.05980v1
https://arxiv.org/pdf/2506.05980v1.pdf
null
[ "Geonwoo Cho", "Jaemoon Lee", "Jaegyun Im", "Subi Lee", "JIhwan Lee", "Sundong Kim" ]
[ "Diversity" ]
2025-06-06T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/graph-style-transfer-for-counterfactual
2505.17542
null
null
Graph Style Transfer for Counterfactual Explainability
Counterfactual explainability seeks to uncover model decisions by identifying minimal changes to the input that alter the predicted outcome. This task becomes particularly challenging for graph data due to preserving structural integrity and semantic meaning. Unlike prior approaches that rely on forward perturbation mechanisms, we introduce Graph Inverse Style Transfer (GIST), the first framework to re-imagine graph counterfactual generation as a backtracking process, leveraging spectral style transfer. By aligning the global structure with the original input spectrum and preserving local content faithfulness, GIST produces valid counterfactuals as interpolations between the input style and counterfactual content. Tested on 8 binary and multi-class graph classification benchmarks, GIST achieves a remarkable +7.6% improvement in the validity of produced counterfactuals and significant gains (+45.5%) in faithfully explaining the true class distribution. Additionally, GIST's backtracking mechanism effectively mitigates overshooting the underlying predictor's decision boundary, minimizing the spectral differences between the input and the counterfactuals. These results challenge traditional forward perturbation methods, offering a novel perspective that advances graph explainability.
Counterfactual explainability seeks to uncover model decisions by identifying minimal changes to the input that alter the predicted outcome.
https://arxiv.org/abs/2505.17542v1
https://arxiv.org/pdf/2505.17542v1.pdf
null
[ "Bardh Prenkaj", "Efstratios Zaradoukas", "Gjergji Kasneci" ]
[ "counterfactual", "Counterfactual Explanation", "Graph Classification", "Style Transfer", "valid" ]
2025-05-23T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Counterfactuals Explanations", "introduced_year": 2000, "main_collection": { "area": "Reinforcement Learning", "description": "", "name": "Exploration Strategies", "parent": null }, "name": "Counterfactuals", "source_title": "Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR", "source_url": "http://arxiv.org/abs/1711.00399v3" } ]
https://paperswithcode.com/paper/coordinated-pso-pid-based-longitudinal
null
null
null
Coordinated PSO-PID based longitudinal control with LPV-MPC based lateral control for autonomous vehicles
Autonomous driving is achieved by controlling the coupled nonlinear longitudinal and lateral vehicle dynamics. Longitudinal control greatly affects lateral dynamics and must preserve lateral stability conditions, while lateral controllers must take into account actuator limits and ride comfort. This work deals with the coordinated longitudinal and lateral control for autonomous driving. An improved particle swarm optimized PID (PSO-PID) is proposed to handle the task of speed tracking based on nonlinear longitudinal dynamics. An enhanced linear parameter varying model predictive controller (LPV-MPC) is also designed to control lateral dynamics, the latter is formulated with an adaptive LPV model in which the tire cornering stiffness coefficients are estimated by a recursive estimator. The proposed LPV-MPC is enhanced with an improved cost function to provide better performance and stability. Matlab/Carsim co-simulations are carried out to validate the proposed controllers.
Autonomous driving is achieved by controlling the coupled nonlinear longitudinal and lateral vehicle dynamics.
https://ieeexplore.ieee.org/abstract/document/9838192/authors#authors
https://univ-evry.hal.science/hal-03749480/document
2022 European Control Conference (ECC) 2022 8
[ "Yassine Kebbati", "Naima Ait-Oufroukh", "Vincent Vigneron", "Dalil Ichalal" ]
[ "Autonomous Driving", "Autonomous Vehicles" ]
2022-08-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fedref-communication-efficient-bayesian-fine
2506.23210
null
null
FedRef: Communication-Efficient Bayesian Fine Tuning with Reference Model
Federated learning (FL) is used in distributed scenarios to train artificial intelligence (AI) models while ensuring users' privacy. In a federated learning scenario, the server generally never sees users' data, which makes the AI training process efficient in terms of data privacy. However, regarding model performance, federated AI models may not sufficiently satisfy AI users' expectations. Furthermore, AI users have a wide range of different needs, and it is not easy to satisfy them all. These issues can be addressed through AI model optimization, fine-tuning, or personalization to achieve optimal model performance. To address model optimization challenges, we propose reference model-based federated learning for optimal fine-tuning, which overcomes catastrophic forgetting in each round. This method is derived from Bayesian parameter-efficient transfer learning; it includes an optimal proximal term and overcomes the catastrophic forgetting issue in each round by utilizing a reference model that incorporates previous model parameters. As a result, this method achieves both high model performance and low computing cost.
However, regarding model performance, federated AI models may not sufficiently satisfy AI users' expectations.
https://arxiv.org/abs/2506.23210v1
https://arxiv.org/pdf/2506.23210v1.pdf
null
[ "Taehwan Yoon", "Bongjun Choi" ]
[ "Brain Tumor Segmentation", "Federated Learning", "Model Optimization", "Transfer Learning" ]
2025-06-29T00:00:00
null
null
null
null
[]
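The reference-model proximal idea in the record above can be sketched as a local training objective that penalizes drift from a fixed reference parameter vector. The model, data loader, and penalty weight mu are placeholders, and this is a generic proximal-term sketch rather than the authors' exact Bayesian formulation.

```python
# Sketch: local fine-tuning with a reference-model proximal penalty (FedRef-style idea).
import torch
import torch.nn as nn

def local_train(model: nn.Module, ref_params, loader, mu=0.01, lr=1e-3, epochs=1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            # Proximal term: keep the local model close to the reference model's parameters.
            prox = sum(((p - r) ** 2).sum() for p, r in zip(model.parameters(), ref_params))
            (loss + 0.5 * mu * prox).backward()
            opt.step()
    return model

# Usage sketch: ref_params would be detached copies of the previous-round model's parameters,
# e.g. ref_params = [p.detach().clone() for p in previous_model.parameters()]
```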
https://paperswithcode.com/paper/structural-feature-enhanced-transformer-for
null
null
null
Structural feature enhanced transformer for fine-grained image recognition
Existing fine-grained image recognition (FGIR) models mainly rely on high-level semantic features to extract discriminative information, ignoring the potential role of the overall structural information of objects and the structural relationships between key parts. To address this issue, we propose the Structural Feature Enhancement Transformer (SFETrans). SFETrans consists of a visual transformer backbone network responsible for extracting complex semantic features. Additionally, it includes a structural modeling (SM) branch and an amplitude component exchange (ACE) module, both dedicated to enhancing the learning of structural features. The SM branch actively models the structural relationships between key parts of objects and extracts corresponding structural features, while the ACE module guides the model to learn structural information in the phase spectrum by introducing implicit constraints during training. By synergizing the backbone network and the two modules, SFETrans exhibits competitive performance on four benchmark datasets and outperforms other comparison methods in terms of computational efficiency.
null
https://www.sciencedirect.com/science/article/abs/pii/S0031320325006156?dgcid=rss_sd_all
https://www.sciencedirect.com/science/article/abs/pii/S0031320325006156?dgcid=rss_sd_all
Pattern Recognition 2025 6
[ "Ying Yu", "Wei Wei", "Cairong Zhao", "Jin Qian", "Enhong Chen" ]
[ "Computational Efficiency", "Fine-Grained Image Classification", "Fine-Grained Image Recognition" ]
2025-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" } ]
https://paperswithcode.com/paper/seg-r1-segmentation-can-be-surprisingly
2506.22624
null
null
Seg-R1: Segmentation Can Be Surprisingly Simple with Reinforcement Learning
We present Seg-R1, a preliminary exploration of using reinforcement learning (RL) to enhance the pixel-level understanding and reasoning capabilities of large multimodal models (LMMs). Starting with foreground segmentation tasks, specifically camouflaged object detection (COD) and salient object detection (SOD), our approach enables the LMM to generate point and bounding box prompts in the next-token fashion, which are then used to guide SAM2 in producing segmentation masks. We introduce Group Relative Policy Optimization (GRPO) into the segmentation domain, equipping the LMM with pixel-level comprehension through a carefully designed training strategy. Notably, Seg-R1 achieves remarkable performance with purely RL-based training, achieving .873 S-measure on COD10K without complex model modification. Moreover, we found that pure RL training demonstrates strong open-world generalization. Despite being trained solely on foreground segmentation image-mask pairs without text supervision, Seg-R1 achieves impressive zero-shot performance on referring segmentation and reasoning segmentation tasks, with 71.4 cIoU on RefCOCOg test and 56.7 gIoU on ReasonSeg test, outperforming models fully supervised on these datasets.
We present Seg-R1, a preliminary exploration of using reinforcement learning (RL) to enhance the pixel-level understanding and reasoning capabilities of large multimodal models (LMMs).
https://arxiv.org/abs/2506.22624v1
https://arxiv.org/pdf/2506.22624v1.pdf
null
[ "Zuyao You", "Zuxuan Wu" ]
[ "Foreground Segmentation", "object-detection", "Object Detection", "Reasoning Segmentation", "Reinforcement Learning (RL)", "Salient Object Detection", "Segmentation" ]
2025-06-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/msgcn-multiplex-spatial-graph-convolution
2504.17749
null
null
MSGCN: Multiplex Spatial Graph Convolution Network for Interlayer Link Weight Prediction
Graph Neural Networks (GNNs) have been widely used for various learning tasks, ranging from node classification to link prediction. They have demonstrated excellent performance in multiple domains involving graph-structured data. However, an important category of learning tasks, namely link weight prediction, has received less emphasis due to its increased complexity compared to binary link classification. Link weight prediction becomes even more challenging when considering multilayer networks, where nodes can be interconnected across multiple layers. To address these challenges, we propose a new method named Multiplex Spatial Graph Convolution Network (MSGCN), which spatially embeds information across multiple layers to predict interlayer link weights. The MSGCN model generalizes spatial graph convolution to multiplex networks and captures the geometric structure of nodes across multiple layers. Extensive experiments using data with known interlayer link information show that the MSGCN model has robust, accurate, and generalizable link weight prediction performance across a wide variety of multiplex network structures.
Graph Neural Networks (GNNs) have been widely used for various learning tasks, ranging from node classification to link prediction.
https://arxiv.org/abs/2504.17749v1
https://arxiv.org/pdf/2504.17749v1.pdf
null
[ "Steven E. Wilson", "Sina Khanmohammadi" ]
[ "Link Prediction", "Node Classification", "Prediction" ]
2025-04-24T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/energy-based-transformers-are-scalable
2507.02092
null
null
Energy-Based Transformers are Scalable Learners and Thinkers
Inference-time computation techniques, analogous to human System 2 Thinking, have recently become popular for improving model performances. However, most existing approaches suffer from several limitations: they are modality-specific (e.g., working only in text), problem-specific (e.g., verifiable domains like math and coding), or require additional supervision/training on top of unsupervised pretraining (e.g., verifiers or verifiable rewards). In this paper, we ask the question "Is it possible to generalize these System 2 Thinking approaches, and develop models that learn to think solely from unsupervised learning?" Interestingly, we find the answer is yes, by learning to explicitly verify the compatibility between inputs and candidate-predictions, and then re-framing prediction problems as optimization with respect to this verifier. Specifically, we train Energy-Based Transformers (EBTs) -- a new class of Energy-Based Models (EBMs) -- to assign an energy value to every input and candidate-prediction pair, enabling predictions through gradient descent-based energy minimization until convergence. Across both discrete (text) and continuous (visual) modalities, we find EBTs scale faster than the dominant Transformer++ approach during training, achieving an up to 35% higher scaling rate with respect to data, batch size, parameters, FLOPs, and depth. During inference, EBTs improve performance with System 2 Thinking by 29% more than the Transformer++ on language tasks, and EBTs outperform Diffusion Transformers on image denoising while using fewer forward passes. Further, we find that EBTs achieve better results than existing models on most downstream tasks given the same or worse pretraining performance, suggesting that EBTs generalize better than existing approaches. Consequently, EBTs are a promising new paradigm for scaling both the learning and thinking capabilities of models.
Further, we find that EBTs achieve better results than existing models on most downstream tasks given the same or worse pretraining performance, suggesting that EBTs generalize better than existing approaches.
https://arxiv.org/abs/2507.02092v1
https://arxiv.org/pdf/2507.02092v1.pdf
null
[ "Alexi Gladstone", "Ganesh Nanduru", "Md Mofijul Islam", "Peixuan Han", "Hyeonjeong Ha", "Aman Chadha", "Yilun Du", "Heng Ji", "Jundong Li", "Tariq Iqbal" ]
[ "Denoising", "Image Denoising", "Math" ]
2025-07-02T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/hindsight-guided-momentum-hgm-optimizer-an
2506.22479
null
null
Hindsight-Guided Momentum (HGM) Optimizer: An Approach to Adaptive Learning Rate
We introduce Hindsight-Guided Momentum (HGM), a first-order optimization algorithm that adaptively scales learning rates based on the directional consistency of recent updates. Traditional adaptive methods, such as Adam or RMSprop, adapt learning dynamics using only the magnitude of gradients, often overlooking important geometric cues. Geometric cues refer to directional information, such as the alignment between current gradients and past updates, which reflects the local curvature and consistency of the optimization path. HGM addresses this by incorporating a hindsight mechanism that evaluates the cosine similarity between the current gradient and accumulated momentum. This allows it to distinguish between coherent and conflicting gradient directions, increasing the learning rate when updates align and reducing it in regions of oscillation or noise. The result is a more responsive optimizer that accelerates convergence in smooth regions of the loss surface while maintaining stability in sharper or more erratic areas. Despite this added adaptability, the method preserves the computational and memory efficiency of existing optimizers. By more intelligently responding to the structure of the optimization landscape, HGM provides a simple yet effective improvement over existing approaches, particularly in non-convex settings like that of deep neural network training.
null
https://arxiv.org/abs/2506.22479v1
https://arxiv.org/pdf/2506.22479v1.pdf
null
[ "Krisanu Sarkar" ]
[]
2025-06-22T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/fd8e2064e094f301d910b91a757b860aae3e3116/torch/optim/rmsprop.py#L69-L108", "description": "**RMSProp** is an unpublished adaptive learning rate optimizer [proposed by Geoff Hinton](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). The motivation is that the magnitude of gradients can differ for different weights, and can change during learning, making it hard to choose a single global learning rate. RMSProp tackles this by keeping a moving average of the squared gradient and adjusting the weight updates by this magnitude. The gradient updates are performed as:\r\n\r\n$$E\\left[g^{2}\\right]\\_{t} = \\gamma E\\left[g^{2}\\right]\\_{t-1} + \\left(1 - \\gamma\\right) g^{2}\\_{t}$$\r\n\r\n$$\\theta\\_{t+1} = \\theta\\_{t} - \\frac{\\eta}{\\sqrt{E\\left[g^{2}\\right]\\_{t} + \\epsilon}}g\\_{t}$$\r\n\r\nHinton suggests $\\gamma=0.9$, with a good default for $\\eta$ as $0.001$.\r\n\r\nImage: [Alec Radford](https://twitter.com/alecrad)", "full_name": "RMSProp", "introduced_year": 2013, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "RMSProp", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
https://paperswithcode.com/paper/fr-capsnet-enhancing-low-resolution-image
null
null
null
FR-CapsNet: Enhancing Low-Resolution Image Classification via Frequency Routed Capsules
Very low-resolution (VLR) images present a significant challenge for deep classification networks due to their inherent lack of fine spatial detail. While capsule networks (CapsNets), which encode spatial and pose information, are robust to resolution changes, they often struggle to perform and scale effectively on complex datasets. In this study, we introduce a novel frequency routing-based CapsNet (FR-CapsNet) to replace the conventional spatial routing in CapsNets. Though VLR images lose fine grained features, they retain high-level features captured by the low-frequency components. By computing capsule activation and pose information in the frequency domain and subsequently encoding them in the spatial domain, FR-CapsNet improves robustness to resolution degradation. Furthermore, our method utilizes a global routing framework that considerably reduces computational demands, enabling FR-CapsNet to scale effectively to larger and more diverse datasets. FR-CapsNet outperforms state-of-the-art (SOTA) convolutional neural networks (CNNs), other CapsNets, Transformers, and other advanced architectures in real-world VLR digit and image classification tasks. Specifically, on the VLR CIFAR-10 dataset, FR-CapsNet surpasses the current benchmark by 4.77% while using 4 times fewer parameters. Similarly, on the VLR SVHN and CIFAR-100 datasets, it exceeds the benchmark by 0.27% and 1.55%, respectively. Extensive experiments further demonstrate the superior generalization and robustness of FR-CapsNet compared to other SOTA methods. The codes for our models are available at https://github.com/kdhasi/FR-CapsNet.git
While capsule networks (CapsNets), which encode spatial and pose information, are robust to resolution changes, they often struggle to perform and scale effectively on complex datasets.
https://doi.org/10.1109/ACCESS.2025.3583688
https://doi.org/10.1109/ACCESS.2025.3583688
IEEE Access 2025 6
[ "Hasindu Dewasurendra", "Kunmin Yeo", "Nhan Thi Cao", "Taejoon Kim" ]
[ "image-classification", "Image Classification" ]
2025-06-25T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**Capsule Network** is a machine learning system that is a type of artificial neural network that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.", "full_name": "Capsule Network", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "CapsNet", "source_title": "Dynamic Routing Between Capsules", "source_url": "http://arxiv.org/abs/1710.09829v2" } ]
https://paperswithcode.com/paper/understanding-and-improving-length
2507.02782
null
null
Understanding and Improving Length Generalization in Recurrent Models
Recently, recurrent models such as state space models and linear attention have become popular due to their linear complexity in the sequence length. Thanks to their recurrent nature, in principle they can process arbitrarily long sequences, but their performance sometimes drops considerably beyond their training context lengths-i.e. they fail to length generalize. In this work, we provide comprehensive empirical and theoretical analysis to support the unexplored states hypothesis, which posits that models fail to length generalize when during training they are only exposed to a limited subset of the distribution of all attainable states (i.e. states that would be attained if the recurrence was applied to long sequences). Furthermore, we investigate simple training interventions that aim to increase the coverage of the states that the model is trained on, e.g. by initializing the state with Gaussian noise or with the final state of a different input sequence. With only 500 post-training steps ($\sim 0.1\%$ of the pre-training budget), these interventions enable length generalization for sequences that are orders of magnitude longer than the training context (e.g. $2k\longrightarrow 128k$) and show improved performance in long context tasks, thus presenting a simple and efficient way to enable robust length generalization in general recurrent models.
null
https://arxiv.org/abs/2507.02782v1
https://arxiv.org/pdf/2507.02782v1.pdf
null
[ "Ricardo Buitrago Ruiz", "Albert Gu" ]
[ "2k", "State Space Models" ]
2025-07-03T00:00:00
null
null
null
null
[]
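The training interventions in the record above amount to changing how the recurrent state is initialized before a training sequence is processed. The sketch below shows the two variants mentioned in the abstract (Gaussian-noise initialization, or carrying over the final state of a different sequence) for a generic recurrent cell; the GRU cell, shapes, and noise scale are assumptions for illustration.

```python
# Sketch: state-initialization interventions for length generalization in recurrent models.
import torch
import torch.nn as nn

def init_state(batch, hidden, mode="zeros", carried_state=None, noise_scale=1.0):
    if mode == "noise":                        # expose training to off-distribution states
        return noise_scale * torch.randn(batch, hidden)
    if mode == "carry" and carried_state is not None:
        return carried_state.detach()          # final state of a previous (different) sequence
    return torch.zeros(batch, hidden)

cell = nn.GRUCell(input_size=8, hidden_size=32)
x = torch.randn(50, 4, 8)                      # (time, batch, features)

h = init_state(batch=4, hidden=32, mode="noise")
for t in range(x.size(0)):
    h = cell(x[t], h)
carried = h                                    # reuse as the initial state for the next sequence
h_next = init_state(batch=4, hidden=32, mode="carry", carried_state=carried)
```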
https://paperswithcode.com/paper/pseudo-siamese-blind-spot-transformers-for
null
null
null
Pseudo-Siamese Blind-Spot Transformers for Self-Supervised Real-World Denoising
Real-world image denoising remains a challenging task. This paper studies self-supervised image denoising, requiring only noisy images captured in a single shot. We revamp the blind-spot technique by leveraging the transformer's capability for long-range pixel interactions, which is crucial for effectively removing noise dependence among related pixels, a requirement for achieving strong performance with the blind-spot technique. The proposed method integrates these elements with two key innovations: a directional self-attention (DSA) module using a half-plane grid for self-attention, creating a sophisticated blind-spot structure, and a Siamese architecture with mutual learning to mitigate the performance impacts from the restricted attention grid in DSA. Experiments on benchmark datasets demonstrate that our method outperforms existing self-supervised and clean-image-free methods. This combination of blind-spot and transformer techniques provides a natural synergy for tackling real-world image denoising challenges.
Real-world image denoising remains a challenging task.
https://dl.acm.org/doi/10.5555/3737916.3738358
https://proceedings.neurips.cc/paper_files/paper/2024/file/19305d2dbcc81c44d4a0120e7569856e-Paper-Conference.pdf
The Annual Conference on Neural Information Processing Systems 2025 6
[ "Yuhui Quan; Tianxiang Zheng; Hui Ji" ]
[ "Denoising", "Image Denoising" ]
2025-06-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/toward-cyclic-a-i-modelling-of-self-regulated
2507.02913
null
null
Toward Cyclic A.I. Modelling of Self-Regulated Learning: A Case Study with E-Learning Trace Data
Many e-learning platforms assert their ability or potential to improve students' self-regulated learning (SRL); however, the cyclical and undirected nature of SRL theoretical models represents significant challenges for representation within contemporary machine learning frameworks. We apply SRL-informed features to trace data in order to advance modelling of students' SRL activities and to improve predictability and explainability regarding the causal effects of learning in an e-learning environment. We demonstrate that these features improve predictive accuracy and validate the value of further research into cyclic modelling techniques for SRL.
null
https://arxiv.org/abs/2507.02913v1
https://arxiv.org/pdf/2507.02913v1.pdf
null
[ "Andrew Schwabe", "Özgür Akgün", "Ella Haig" ]
[]
2025-06-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/just-enough-shifts-mitigating-over-refusal-in
2507.04250
null
null
Just Enough Shifts: Mitigating Over-Refusal in Aligned Language Models with Targeted Representation Fine-Tuning
Safety alignment is crucial for large language models (LLMs) to resist malicious instructions but often results in over-refusals, where benign prompts are unnecessarily rejected, impairing user experience and model utility. We introduce ACTOR (Activation-Based Training for Over-Refusal Reduction), a robust and compute- and data-efficient training framework that minimizes over-refusals by leveraging internal activation patterns from diverse queries. ACTOR precisely identifies and adjusts the activation components that trigger refusals, providing stronger control over the refusal mechanism. By fine-tuning only a single model layer, ACTOR effectively reduces over-refusals across multiple benchmarks while maintaining the model's ability to handle harmful queries and preserve overall utility.
null
https://arxiv.org/abs/2507.04250v1
https://arxiv.org/pdf/2507.04250v1.pdf
null
[ "Mahavir Dabas", "Si Chen", "Charles Fleming", "Ming Jin", "Ruoxi Jia" ]
[ "Safety Alignment" ]
2025-07-06T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/query-based-adaptive-aggregation-for-multi
2507.03831
null
null
Query-Based Adaptive Aggregation for Multi-Dataset Joint Training Toward Universal Visual Place Recognition
Deep learning methods for Visual Place Recognition (VPR) have advanced significantly, largely driven by large-scale datasets. However, most existing approaches are trained on a single dataset, which can introduce dataset-specific inductive biases and limit model generalization. While multi-dataset joint training offers a promising solution for developing universal VPR models, divergences among training datasets can saturate limited information capacity in feature aggregation layers, leading to suboptimal performance. To address these challenges, we propose Query-based Adaptive Aggregation (QAA), a novel feature aggregation technique that leverages learned queries as reference codebooks to effectively enhance information capacity without significant computational or parameter complexity. We show that computing the Cross-query Similarity (CS) between query-level image features and reference codebooks provides a simple yet effective way to generate robust descriptors. Our results demonstrate that QAA outperforms state-of-the-art models, achieving balanced generalization across diverse datasets while maintaining peak performance comparable to dataset-specific models. Ablation studies further explore QAA's mechanisms and scalability. Visualizations reveal that the learned queries exhibit diverse attention patterns across datasets. Code will be publicly released.
null
https://arxiv.org/abs/2507.03831v1
https://arxiv.org/pdf/2507.03831v1.pdf
null
[ "Jiuhong Xiao", "Yang Zhou", "Giuseppe Loianno" ]
[ "Visual Place Recognition" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/structsense-a-task-agnostic-agentic-framework
2507.03674
null
null
STRUCTSENSE: A Task-Agnostic Agentic Framework for Structured Information Extraction with Human-In-The-Loop Evaluation and Benchmarking
The ability to extract structured information from unstructured sources-such as free-text documents and scientific literature-is critical for accelerating scientific discovery and knowledge synthesis. Large Language Models (LLMs) have demonstrated remarkable capabilities in various natural language processing tasks, including structured information extraction. However, their effectiveness often diminishes in specialized, domain-specific contexts that require nuanced understanding and expert-level domain knowledge. In addition, existing LLM-based approaches frequently exhibit poor transferability across tasks and domains, limiting their scalability and adaptability. To address these challenges, we introduce StructSense, a modular, task-agnostic, open-source framework for structured information extraction built on LLMs. StructSense is guided by domain-specific symbolic knowledge encoded in ontologies, enabling it to navigate complex domain content more effectively. It further incorporates agentic capabilities through self-evaluative judges that form a feedback loop for iterative refinement, and includes human-in-the-loop mechanisms to ensure quality and validation. We demonstrate that StructSense can overcome both the limitations of domain sensitivity and the lack of cross-task generalizability, as shown through its application to diverse neuroscience information extraction tasks.
The ability to extract structured information from unstructured sources-such as free-text documents and scientific literature-is critical for accelerating scientific discovery and knowledge synthesis.
https://arxiv.org/abs/2507.03674v1
https://arxiv.org/pdf/2507.03674v1.pdf
null
[ "Tek Raj Chhetri", "Yibei Chen", "Puja Trivedi", "Dorota Jarecka", "Saif Haobsh", "Patrick Ray", "Lydia Ng", "Satrajit S. Ghosh" ]
[ "Benchmarking", "Navigate", "scientific discovery" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-disentangled-stain-and-structural
2507.03923
null
null
Learning Disentangled Stain and Structural Representations for Semi-Supervised Histopathology Segmentation
Accurate gland segmentation in histopathology images is essential for cancer diagnosis and prognosis. However, significant variability in Hematoxylin and Eosin (H&E) staining and tissue morphology, combined with limited annotated data, poses major challenges for automated segmentation. To address this, we propose Color-Structure Dual-Student (CSDS), a novel semi-supervised segmentation framework designed to learn disentangled representations of stain appearance and tissue structure. CSDS comprises two specialized student networks: one trained on stain-augmented inputs to model chromatic variation, and the other on structure-augmented inputs to capture morphological cues. A shared teacher network, updated via Exponential Moving Average (EMA), supervises both students through pseudo-labels. To further improve label reliability, we introduce stain-aware and structure-aware uncertainty estimation modules that adaptively modulate the contribution of each student during training. Experiments on the GlaS and CRAG datasets show that CSDS achieves state-of-the-art performance in low-label settings, with Dice score improvements of up to 1.2% on GlaS and 0.7% on CRAG at 5% labeled data, and 0.7% and 1.4% at 10%. Our code and pre-trained models are available at https://github.com/hieuphamha19/CSDS.
Accurate gland segmentation in histopathology images is essential for cancer diagnosis and prognosis.
https://arxiv.org/abs/2507.03923v1
https://arxiv.org/pdf/2507.03923v1.pdf
null
[ "Ha-Hieu Pham", "Nguyen Lan Vi Vu", "Thanh-Huy Nguyen", "Ulas Bagci", "Min Xu", "Trung-Nghia Le", "Huy-Hieu Pham" ]
[ "Prognosis", "Segmentation" ]
2025-07-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/sv-drr-high-fidelity-novel-view-x-ray
2507.05148
null
null
SV-DRR: High-Fidelity Novel View X-Ray Synthesis Using Diffusion Model
X-ray imaging is a rapid and cost-effective tool for visualizing internal human anatomy. While multi-view X-ray imaging provides complementary information that enhances diagnosis, intervention, and education, acquiring images from multiple angles increases radiation exposure and complicates clinical workflows. To address these challenges, we propose a novel view-conditioned diffusion model for synthesizing multi-view X-ray images from a single view. Unlike prior methods, which are limited in angular range, resolution, and image quality, our approach leverages the Diffusion Transformer to preserve fine details and employs a weak-to-strong training strategy for stable high-resolution image generation. Experimental results demonstrate that our method generates higher-resolution outputs with improved control over viewing angles. This capability has significant implications not only for clinical applications but also for medical education and data extension, enabling the creation of diverse, high-quality datasets for training and analysis. Our code is available at GitHub.
X-ray imaging is a rapid and cost-effective tool for visualizing internal human anatomy.
https://arxiv.org/abs/2507.05148v1
https://arxiv.org/pdf/2507.05148v1.pdf
null
[ "Chun Xie", "Yuichi Yoshii", "Itaru Kitahara" ]
[ "Anatomy", "Image Generation" ]
2025-07-07T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/learning-robust-stereo-matching-in-the-wild
2507.04631
null
null
Learning Robust Stereo Matching in the Wild with Selective Mixture-of-Experts
Recently, learning-based stereo matching networks have advanced significantly. However, they often lack robustness and struggle to achieve impressive cross-domain performance due to domain shifts and imbalanced disparity distributions among diverse datasets. Leveraging Vision Foundation Models (VFMs) can intuitively enhance the model's robustness, but integrating such a model into stereo matching cost-effectively to fully realize their robustness remains a key challenge. To address this, we propose SMoEStereo, a novel framework that adapts VFMs for stereo matching through a tailored, scene-specific fusion of Low-Rank Adaptation (LoRA) and Mixture-of-Experts (MoE) modules. SMoEStereo introduces MoE-LoRA with adaptive ranks and MoE-Adapter with adaptive kernel sizes. The former dynamically selects optimal experts within MoE to adapt varying scenes across domains, while the latter injects inductive bias into frozen VFMs to improve geometric feature extraction. Importantly, to mitigate computational overhead, we further propose a lightweight decision network that selectively activates MoE modules based on input complexity, balancing efficiency with accuracy. Extensive experiments demonstrate that our method exhibits state-of-the-art cross-domain and joint generalization across multiple benchmarks without dataset-specific adaptation. The code is available at \textcolor{red}{https://github.com/cocowy1/SMoE-Stereo}.
To address this, we propose SMoEStereo, a novel framework that adapts VFMs for stereo matching through a tailored, scene-specific fusion of Low-Rank Adaptation (LoRA) and Mixture-of-Experts (MoE) modules.
https://arxiv.org/abs/2507.04631v1
https://arxiv.org/pdf/2507.04631v1.pdf
null
[ "Yun Wang", "Longguang Wang", "Chenghao Zhang", "Yongjian Zhang", "Zhanjie Zhang", "Ao Ma", "Chenyou Fan", "Tin Lun Lam", "Junjie Hu" ]
[ "Inductive Bias", "Mixture-of-Experts", "Stereo Matching" ]
2025-07-07T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Mixture of Experts", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Ensembling", "parent": null }, "name": "MoE", "source_title": "Equipping Computational Pathology Systems with Artifact Processing Pipelines: A Showcase for Computation and Performance Trade-offs", "source_url": "https://arxiv.org/abs/2403.07743v3" } ]
https://paperswithcode.com/paper/taming-anomalies-with-down-up-sampling
2507.03903
null
null
Taming Anomalies with Down-Up Sampling Networks: Group Center Preserving Reconstruction for 3D Anomaly Detection
Reconstruction-based methods have demonstrated very promising results for 3D anomaly detection. However, these methods face great challenges in handling high-precision point clouds due to the large scale and complex structure. In this study, a Down-Up Sampling Network (DUS-Net) is proposed to reconstruct high-precision point clouds for 3D anomaly detection by preserving the group center geometric structure. The DUS-Net first introduces a Noise Generation module to generate noisy patches, which facilitates the diversity of training data and strengthens the feature representation for reconstruction. Then, a Down-sampling Network~(Down-Net) is developed to learn an anomaly-free center point cloud from patches with noise injection. Subsequently, an Up-sampling Network (Up-Net) is designed to reconstruct high-precision point clouds by fusing multi-scale up-sampling features. Our method leverages group centers for construction, enabling the preservation of geometric structure and providing a more precise point cloud. Extensive experiments demonstrate the effectiveness of our proposed method, achieving state-of-the-art (SOTA) performance with an Object-level AUROC of 79.9% and 79.5%, and a Point-level AUROC of 71.2% and 84.7% on the Real3D-AD and Anomaly-ShapeNet datasets, respectively.
null
https://arxiv.org/abs/2507.03903v1
https://arxiv.org/pdf/2507.03903v1.pdf
null
[ "Hanzhe Liang", "Jie Zhang", "Tao Dai", "Linlin Shen", "Jinbao Wang", "Can Gao" ]
[ "3D Anomaly Detection", "Anomaly Detection" ]
2025-07-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/unsupervised-vehicle-re-identification-based
null
null
null
Unsupervised Vehicle Re-Identification Based on Cross-Style Semi-Supervised Pre-Training and Feature Cross-Division
Vehicle Re-Identification (Re-ID) based on Unsupervised Domain Adaptation (UDA) has shown promising performance. However, two main issues still exist: (1) existing methods that use Generative Adversarial Networks (GANs) for domain gap alleviation combine supervised learning with hard labels of the source domain, resulting in a mismatch between style transfer data and hard labels; (2) pseudo label assignment in the fine-tuning stage is solely determined by similarity measures of global features using clustering algorithms, leading to inevitable label noise in generated pseudo labels. To tackle these issues, this paper proposes an unsupervised vehicle re-identification framework based on cross-style semi-supervised pre-training and feature cross-division. The framework consists of two parts: cross-style semi-supervised pre-training (CSP) and feature cross-division (FCD) for model fine-tuning. The CSP module generates style transfer data containing source domain content and target domain style using a style transfer network, and then pre-trains the model in a semi-supervised manner using both source domain and style transfer data. A pseudo-label reassignment strategy is designed to generate soft labels assigned to the style transfer data. The FCD module obtains feature partitions through a novel interactive division to reduce the dependence of pseudo-labels on global features, and the final similarity measurement combines the results of partition features and global features. Experimental results on the VehicleID and VeRi-776 datasets show that the proposed method outperforms existing unsupervised vehicle re-identification methods. Compared with the last best method on each dataset, the method proposed in this paper improves the mAP by 0.63% and the Rank-1 by 0.73% on the three sub-datasets of VehicleID on average, and it improves mAP by 0.9% and Rank-1 by 1% on VeRi-776 dataset.
null
https://www.mdpi.com/2079-9292/12/13/2931
https://www.mdpi.com/2079-9292/12/13/2931/pdf
Electronics 2023 7
[ "Zhan G", "Wang Q", "Min W", "Han Q", "Zhao H", "Wei Z" ]
[ "Domain Adaptation", "Pseudo Label", "Style Transfer", "Unsupervised Domain Adaptation", "Unsupervised Vehicle Re-Identification", "Vehicle Re-Identification" ]
2023-07-03T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/latent-thermodynamic-flows-unified
2507.03174
null
null
Latent Thermodynamic Flows: Unified Representation Learning and Generative Modeling of Temperature-Dependent Behaviors from Limited Data
Accurate characterization of the equilibrium distributions of complex molecular systems and their dependence on environmental factors such as temperature is essential for understanding thermodynamic properties and transition mechanisms. Projecting these distributions onto meaningful low-dimensional representations enables interpretability and downstream analysis. Recent advances in generative AI, particularly flow models such as Normalizing Flows (NFs), have shown promise in modeling such distributions, but their scope is limited without tailored representation learning. In this work, we introduce Latent Thermodynamic Flows (LaTF), an end-to-end framework that tightly integrates representation learning and generative modeling. LaTF unifies the State Predictive Information Bottleneck (SPIB) with NFs to simultaneously learn low-dimensional latent representations, referred to as Collective Variables (CVs), classify metastable states, and generate equilibrium distributions across temperatures beyond the training data. The two components of representation learning and generative modeling are optimized jointly, ensuring that the learned latent features capture the system's slow, important degrees of freedom while the generative model accurately reproduces the system's equilibrium behavior. We demonstrate LaTF's effectiveness across diverse systems, including a model potential, the Chignolin protein, and cluster of Lennard Jones particles, with thorough evaluations and benchmarking using multiple metrics and extensive simulations. Finally, we apply LaTF to a RNA tetraloop system, where despite using simulation data from only two temperatures, LaTF reconstructs the temperature-dependent structural ensemble and melting behavior, consistent with experimental and prior extensive computational results.
Finally, we apply LaTF to a RNA tetraloop system, where despite using simulation data from only two temperatures, LaTF reconstructs the temperature-dependent structural ensemble and melting behavior, consistent with experimental and prior extensive computational results.
https://arxiv.org/abs/2507.03174v1
https://arxiv.org/pdf/2507.03174v1.pdf
null
[ "Yunrui Qiu", "Richard John", "Lukas Herron", "Pratyush Tiwary" ]
[ "Benchmarking", "Representation Learning" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9", "description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.", "full_name": "Normalizing Flows", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.", "name": "Distribution Approximation", "parent": null }, "name": "Normalizing Flows", "source_title": "Variational Inference with Normalizing Flows", "source_url": "http://arxiv.org/abs/1505.05770v6" } ]
https://paperswithcode.com/paper/exploring-remote-physiological-signal
2507.04306
null
null
Exploring Remote Physiological Signal Measurement under Dynamic Lighting Conditions at Night: Dataset, Experiment, and Analysis
Remote photoplethysmography (rPPG) is a non-contact technique for measuring human physiological signals. Due to its convenience and non-invasiveness, it has demonstrated broad application potential in areas such as health monitoring and emotion recognition. In recent years, the release of numerous public datasets has significantly advanced the performance of rPPG algorithms under ideal lighting conditions. However, the effectiveness of current rPPG methods in realistic nighttime scenarios with dynamic lighting variations remains largely unknown. Moreover, there is a severe lack of datasets specifically designed for such challenging environments, which has substantially hindered progress in this area of research. To address this gap, we present and release a large-scale rPPG dataset collected under dynamic lighting conditions at night, named DLCN. The dataset comprises approximately 13 hours of video data and corresponding synchronized physiological signals from 98 participants, covering four representative nighttime lighting scenarios. DLCN offers high diversity and realism, making it a valuable resource for evaluating algorithm robustness in complex conditions. Built upon the proposed Happy-rPPG Toolkit, we conduct extensive experiments and provide a comprehensive analysis of the challenges faced by state-of-the-art rPPG methods when applied to DLCN. The dataset and code are publicly available at https://github.com/dalaoplan/Happp-rPPG-Toolkit.
To address this gap, we present and release a large-scale rPPG dataset collected under dynamic lighting conditions at night, named DLCN.
https://arxiv.org/abs/2507.04306v1
https://arxiv.org/pdf/2507.04306v1.pdf
null
[ "Zhipeng Li", "Kegang Wang", "Hanguang Xiao", "Xingyue Liu", "Feizhong Zhou", "Jiaxin Jiang", "Tianqi Liu" ]
[ "Emotion Recognition" ]
2025-07-06T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/differential-attention-for-multimodal-crisis
2507.05165
null
null
Differential Attention for Multimodal Crisis Event Analysis
Social networks can be a valuable source of information during crisis events. In particular, users can post a stream of multimodal data that can be critical for real-time humanitarian response. However, effectively extracting meaningful information from this large and noisy data stream and effectively integrating heterogeneous data remains a formidable challenge. In this work, we explore vision language models (VLMs) and advanced fusion strategies to enhance the classification of crisis data in three different tasks. We incorporate LLaVA-generated text to improve text-image alignment. Additionally, we leverage Contrastive Language-Image Pretraining (CLIP)-based vision and text embeddings, which, without task-specific fine-tuning, outperform traditional models. To further refine multimodal fusion, we employ Guided Cross Attention (Guided CA) and combine it with the Differential Attention mechanism to enhance feature alignment by emphasizing critical information while filtering out irrelevant content. Our results show that while Differential Attention improves classification performance, Guided CA remains highly effective in aligning multimodal features. Extensive experiments on the CrisisMMD benchmark data set demonstrate that the combination of pretrained VLMs, enriched textual descriptions, and adaptive fusion strategies consistently outperforms state-of-the-art models in classification accuracy, contributing to more reliable and interpretable models for three different tasks that are crucial for disaster response. Our code is available at https://github.com/Munia03/Multimodal_Crisis_Event.
To further refine multimodal fusion, we employ Guided Cross Attention (Guided CA) and combine it with the Differential Attention mechanism to enhance feature alignment by emphasizing critical information while filtering out irrelevant content.
https://arxiv.org/abs/2507.05165v1
https://arxiv.org/pdf/2507.05165v1.pdf
null
[ "Nusrat Munia", "Junfeng Zhu", "Olfa Nasraoui", "Abdullah-Al-Zubaer Imran" ]
[ "Disaster Response", "Humanitarian" ]
2025-07-07T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/image-to-image-domain-adaptation-for-vehicle
null
null
null
Image-to-image domain adaptation for vehicle re-identification
Cross-domain vehicle re-identification (ReID) is an interesting but challenging task in computer vision. A ReID model well-trained on one dataset often experiences a severe performance drop when applied to another dataset due to the domain discrepancy between the different datasets. This is especially true for low-resolution images. In this paper, we present a vehicle image domain adaptation framework (VDAF) which contains a single-image super resolution network (SISR) and a vehicle transfer generative adversarial network (VTGAN). SISR is an enhancement task for mapping low-resolution (LR) images to high-resolution (HR) images. Based on the reconstructed HR images, VTGAN can translate vehicle images from a source domain to a target domain with consistent styles and identities. VTGAN is an unsupervised approach designed for source-target translation for vehicle ReID and is composed of two adversarial networks and one Siamese network. Based on the translated images, we can infer an enhanced vehicle representation free of influences from style variations, allowing distance metrics for vehicle ReID to be learned. Through extensive experiments on the VeRi, VehicleID, and VRIC datasets, we show that images translated by VTGAN are effective for domain adaptation and are superior at promoting the accuracy of vehicle ReID.
null
https://link.springer.com/article/10.1007/s11042-023-14839-7
https://link.springer.com/article/10.1007/s11042-023-14839-7
Multimed Tools Appl 2023 3
[ "Fukai Zhang", "Lulu Zhang", "Haiyan Zhang", "Yongqiang Ma" ]
[ "Domain Adaptation", "Generative Adversarial Network", "Image Super-Resolution", "Super-Resolution", "Unsupervised Domain Adaptation", "Vehicle Re-Identification" ]
2023-03-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/progressive-learning-with-multi-scale
null
null
null
Progressive learning with multi-scale attention network for cross-domain vehicle re-identification
Vehicle re-identification (reID) aims to identify vehicles across different cameras that have non-overlapping views. Most existing vehicle reID approaches train the reID model with well-labeled datasets via a supervised manner, which inevitably causes a severe drop in performance when tested in an unknown domain. Moreover, these supervised approaches require full annotations, which is limiting owing to the amount of unlabeled data. Therefore, with the aim of addressing the aforementioned problems, unsupervised vehicle reID models have attracted considerable attention. It always adopts domain adaptation to transfer discriminative information from supervised domains to unsupervised ones. In this paper, a novel progressive learning method with a multi-scale fusion network is proposed, named PLM, for vehicle reID in the unknown domain, which directly exploits inference from the available abundant data without any annotations. For PLM, a domain adaptation module is employed to smooth the domain bias, which generates images with similar data distribution to unlabeled target domain as “pseudo target samples”. Furthermore, to better exploit the distinct features of vehicle images in the unknown domain, a multi-scale attention network is proposed to train the reID model with the “pseudo target samples” and unlabeled samples; this network embeds low-layer texture features with high-level semantic features to train the reID model. Moreover, a weighted label smoothing (WLS) loss is proposed, which considers the distance between samples and different clusters to balance the confidence of pseudo labels in the feature learning module. Extensive experiments are carried out to verify that our proposed PLM achieves excellent performance on several benchmark datasets.
null
https://link.springer.com/article/10.1007/s11432-021-3383-y
https://link.springer.com/article/10.1007/s11432-021-3383-y.pdf
Sci. China Inf. Sci 2021 11
[ "Yang Wang", "Jinjia Peng", "Huibing Wang", "Meng Wang" ]
[ "Domain Adaptation", "Unsupervised Domain Adaptation", "Vehicle Re-Identification" ]
2021-11-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/rag-r1-incentivize-the-search-and-reasoning
2507.02962
null
null
RAG-R1 : Incentivize the Search and Reasoning Capabilities of LLMs through Multi-query Parallelism
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, while they remain prone to generating hallucinated or outdated responses due to their static internal knowledge. Recent advancements in Retrieval-Augmented Generation (RAG) methods have explored enhancing models' search and reasoning capabilities through reinforcement learning (RL). Although these methods demonstrate promising results, they face challenges in training stability and encounter issues such as substantial inference time and restricted capabilities due to the single-query mode. In this paper, we propose RAG-R1, a novel training framework designed to enable LLMs to adaptively leverage internal and external knowledge during the reasoning process. We further expand the generation and retrieval processes within the framework from single-query mode to multi-query parallelism, aimed at reducing inference time and enhancing the model's capabilities. Extensive experiments on seven question-answering benchmarks demonstrate that our method outperforms the strongest baseline by up to 13.2% and decreases inference time by 11.1%.
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, while they remain prone to generating hallucinated or outdated responses due to their static internal knowledge.
https://arxiv.org/abs/2507.02962v1
https://arxiv.org/pdf/2507.02962v1.pdf
null
[ "Zhiwen Tan", "Jiaming Huang", "Qintong Wu", "Hongxuan Zhang", "Chenyi Zhuang", "Jinjie Gu" ]
[ "Question Answering", "RAG", "Reinforcement Learning (RL)", "Retrieval", "Retrieval-augmented Generation" ]
2025-06-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/latent-chain-of-thought-decoding-the-depth
2507.02199
null
null
Latent Chain-of-Thought? Decoding the Depth-Recurrent Transformer
Chain-of-thought (CoT) reasoning has enabled transformer-based language models to excel at complex mathematics and multi-step planning. However, in standard decoder-only architectures, these reasoning steps are externalized in natural language, improving interpretability at the cost of efficiency. To capture reasoning that is not easily represented in words, many works have explored recurrent architectures that aim to internalize reasoning in latent space, potentially supporting latent CoT. In this paper, we investigate whether such reasoning structures emerge in Huginn-3.5B, a depth-recurrent Transformer that reuses layers at inference time without increasing parameter count. We examine the model's internal behavior on arithmetic tasks using a suite of probing techniques including the Logit Lens and Coda Lens. Our findings reveal limited evidence of interpretable latent CoT by tracking rank trajectories of final and intermediate result tokens. Furthermore, we uncover significant probing inconsistencies across recurrent blocks, where the interpretability of hidden states depends heavily on both the layer index and the decoding method. Finally, we empirically show that increasing recurrence depth yields only marginal gains and falls well short of models that explicitly externalize reasoning steps. The code is available at https://github.com/wenquanlu/huginn-latent-cot.
To capture reasoning that is not easily represented in words, many works have explored recurrent architectures that aim to internalize reasoning in latent space, potentially supporting latent CoT.
https://arxiv.org/abs/2507.02199v1
https://arxiv.org/pdf/2507.02199v1.pdf
null
[ "Wenquan Lu", "Yuechuan Yang", "Kyle Lee", "Yanshu Li", "Enqi Liu" ]
[]
2025-07-02T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" } ]
https://paperswithcode.com/paper/clip-guided-backdoor-defense-through-entropy
2507.05113
null
null
CLIP-Guided Backdoor Defense through Entropy-Based Poisoned Dataset Separation
Deep Neural Networks (DNNs) are susceptible to backdoor attacks, where adversaries poison training data to implant backdoor into the victim model. Current backdoor defenses on poisoned data often suffer from high computational costs or low effectiveness against advanced attacks like clean-label and clean-image backdoors. To address them, we introduce CLIP-Guided backdoor Defense (CGD), an efficient and effective method that mitigates various backdoor attacks. CGD utilizes a publicly accessible CLIP model to identify inputs that are likely to be clean or poisoned. It then retrains the model with these inputs, using CLIP's logits as a guidance to effectively neutralize the backdoor. Experiments on 4 datasets and 11 attack types demonstrate that CGD reduces attack success rates (ASRs) to below 1% while maintaining clean accuracy (CA) with a maximum drop of only 0.3%, outperforming existing defenses. Additionally, we show that clean-data-based defenses can be adapted to poisoned data using CGD. Also, CGD exhibits strong robustness, maintaining low ASRs even when employing a weaker CLIP model or when CLIP itself is compromised by a backdoor. These findings underscore CGD's exceptional efficiency, effectiveness, and applicability for real-world backdoor defense scenarios. Code: https://github.com/binyxu/CGD.
CGD utilizes a publicly accessible CLIP model to identify inputs that are likely to be clean or poisoned.
https://arxiv.org/abs/2507.05113v1
https://arxiv.org/pdf/2507.05113v1.pdf
null
[ "Binyan Xu", "Fan Yang", "Xilin Dai", "Di Tang", "Kehuan Zhang" ]
[ "backdoor defense" ]
2025-07-07T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/OpenAI/CLIP", "description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)", "full_name": "Contrastive Language-Image Pre-training", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Representations", "parent": null }, "name": "CLIP", "source_title": "Learning Transferable Visual Models From Natural Language Supervision", "source_url": "https://arxiv.org/abs/2103.00020v1" } ]
https://paperswithcode.com/paper/stochastic-human-motion-prediction-with-1
2507.04062
null
null
Stochastic Human Motion Prediction with Memory of Action Transition and Action Characteristic
Action-driven stochastic human motion prediction aims to generate future motion sequences of a pre-defined target action based on given past observed sequences performing non-target actions. This task primarily presents two challenges. Firstly, generating smooth transition motions is hard due to the varying transition speeds of different actions. Secondly, the action characteristic is difficult to be learned because of the similarity of some actions. These issues cause the predicted results to be unreasonable and inconsistent. As a result, we propose two memory banks, the Soft-transition Action Bank (STAB) and Action Characteristic Bank (ACB), to tackle the problems above. The STAB stores the action transition information. It is equipped with the novel soft searching approach, which encourages the model to focus on multiple possible action categories of observed motions. The ACB records action characteristic, which produces more prior information for predicting certain actions. To fuse the features retrieved from the two banks better, we further propose the Adaptive Attention Adjustment (AAA) strategy. Extensive experiments on four motion prediction datasets demonstrate that our approach consistently outperforms the previous state-of-the-art. The demo and code are available at https://hyqlat.github.io/STABACB.github.io/.
The STAB stores the action transition information.
https://arxiv.org/abs/2507.04062v1
https://arxiv.org/pdf/2507.04062v1.pdf
CVPR 2025 1
[ "Jianwei Tang", "Hong Yang", "Tengyue Chen", "Jian-Fang Hu" ]
[ "Human motion prediction", "motion prediction", "Stochastic Human Motion Prediction" ]
2025-07-05T00:00:00
http://openaccess.thecvf.com//content/CVPR2025/html/Tang_Stochastic_Human_Motion_Prediction_with_Memory_of_Action_Transition_and_CVPR_2025_paper.html
http://openaccess.thecvf.com//content/CVPR2025/papers/Tang_Stochastic_Human_Motion_Prediction_with_Memory_of_Action_Transition_and_CVPR_2025_paper.pdf
stochastic-human-motion-prediction-with
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/model-inversion-attacks-on-llama-3-extracting
2507.04478
null
null
Model Inversion Attacks on Llama 3: Extracting PII from Large Language Models
Large language models (LLMs) have transformed natural language processing, but their ability to memorize training data poses significant privacy risks. This paper investigates model inversion attacks on the Llama 3.2 model, a multilingual LLM developed by Meta. By querying the model with carefully crafted prompts, we demonstrate the extraction of personally identifiable information (PII) such as passwords, email addresses, and account numbers. Our findings highlight the vulnerability of even smaller LLMs to privacy attacks and underscore the need for robust defenses. We discuss potential mitigation strategies, including differential privacy and data sanitization, and call for further research into privacy-preserving machine learning techniques.
null
https://arxiv.org/abs/2507.04478v1
https://arxiv.org/pdf/2507.04478v1.pdf
null
[ "Sathesh P. Sivashanmugam" ]
[ "Privacy Preserving" ]
2025-07-06T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**LLaMA** is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture with various improvements that were subsequently proposed. The main difference with the original architecture are listed below.\r\n\r\n- RMSNorm normalizing function is used to improve the training stability, by normalizing the input of each transformer sub-layer, instead of normalizing the output.\r\n- The ReLU non-linearity is replaced by the SwiGLU activation function to improve performance.\r\n- Absolute positional embeddings are removed and instead rotary positional embeddings (RoPE) are added at each layer of the network.", "full_name": "LLaMA", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "LLaMA", "source_title": "LLaMA: Open and Efficient Foundation Language Models", "source_url": "https://arxiv.org/abs/2302.13971v1" } ]
https://paperswithcode.com/paper/temporal-continual-learning-with-prior-1
2507.04060
null
v0GzRLvVp3
Temporal Continual Learning with Prior Compensation for Human Motion Prediction
Human Motion Prediction (HMP) aims to predict future poses at different moments according to past motion sequences. Previous approaches have treated the prediction of various moments equally, resulting in two main limitations: the learning of short-term predictions is hindered by the focus on long-term predictions, and the incorporation of prior information from past predictions into subsequent predictions is limited. In this paper, we introduce a novel multi-stage training framework called Temporal Continual Learning (TCL) to address the above challenges. To better preserve prior information, we introduce the Prior Compensation Factor (PCF). We incorporate it into the model training to compensate for the lost prior information. Furthermore, we derive a more reasonable optimization objective through theoretical derivation. It is important to note that our TCL framework can be easily integrated with different HMP backbone models and adapted to various datasets and applications. Extensive experiments on four HMP benchmark datasets demonstrate the effectiveness and flexibility of TCL. The code is available at https://github.com/hyqlat/TCL.
Extensive experiments on four HMP benchmark datasets demonstrate the effectiveness and flexibility of TCL.
https://arxiv.org/abs/2507.04060v1
https://arxiv.org/pdf/2507.04060v1.pdf
NeurIPS 2023 11
[ "Jianwei Tang", "Jiangxin Sun", "Xiaotong LIN", "Lifang Zhang", "Wei-Shi Zheng", "Jian-Fang Hu" ]
[ "Continual Learning", "Human motion prediction", "motion prediction" ]
2025-07-05T00:00:00
https://openreview.net/forum?id=v0GzRLvVp3
https://openreview.net/pdf?id=v0GzRLvVp3
temporal-continual-learning-with-prior
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/presentagent-multimodal-agent-for
2507.04036
null
null
PresentAgent: Multimodal Agent for Presentation Video Generation
We present PresentAgent, a multimodal agent that transforms long-form documents into narrated presentation videos. While existing approaches are limited to generating static slides or text summaries, our method advances beyond these limitations by producing fully synchronized visual and spoken content that closely mimics human-style presentations. To achieve this integration, PresentAgent employs a modular pipeline that systematically segments the input document, plans and renders slide-style visual frames, generates contextual spoken narration with large language models and Text-to-Speech models, and seamlessly composes the final video with precise audio-visual alignment. Given the complexity of evaluating such multimodal outputs, we introduce PresentEval, a unified assessment framework powered by Vision-Language Models that comprehensively scores videos across three critical dimensions: content fidelity, visual clarity, and audience comprehension through prompt-based evaluation. Our experimental validation on a curated dataset of 30 document-presentation pairs demonstrates that PresentAgent approaches human-level quality across all evaluation metrics. These results highlight the significant potential of controllable multimodal agents in transforming static textual materials into dynamic, effective, and accessible presentation formats. Code will be available at https://github.com/AIGeeksGroup/PresentAgent.
We present PresentAgent, a multimodal agent that transforms long-form documents into narrated presentation videos.
https://arxiv.org/abs/2507.04036v1
https://arxiv.org/pdf/2507.04036v1.pdf
null
[ "Jingwei Shi", "Zeyu Zhang", "Biao Wu", "Yanjie Liang", "Meng Fang", "Ling Chen", "Yang Zhao" ]
[ "text-to-speech", "Text to Speech", "Video Generation" ]
2025-07-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/detection-of-rail-line-track-and-human-beings
2507.03040
null
null
Detection of Rail Line Track and Human Beings Near the Track to Avoid Accidents
This paper presents an approach for rail line detection and the identification of human beings in proximity to the track, utilizing the YOLOv5 deep learning model to mitigate potential accidents. The technique incorporates real-time video data to identify railway tracks with impressive accuracy and recognizes nearby moving objects within a one-meter range, specifically targeting the identification of humans. This system aims to enhance safety measures in railway environments by providing real-time alerts for any detected human presence close to the track. The integration of a functionality to identify objects at a longer distance further fortifies the preventative capabilities of the system. With a precise focus on real-time object detection, this method is poised to deliver significant contributions to the existing technologies in railway safety. The effectiveness of the proposed method is demonstrated through a comprehensive evaluation, yielding a remarkable improvement in accuracy over existing methods. These results underscore the potential of this approach to revolutionize safety measures in railway environments, providing a substantial contribution to accident prevention strategies.
null
https://arxiv.org/abs/2507.03040v1
https://arxiv.org/pdf/2507.03040v1.pdf
null
[ "Mehrab Hosain", "Rajiv Kapoor" ]
[ "Line Detection", "object-detection", "Object Detection", "Real-Time Object Detection" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/regulation-compliant-ai-for-fusion-real-time
2507.02897
null
null
Regulation Compliant AI for Fusion: Real-Time Image Analysis-Based Control of Divertor Detachment in Tokamaks
While artificial intelligence (AI) has been promising for fusion control, its inherent black-box nature will make compliant implementation in regulatory environments a challenge. This study implements and validates a real-time AI enabled linear and interpretable control system for successful divertor detachment control with the DIII-D lower divertor camera. Using D2 gas, we demonstrate feedback divertor detachment control with a mean absolute difference of 2% from the target for both detachment and reattachment. This automatic training and linear processing framework can be extended to any image based diagnostic for regulatory compliant controller necessary for future fusion reactors.
While artificial intelligence (AI) has been promising for fusion control, its inherent black-box nature will make compliant implementation in regulatory environments a challenge.
https://arxiv.org/abs/2507.02897v1
https://arxiv.org/pdf/2507.02897v1.pdf
null
[ "Nathaniel Chen", "Cheolsik Byun", "Azarakash Jalalvand", "SangKyeun Kim", "Andrew Rothstein", "Filippo Scotti", "Steve Allen", "David Eldon", "Keith Erickson", "Egemen Kolemen" ]
[ "Diagnostic" ]
2025-06-21T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/losia-efficient-high-rank-fine-tuning-via
2507.04487
null
null
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization
Parameter-Efficient Fine-Tuning (PEFT) methods, such as LoRA, significantly reduce the number of trainable parameters by introducing low-rank decomposition matrices. However, existing methods perform extensive matrix multiplications in domain specialization tasks, resulting in computational inefficiency and sub-optimal fine-tuning performance. Hence, we propose LoSiA (Low-Resources Subnet Integration Adaptation), an innovative method that dynamically localizes and optimizes critical parameters during the training process. Specifically, it identifies a sub-network using gradient sparsity analysis and optimizes it as the trainable target. This design enables effective high-rank adaptation by updating only the sub-network parameters, reducing the additional matrix multiplication. We also present LoSiA-Pro, a faster implementation of LoSiA, which reduces the training latency by about $27\%$ compared to LoRA. Extensive evaluations show that our method achieves minimal performance drop compared to full fine-tuning, while requiring the least training time across domain specialization and common-sense reasoning tasks. Further analysis shows that LoSiA also reduces forgetting during continued training.
We also present LoSiA-Pro, a faster implementation of LoSiA, which reduces the training latency by about $27\%$ compared to LoRA.
https://arxiv.org/abs/2507.04487v1
https://arxiv.org/pdf/2507.04487v1.pdf
null
[ "Xujia Wang. Yunjia Qi", "Bin Xu" ]
[ "Common Sense Reasoning", "parameter-efficient fine-tuning" ]
2025-07-06T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fa-forced-prompt-learning-of-vision-language
2507.04511
null
null
FA: Forced Prompt Learning of Vision-Language Models for Out-of-Distribution Detection
Pre-trained vision-language models (VLMs) have advanced out-of-distribution (OOD) detection recently. However, existing CLIP-based methods often focus on learning OOD-related knowledge to improve OOD detection, showing limited generalization or reliance on external large-scale auxiliary datasets. In this study, instead of delving into the intricate OOD-related knowledge, we propose an innovative CLIP-based framework based on Forced prompt leArning (FA), designed to make full use of the In-Distribution (ID) knowledge and ultimately boost the effectiveness of OOD detection. Our key insight is to learn a prompt (i.e., forced prompt) that contains more diversified and richer descriptions of the ID classes beyond the textual semantics of class labels. Specifically, it promotes better discernment for ID images, by forcing more notable semantic similarity between ID images and the learnable forced prompt. Moreover, we introduce a forced coefficient, encouraging the forced prompt to learn more comprehensive and nuanced descriptions of the ID classes. In this way, FA is capable of achieving notable improvements in OOD detection, even when trained without any external auxiliary datasets, while maintaining an identical number of trainable parameters as CoOp. Extensive empirical evaluations confirm our method consistently outperforms current state-of-the-art methods. Code is available at https://github.com/0xFAFA/FA.
In this study, instead of delving into the intricate OOD-related knowledge, we propose an innovative CLIP-based framework based on Forced prompt leArning (FA), designed to make full use of the In-Distribution (ID) knowledge and ultimately boost the effectiveness of OOD detection.
https://arxiv.org/abs/2507.04511v1
https://arxiv.org/pdf/2507.04511v1.pdf
null
[ "Xinhua Lu", "Runhe Lai", "Yanqi Wu", "Kanghao Chen", "Wei-Shi Zheng", "Ruixuan Wang" ]
[ "Out-of-Distribution Detection", "Out of Distribution (OOD) Detection", "Prompt Learning", "Semantic Similarity", "Semantic Textual Similarity" ]
2025-07-06T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "", "full_name": "Feedback Alignment", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "FA", "source_title": "Random feedback weights support learning in deep neural networks", "source_url": "http://arxiv.org/abs/1411.0247v1" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" }, { "code_snippet_url": "", "description": "**CoOp**, or **Context Optimization**, is an automated prompt engineering method that avoids manual prompt tuning by modeling context words with continuous vectors that are end-to-end learned from data. The context could be shared among all classes or designed to be class-specific. During training, we simply minimize the prediction error using the cross-entropy loss with respect to the learnable context vectors, while keeping the pre-trained parameters fixed. The gradients can be back-propagated all the way through the text encoder, distilling the rich knowledge encoded in the parameters for learning task-relevant context.", "full_name": "Context Optimization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Prompt engineering is a practice of creating a large number of prompts to more efficiently extract information from Language Models. ", "name": "Prompt Engineering", "parent": null }, "name": "CoOp", "source_title": "Learning to Prompt for Vision-Language Models", "source_url": "https://arxiv.org/abs/2109.01134v6" } ]
https://paperswithcode.com/paper/flow-anchored-consistency-models
2507.03738
null
null
Flow-Anchored Consistency Models
Continuous-time Consistency Models (CMs) promise efficient few-step generation but face significant challenges with training instability. We argue this instability stems from a fundamental conflict: by training a network to learn only a shortcut across a probability flow, the model loses its grasp on the instantaneous velocity field that defines the flow. Our solution is to explicitly anchor the model in the underlying flow during training. We introduce the Flow-Anchored Consistency Model (FACM), a simple but effective training strategy that uses a Flow Matching (FM) task as an anchor for the primary CM shortcut objective. This Flow-Anchoring approach requires no architectural modifications and is broadly compatible with standard model architectures. By distilling a pre-trained LightningDiT model, our method achieves a state-of-the-art FID of 1.32 with two steps (NFE=2) and 1.76 with just one step (NFE=1) on ImageNet 256x256, significantly outperforming previous methods. This provides a general and effective recipe for building high-performance, few-step generative models. Our code and pretrained models: https://github.com/ali-vilab/FACM.
We introduce the Flow-Anchored Consistency Model (FACM), a simple but effective training strategy that uses a Flow Matching (FM) task as an anchor for the primary CM shortcut objective.
https://arxiv.org/abs/2507.03738v1
https://arxiv.org/pdf/2507.03738v1.pdf
null
[ "Yansong Peng", "Kai Zhu", "Yu Liu", "Pingyu Wu", "Hebei Li", "Xiaoyan Sun", "Feng Wu" ]
[ "Image Generation" ]
2025-07-04T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "", "full_name": "Consistency Models", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Diffusion Models", "parent": null }, "name": "Consistency Models", "source_title": "Consistency Models", "source_url": "https://arxiv.org/abs/2303.01469v2" } ]
https://paperswithcode.com/paper/real-world-en-call-center-transcripts-dataset
2507.02958
null
null
Real-World En Call Center Transcripts Dataset with PII Redaction
We introduce CallCenterEN, a large-scale (91,706 conversations, corresponding to 10448 audio hours), real-world English call center transcript dataset designed to support research and development in customer support and sales AI systems. This is the largest release to-date of open source call center transcript data of this kind. The dataset includes inbound and outbound calls between agents and customers, with accents from India, the Philippines and the United States. The dataset includes high-quality, PII-redacted human-readable transcriptions. All personally identifiable information (PII) has been rigorously removed to ensure compliance with global data protection laws. The audio is not included in the public release due to biometric privacy concerns. Given the scarcity of publicly available real-world call center datasets, CallCenterEN fills a critical gap in the landscape of available ASR corpora, and is released under a CC BY-NC 4.0 license for non-commercial research use.
Given the scarcity of publicly available real-world call center datasets, CallCenterEN fills a critical gap in the landscape of available ASR corpora, and is released under a CC BY-NC 4.0 license for non-commercial research use.
https://arxiv.org/abs/2507.02958v1
https://arxiv.org/pdf/2507.02958v1.pdf
null
[ "Ha Dao", "Gaurav Chawla", "Raghu Banda", "Caleb DeLeeuw" ]
[ "PII Redaction" ]
2025-06-30T00:00:00
null
null
null
null
[]