Dataset schema (column: type and observed value/length range):

paper_url: string (length 35-81)
arxiv_id: string (length 6-35)
nips_id: float64
openreview_id: string (length 9-93)
title: string (length 1-1.02k)
abstract: string (length 0-56.5k)
short_abstract: string (length 0-1.95k)
url_abs: string (length 16-996)
url_pdf: string (length 16-996)
proceeding: string (length 7-1.03k)
authors: list (length 0-3.31k)
tasks: list (length 0-147)
date: timestamp[ns] (range 1951-09-01 to 2222-12-22)
conference_url_abs: string (length 16-199)
conference_url_pdf: string (length 21-200)
conference: string (length 2-47)
reproduces_paper: string (22 classes)
methods: list (length 0-7.5k)
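The records below follow this schema. As a minimal sketch (assumed tooling, not part of the dump itself), the snippet shows how such rows could be loaded and inspected with the Hugging Face `datasets` library; the dataset identifier is a placeholder, while the field names come from the schema above.

```python
# Minimal sketch: load and peek at records with the schema listed above.
# The dataset identifier is hypothetical -- substitute the actual repository name.
from datasets import load_dataset

ds = load_dataset("some-org/paperswithcode-metadata", split="train")  # hypothetical ID

for row in ds.select(range(3)):                     # first few records
    print(row["title"])
    print(row["arxiv_id"], row["date"])
    print(len(row["authors"]), "authors;", len(row["tasks"]), "tasks;",
          len(row["methods"]), "linked methods")
```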
https://paperswithcode.com/paper/joint-spectrum-sensing-and-resource-1
2506.13008
null
null
Joint Spectrum Sensing and Resource Allocation for OFDMA-based Underwater Acoustic Communications
Underwater acoustic (UWA) communications generally rely on cognitive radio (CR)-based ad-hoc networks due to challenges such as long propagation delay, limited channel resources, and high attenuation. To address the constraints of limited frequency resources, UWA communications have recently incorporated orthogonal frequency division multiple access (OFDMA), significantly enhancing spectral efficiency (SE) through multiplexing gains. Still, the low propagation speed of UWA signals, combined with the dynamic underwater environment, creates asynchrony in multiple access scenarios. This causes inaccurate spectrum sensing as inter-carrier interference (ICI) increases, which leads to difficulties in resource allocation. As efficient resource allocation is essential for achieving high-quality communication in OFDMA-based CR networks, these challenges degrade communication reliability in UWA systems. To resolve the issue, we propose an end-to-end sensing and resource optimization method using deep reinforcement learning (DRL) in an OFDMA-based UWA-CR network. Through extensive simulations, we confirm that the proposed method is superior to baseline schemes, outperforming other methods by 42.9% in SE and 4.4% in communication success rate.
null
https://arxiv.org/abs/2506.13008v1
https://arxiv.org/pdf/2506.13008v1.pdf
null
[ "Minwoo Kim", "Youngchol Choi", "Yeongjun Kim", "Eojin Seo", "Hyun Jong Yang" ]
[ "Deep Reinforcement Learning" ]
2025-06-16T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/interference-mitigation-in-star-ris-aided
2506.12964
null
null
Interference Mitigation in STAR-RIS-Aided Multi-User Networks with Statistical CSI
In this paper, we investigate real-time interference mitigation in multiuser wireless networks assisted by simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs). Unlike conventional methods that rely on instantaneous channel state information (CSI), we consider a practical scenario where only statistical CSI is available, and the STAR-RIS phase shifts are impaired by random phase errors modeled via the Von Mises distribution. To tackle the resulting nonconvex optimization problem induced by unit-modulus constraints and stochastic interference, we derive a closed-form approximation of the effective channel matrix using statistical expectations. We then reformulate the interference minimization problem as an unconstrained optimization over a Riemannian manifold and propose a conjugate gradient algorithm tailored to the complex circle manifold. The proposed solution enables efficient real-time computation of optimal phase shifts while accounting for hardware imperfections and limited CSI. Simulation results confirm that our method significantly suppresses inter-user interference and achieves superior SINR performance and convergence speed compared to conventional baselines.
null
https://arxiv.org/abs/2506.12964v1
https://arxiv.org/pdf/2506.12964v1.pdf
null
[ "Abuzar B. M. Adam", "Mohammed A. M. Elhassan", "Elhadj Moustapha Diallo", "Mohamed Amine Ouamri" ]
[]
2025-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/low-latency-terrestrial-interference
2506.12908
null
null
Low-Latency Terrestrial Interference Detection for Satellite-to-Device Communications
Direct satellite-to-device communication is a promising future direction due to its lower latency and enhanced efficiency. However, intermittent and unpredictable terrestrial interference significantly affects system reliability and performance. Continuously employing sophisticated interference mitigation techniques is practically inefficient. Motivated by the periodic idle intervals characteristic of burst-mode satellite transmissions, this paper investigates online interference detection frameworks specifically tailored for satellite-to-device scenarios. We first rigorously formulate interference detection as a binary hypothesis testing problem, leveraging differences between Rayleigh (no interference) and Rice (interference present) distributions. Then, we propose a cumulative sum (CUSUM)-based online detector for scenarios with known interference directions, explicitly characterizing the trade-off between detection latency and false alarm rate, and establish its asymptotic optimality. For practical scenarios involving unknown interference direction, we further propose a generalized likelihood ratio (GLR)-based detection method, jointly estimating interference direction via the Root-MUSIC algorithm. Numerical results validate our theoretical findings and demonstrate that our proposed methods achieve high detection accuracy with remarkably low latency, highlighting their practical applicability in future satellite-to-device communication systems.
null
https://arxiv.org/abs/2506.12908v1
https://arxiv.org/pdf/2506.12908v1.pdf
null
[ "Runnan Liu", "Weifeng Zhu", "Shu Sun", "Wenjun Zhang" ]
[]
2025-06-15T00:00:00
null
null
null
null
[]
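For readers unfamiliar with CUSUM-style online detection referenced in the abstract above, the sketch below accumulates per-sample log-likelihood ratios between the Rice (interference) and Rayleigh (no interference) hypotheses and raises an alarm once the statistic crosses a threshold. It is the textbook reset-to-zero form with assumed parameters, not the paper's exact detector.

```python
# Generic reset-to-zero CUSUM sketch for Rayleigh (H0) vs. Rice (H1) envelope samples.
import numpy as np
from scipy import stats

def cusum_detect(samples, llr, threshold):
    """Return the first index at which the CUSUM statistic exceeds `threshold`."""
    s = 0.0
    for n, x in enumerate(samples):
        s = max(0.0, s + llr(x))          # CUSUM recursion with reset to zero
        if s > threshold:
            return n
    return None

sigma, b = 1.0, 2.0                        # noise scale and Rice parameter (assumed)
x = np.concatenate([
    stats.rayleigh.rvs(scale=sigma, size=200, random_state=0),   # H0 segment
    stats.rice.rvs(b, scale=sigma, size=200, random_state=1),    # H1 starts at n=200
])
llr = lambda v: stats.rice.logpdf(v, b, scale=sigma) - stats.rayleigh.logpdf(v, scale=sigma)
print("alarm raised at sample", cusum_detect(x, llr, threshold=10.0))
```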
https://paperswithcode.com/paper/dynamic-scheduling-for-enhanced-performance
2506.12778
null
null
Dynamic Scheduling for Enhanced Performance in RIS-assisted Cooperative Network with Interference
Reconfigurable Intelligent Surfaces (RIS) have emerged as transformative technologies, enhancing spectral efficiency and improving interference management in multi-user cooperative communications. This paper investigates the integration of RIS with Flexible-Duplex (FlexD) communication, featuring dynamic scheduling capabilities, to mitigate unintended external interference in multi-user wireless networks. By leveraging the reconfigurability of RIS and dynamic scheduling, we propose a user-pair selection scheme to maximize system throughput when full channel state information (CSI) of interference is unavailable. We develop a mathematical framework to evaluate the throughput outage probability when RIS introduces spatial correlation. The derived analytical results are used for asymptotic analysis, providing insights into dynamic user scheduling under interference based on statistical channel knowledge. Finally, we compare FlexD with traditional Full Duplex (FD) and Half Duplex (HD) systems against RIS-assisted FlexD. Our results show FlexD's superior throughput enhancement, energy efficiency and data management capability in interference-affected networks, typical in current and next-generation cooperative wireless applications like cellular and vehicular communications.
null
https://arxiv.org/abs/2506.12778v1
https://arxiv.org/pdf/2506.12778v1.pdf
null
[ "Yomali Lokugama", "Saman Atapattu", "Nathan Ross", "Sithamparanathan Kandeepan", "Chintha Tellambura" ]
[ "Management", "Scheduling" ]
2025-06-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/conditional-diffusion-model-driven-generative
2506.12682
null
null
Conditional Diffusion Model-Driven Generative Channels for Double RIS-Aided Wireless Systems
With the development of the upcoming sixth-generation networks (6G), reconfigurable intelligent surfaces (RISs) have gained significant attention due to their ability to reconfigure wireless channels via smart reflections. However, traditional channel state information (CSI) acquisition techniques for double-RIS systems face challenges (e.g., high pilot overhead or multipath interference). This paper proposes a new channel generation method for double-RIS communication systems based on a conditional diffusion model (CDM). The CDM is trained on synthetic channel data to capture channel characteristics. It addresses the limitations of traditional CSI generation methods, such as insufficient model understanding capability and poor environmental adaptability. We provide a detailed analysis of the diffusion process for channel generation, and it is validated through simulations. The simulation results demonstrate that the proposed CDM-based method outperforms traditional channel acquisition methods in terms of normalized mean squared error (NMSE). This method offers a new paradigm for channel acquisition in double-RIS systems, which is expected to improve the quality of channel acquisition with low pilot overhead.
null
https://arxiv.org/abs/2506.12682v1
https://arxiv.org/pdf/2506.12682v1.pdf
null
[ "Yiyang Ni", "Qi Zhang", "Guangji Chen", "Yan Cai", "Jun Li", "Shi Jin" ]
[]
2025-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/semi-blind-channel-estimation-for-downlink
2506.12639
null
null
Semi-Blind Channel Estimation for Downlink Communications Based on Dynamic Metasurface Antennas
Dynamic metasurface antennas (DMAs) are emerging as a promising technology to enable energy-efficient, large array-based multi-antenna systems. This paper presents a simple channel estimation scheme for the downlink of a multiple-input single-output orthogonal frequency division multiplexing (MISO-OFDM) communication system exploiting DMAs. The proposed scheme extracts separate estimates of the wireless channel and the unknown waveguide propagation vector using a simple iterative algorithm based on the parallel factor (PARAFAC) decomposition. Obtaining decoupled estimates of the wireless channel and inner waveguide vector enables the isolation and compensation for its effect when designing the DMA beamformer, regardless of the wireless channel state, which evolves much faster due to its shorter coherence time and bandwidth. Additionally, our solution operates in a data-aided manner, delivering estimates of useful data symbols jointly with channel estimates, without requiring sequential pilot and data stages. To the best of our knowledge, this is the first work to explore this CE approach. Numerical results corroborate the notable performance of the proposed scheme.
null
https://arxiv.org/abs/2506.12639v1
https://arxiv.org/pdf/2506.12639v1.pdf
null
[ "Amarilton L. Magalhães", "André L. F. de Almeida", "A. Lee Swindlehurst" ]
[]
2025-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "In image inpainting task, the mechanism extracts complementary features from the word embedding in two paths by reciprocal attention, which is done by comparing the descriptive text and complementary image areas through reciprocal attention.", "full_name": "Dual Multimodal Attention", "introduced_year": 2000, "main_collection": { "area": "General", "description": "If you're looking to get in touch with American Airlines fast, ☎️+1-801-(855)-(5905)or +1-804-853-9001✅ there are\r\nseveral efficient ways to reach their customer service team. The quickest method is to dial ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. American’s phone service ensures that you can speak with a live\r\nrepresentative promptly to resolve any issues or queries regarding your booking, reservation,\r\nor any changes, such as name corrections or ticket cancellations.", "name": "Attention Mechanisms", "parent": "Attention" }, "name": "DMA", "source_title": "Text-Guided Neural Image Inpainting", "source_url": "https://arxiv.org/abs/2004.03212v4" } ]
https://paperswithcode.com/paper/parkinson-s-disease-freezing-of-gait-fog
2506.12561
null
null
Parkinson's Disease Freezing of Gait (FoG) Symptom Detection Using Machine Learning from Wearable Sensor Data
Freezing of gait (FoG) is a special symptom found in patients with Parkinson's disease (PD). Patients who have FoG abruptly lose the capacity to walk as they normally would. Accelerometers worn by patients can record movement data during these episodes, and machine learning algorithms can be useful to categorize this information. Thus, the combination may be able to identify FoG in real time. In order to identify FoG events in accelerometer data, we introduce the Transformer Encoder-Bi-LSTM fusion model in this paper. The model's capability to differentiate between FoG episodes and normal movement was used to evaluate its performance, and on the Kaggle Parkinson's Freezing of Gait dataset, the proposed Transformer Encoder-Bi-LSTM fusion model produced 92.6% accuracy, 80.9% F1 score, and 52.06% in terms of mean average precision. The findings highlight how Deep Learning-based approaches may progress the field of FoG identification and help PD patients receive better treatments and management plans.
null
https://arxiv.org/abs/2506.12561v1
https://arxiv.org/pdf/2506.12561v1.pdf
null
[ "Mahmudul Hasan" ]
[]
2025-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. 
Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" } ]
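A rough PyTorch sketch of the kind of Transformer-encoder + Bi-LSTM fusion classifier described in the abstract above is given below; the layer sizes, mean-pooling, and fusion by concatenation are illustrative assumptions, not the paper's exact architecture.

```python
# Rough sketch: Transformer-encoder branch + Bi-LSTM branch over accelerometer windows,
# fused by concatenation into a binary FoG/normal classifier.
import torch
import torch.nn as nn

class TransformerBiLSTMFusion(nn.Module):
    def __init__(self, in_ch=3, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.proj = nn.Linear(in_ch, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.bilstm = nn.LSTM(d_model, d_model // 2, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * d_model, n_classes)   # concatenated branches

    def forward(self, x):                    # x: (batch, time, accelerometer axes)
        h = self.proj(x)
        t = self.encoder(h).mean(dim=1)      # Transformer branch, temporal average
        l, _ = self.bilstm(h)
        l = l.mean(dim=1)                    # Bi-LSTM branch, temporal average
        return self.head(torch.cat([t, l], dim=-1))

logits = TransformerBiLSTMFusion()(torch.randn(8, 256, 3))   # 8 windows, 256 samples
print(logits.shape)                                           # torch.Size([8, 2])
```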
https://paperswithcode.com/paper/quantizing-small-scale-state-space-models-for
2506.12480
null
null
Quantizing Small-Scale State-Space Models for Edge AI
State-space models (SSMs) have recently gained attention in deep learning for their ability to efficiently model long-range dependencies, making them promising candidates for edge-AI applications. In this paper, we analyze the effects of quantization on small-scale SSMs with a focus on reducing memory and computational costs while maintaining task performance. Using the S4D architecture, we first investigate post-training quantization (PTQ) and show that the state matrix A and internal state x are particularly sensitive to quantization. Furthermore, we analyze the impact of different quantization techniques applied to the parameters and activations in the S4D architecture. To address the observed performance drop after Post-training Quantization (PTQ), we apply Quantization-aware Training (QAT), significantly improving performance from 40% (PTQ) to 96% on the sequential MNIST benchmark at 8-bit precision. We further demonstrate the potential of QAT in enabling sub-8-bit precisions and evaluate different parameterization schemes for QAT stability. Additionally, we propose a heterogeneous quantization strategy that assigns different precision levels to model components, reducing the overall memory footprint by a factor of 6x without sacrificing performance. Our results provide actionable insights for deploying quantized SSMs in resource-constrained environments.
null
https://arxiv.org/abs/2506.12480v1
https://arxiv.org/pdf/2506.12480v1.pdf
null
[ "Leo Zhao", "Tristan Torchet", "Melika Payvand", "Laura Kriener", "Filippo Moro" ]
[ "Quantization", "State Space Models" ]
2025-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/directed-acyclic-graph-convolutional-networks
2506.12218
null
null
Directed Acyclic Graph Convolutional Networks
Directed acyclic graphs (DAGs) are central to science and engineering applications including causal inference, scheduling, and neural architecture search. In this work, we introduce the DAG Convolutional Network (DCN), a novel graph neural network (GNN) architecture designed specifically for convolutional learning from signals supported on DAGs. The DCN leverages causal graph filters to learn nodal representations that account for the partial ordering inherent to DAGs, a strong inductive bias not present in conventional GNNs. Unlike prior art in machine learning over DAGs, DCN builds on formal convolutional operations that admit spectral-domain representations. We further propose the Parallel DCN (PDCN), a model that feeds input DAG signals to a parallel bank of causal graph-shift operators and processes these DAG-aware features using a shared multilayer perceptron. This way, PDCN decouples model complexity from graph size while maintaining satisfactory predictive performance. The architectures' permutation equivariance and expressive power properties are also established. Comprehensive numerical tests across several tasks, datasets, and experimental conditions demonstrate that (P)DCN compares favorably with state-of-the-art baselines in terms of accuracy, robustness, and computational efficiency. These results position (P)DCN as a viable framework for deep learning from DAG-structured data that is designed from first (graph) signal processing principles.
null
https://arxiv.org/abs/2506.12218v1
https://arxiv.org/pdf/2506.12218v1.pdf
null
[ "Samuel Rey", "Hamed Ajorlou", "Gonzalo Mateos" ]
[ "Causal Inference", "Computational Efficiency", "Graph Neural Network", "Inductive Bias", "Neural Architecture Search", "Scheduling" ]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Graph Neural Network", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Graph Neural Network", "source_title": "Graph Neural Networks: A Review of Methods and Applications", "source_url": "https://arxiv.org/abs/1812.08434v6" } ]
https://paperswithcode.com/paper/graph-semi-supervised-learning-for-point
2506.12197
null
null
Graph Semi-Supervised Learning for Point Classification on Data Manifolds
We propose a graph semi-supervised learning framework for classification tasks on data manifolds. Motivated by the manifold hypothesis, we model data as points sampled from a low-dimensional manifold $\mathcal{M} \subset \mathbb{R}^F$. The manifold is approximated in an unsupervised manner using a variational autoencoder (VAE), where the trained encoder maps data to embeddings that represent their coordinates in $\mathbb{R}^F$. A geometric graph is constructed with Gaussian-weighted edges inversely proportional to distances in the embedding space, transforming the point classification problem into a semi-supervised node classification task on the graph. This task is solved using a graph neural network (GNN). Our main contribution is a theoretical analysis of the statistical generalization properties of this data-to-manifold-to-graph pipeline. We show that, under uniform sampling from $\mathcal{M}$, the generalization gap of the semi-supervised task diminishes with increasing graph size, up to the GNN training error. Leveraging a training procedure which resamples a slightly larger graph at regular intervals during training, we then show that the generalization gap can be reduced even further, vanishing asymptotically. Finally, we validate our findings with numerical experiments on image classification benchmarks, demonstrating the empirical effectiveness of our approach.
null
https://arxiv.org/abs/2506.12197v1
https://arxiv.org/pdf/2506.12197v1.pdf
null
[ "Caio F. Deberaldini Netto", "Zhiyang Wang", "Luana Ruiz" ]
[ "Classification", "Graph Neural Network", "image-classification", "Image Classification", "Node Classification" ]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Graph Neural Network", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Graph Neural Network", "source_title": "Graph Neural Networks: A Review of Methods and Applications", "source_url": "https://arxiv.org/abs/1812.08434v6" } ]
https://paperswithcode.com/paper/tcn-dpd-parameter-efficient-temporal
2506.12165
null
null
TCN-DPD: Parameter-Efficient Temporal Convolutional Networks for Wideband Digital Predistortion
Digital predistortion (DPD) is essential for mitigating nonlinearity in RF power amplifiers, particularly for wideband applications. This paper presents TCN-DPD, a parameter-efficient architecture based on temporal convolutional networks, integrating noncausal dilated convolutions with optimized activation functions. Evaluated on the OpenDPD framework with the DPA_200MHz dataset, TCN-DPD achieves simulated ACPRs of -51.58/-49.26 dBc (L/R), EVM of -47.52 dB, and NMSE of -44.61 dB with 500 parameters, and maintains better linearization than prior models down to 200 parameters, making it promising for efficient wideband PA linearization.
Digital predistortion (DPD) is essential for mitigating nonlinearity in RF power amplifiers, particularly for wideband applications.
https://arxiv.org/abs/2506.12165v1
https://arxiv.org/pdf/2506.12165v1.pdf
null
[ "Huanqiang Duan", "Manno Versluis", "Qinyu Chen", "Leo C. N. de Vreede", "Chang Gao" ]
[]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Extreme Value Machine", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "EVM", "source_title": "The Extreme Value Machine", "source_url": "http://arxiv.org/abs/1506.06112v4" } ]
https://paperswithcode.com/paper/dmrs-based-uplink-channel-estimation-for-mu
2506.11899
null
null
DMRS-Based Uplink Channel Estimation for MU-MIMO Systems with Location-Specific SCSI Acquisition
With the growing number of users in multi-user multiple-input multiple-output (MU-MIMO) systems, demodulation reference signals (DMRSs) are efficiently multiplexed in the code domain via orthogonal cover codes (OCC) to ensure orthogonality and minimize pilot interference. In this paper, we investigate uplink DMRS-based channel estimation for MU-MIMO systems with Type II OCC pattern standardized in 3GPP Release 18, leveraging location-specific statistical channel state information (SCSI) to enhance performance. Specifically, we propose a SCSI-assisted Bayesian channel estimator (SA-BCE) based on the minimum mean square error criterion to suppress the pilot interference and noise, albeit at the cost of cubic computational complexity due to matrix inversions. To reduce this complexity while maintaining performance, we extend the scheme to a windowed version (SA-WBCE), which incorporates antenna-frequency domain windowing and beam-delay domain processing to exploit asymptotic sparsity and mitigate energy leakage in practical systems. To avoid the frequent real-time SCSI acquisition, we construct a grid-based location-specific SCSI database based on the principle of spatial consistency, and subsequently leverage the uplink received signals within each grid to extract the SCSI. Facilitated by the multilinear structure of wireless channels, we formulate the SCSI acquisition problem within each grid as a tensor decomposition problem, where the factor matrices are parameterized by the multi-path powers, delays, and angles. The computational complexity of SCSI acquisition can be significantly reduced by exploiting the Vandermonde structure of the factor matrices. Simulation results demonstrate that the proposed location-specific SCSI database construction method achieves high accuracy, while the SA-BCE and SA-WBCE significantly outperform state-of-the-art benchmarks in MU-MIMO systems.
null
https://arxiv.org/abs/2506.11899v1
https://arxiv.org/pdf/2506.11899v1.pdf
null
[ "Jiawei Zhuang", "Hongwei Hou", "Minjie Tang", "Wenjin Wang", "Shi Jin", "Vincent K. N. Lau" ]
[ "Tensor Decomposition" ]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/interference-in-spectrum-sharing-integrated
2506.11851
null
null
Interference in Spectrum-Sharing Integrated Terrestrial and Satellite Networks: Modeling, Approximation, and Robust Transmit Beamforming
This paper investigates robust transmit (TX) beamforming from the satellite to user terminals (UTs), based on statistical channel state information (CSI). The proposed design specifically targets the mitigation of satellite-to-terrestrial interference in spectrum-sharing integrated terrestrial and satellite networks. By leveraging the distribution information of terrestrial UTs, we first establish an interference model from the satellite to terrestrial systems without shared CSI. Based on this, robust TX beamforming schemes are developed under both the interference threshold and the power budget. Two optimization criteria are considered: satellite weighted sum rate maximization and mean square error minimization. The former achieves a superior achievable rate performance through an iterative optimization framework, whereas the latter enables a low-complexity closed-form solution at the expense of reduced rate, with interference constraints satisfied via a bisection method. To avoid complex integral calculations and the dependence on user distribution information in inter-system interference evaluations, we propose a terrestrial base station position-aided approximation method, and the approximation errors are subsequently analyzed. Numerical simulations validate the effectiveness of our proposed schemes.
null
https://arxiv.org/abs/2506.11851v1
https://arxiv.org/pdf/2506.11851v1.pdf
null
[ "Wenjing Cao", "Yafei Wang", "Tianxiang Ji", "Tianyang Cao", "Wenjin Wang", "Symeon Chatzinotas", "Björn Ottersten" ]
[]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" } ]
https://paperswithcode.com/paper/semantic-communications-in-6g-coexistence
2506.11779
null
null
Semantic Communications in 6G: Coexistence, Multiple Access, and Satellite Networks
The exponential growth of wireless users and bandwidth constraints necessitates innovative communication paradigms for next-generation networks. Semantic Communication (SemCom) emerges as a promising solution by transmitting extracted meaning rather than raw bits, enhancing spectral efficiency and enabling intelligent resource allocation. This paper explores the integration of SemCom with conventional Bit-based Communication (BitCom) in heterogeneous networks, highlighting key challenges and opportunities. We analyze multiple access techniques, including Non-Orthogonal Multiple Access (NOMA), to support coexisting SemCom and BitCom users. Furthermore, we examine multi-modal SemCom frameworks for handling diverse data types and discuss their applications in satellite networks, where semantic techniques mitigate bandwidth limitations and harsh channel conditions. Finally, we identify future directions for deploying semantic-aware systems in 6G and beyond.
null
https://arxiv.org/abs/2506.11779v1
https://arxiv.org/pdf/2506.11779v1.pdf
null
[ "Ishtiaque Ahmed", "Yingzhuo Sun", "Jingwen Fu", "Alper Kose", "Leila Musavian", "Ming Xiao", "Berna Ozbek" ]
[ "Semantic Communication" ]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fieldformer-self-supervised-reconstruction-of
2506.11629
null
null
FieldFormer: Self-supervised Reconstruction of Physical Fields via Tensor Attention Prior
Reconstructing physical field tensors from \textit{in situ} observations, such as radio maps and ocean sound speed fields, is crucial for enabling environment-aware decision making in various applications, e.g., wireless communications and underwater acoustics. Field data reconstruction is often challenging, due to the limited and noisy nature of the observations, necessitating the incorporation of prior information to aid the reconstruction process. Deep neural network-based data-driven structural constraints (e.g., ``deeply learned priors'') have shown promising performance. However, this family of techniques faces challenges such as model mismatches between training and testing phases. This work introduces FieldFormer, a self-supervised neural prior learned solely from the limited {\it in situ} observations without the need for offline training. Specifically, the proposed framework starts with modeling the fields of interest using the tensor Tucker model of a high multilinear rank, which ensures a universal approximation property for all fields. In the sequel, an attention mechanism is incorporated to learn the sparsity pattern that underlies the core tensor in order to reduce the solution space. In this way, a ``complexity-adaptive'' neural representation, grounded in the Tucker decomposition, is obtained that can flexibly represent various types of fields. A theoretical analysis is provided to support the recoverability of the proposed design. Moreover, extensive experiments, using various physical field tensors, demonstrate the superiority of the proposed approach compared to state-of-the-art baselines.
null
https://arxiv.org/abs/2506.11629v1
https://arxiv.org/pdf/2506.11629v1.pdf
null
[ "Panqi Chen", "Siyuan Li", "Lei Cheng", "Xiao Fu", "Yik-Chung Wu", "Sergios Theodoridis" ]
[]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "TuckER", "full_name": "TuckER", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "\n\ngraph embeddings, can be homogeneous graph or heterogeneous graph", "name": "Graph Embeddings", "parent": null }, "name": "TuckER", "source_title": "TuckER: Tensor Factorization for Knowledge Graph Completion", "source_url": "https://arxiv.org/abs/1901.09590v2" } ]
https://paperswithcode.com/paper/wi-cbr-wifi-based-cross-domain-behavior
2506.11616
null
null
Wi-CBR: WiFi-based Cross-domain Behavior Recognition via Multimodal Collaborative Awareness
WiFi-based human behavior recognition aims to recognize gestures and activities by analyzing wireless signal variations. However, existing methods typically focus on a single type of data, neglecting the interaction and fusion of multiple features. To this end, we propose a novel multimodal collaborative awareness method. By leveraging phase data reflecting changes in dynamic path length and Doppler Shift (DFS) data corresponding to frequency changes related to the speed of gesture movement, we enable efficient interaction and fusion of these features to improve recognition accuracy. Specifically, we first introduce a dual-branch self-attention module to capture spatial-temporal cues within each modality. Then, a group attention mechanism is applied to the concatenated phase and DFS features to mine key group features critical for behavior recognition. Through a gating mechanism, the combined features are further divided into PD-strengthen and PD-weaken branches, optimizing information entropy and promoting cross-modal collaborative awareness. Extensive in-domain and cross-domain experiments on two large publicly available datasets, Widar3.0 and XRF55, demonstrate the superior performance of our method.
null
https://arxiv.org/abs/2506.11616v1
https://arxiv.org/pdf/2506.11616v1.pdf
null
[ "Ruobei Zhang", "Shengeng Tang", "Huan Yan", "Xiang Zhang", "Richang Hong" ]
[]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/energy-efficiency-optimization-of-finite
2506.11594
null
null
Energy Efficiency Optimization of Finite Block Length STAR-RIS-aided MU-MIMO Broadcast Channels
Energy-efficient designs are proposed for multi-user (MU) multiple-input multiple-output (MIMO) broadcast channels (BC), assisted by simultaneously transmitting and reflecting (STAR) reconfigurable intelligent surfaces (RIS) operating at finite block length (FBL). In particular, we maximize the sum energy efficiency (EE), showing that STAR-RIS can substantially enhance it. Our findings demonstrate that the gains of employing STAR-RIS increase when the codeword length and the maximum tolerable bit error rate decrease, meaning that a STAR-RIS is more energy efficient in a system with more stringent latency and reliability requirements.
null
https://arxiv.org/abs/2506.11594v1
https://arxiv.org/pdf/2506.11594v1.pdf
null
[ "Mohammad Soleymani", "Ignacio Santamaria", "Eduard Jorswieck", "Robert Schober", "Lajos Hanzo" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mmwiloc-a-multi-sensor-dataset-and-robust
2506.11540
null
null
MMWiLoc: A Multi-Sensor Dataset and Robust Device-Free Localization Method Using Commercial Off-The-Shelf Millimeter Wave Wi-Fi Devices
Device-free Wi-Fi sensing has numerous benefits in practical settings, as it eliminates the requirement for dedicated sensing devices and can be accomplished using current low-cost Wi-Fi devices. With the development of Wi-Fi standards, millimeter wave Wi-Fi devices with 60GHz operating frequency and up to 4GHz bandwidth have become commercially available. Although millimeter wave Wi-Fi presents great promise for device-free Wi-Fi sensing with increased bandwidth and beam-forming ability, a method for localization using millimeter wave Wi-Fi is still lacking. Here, we present two major contributions: First, we provide a comprehensive multi-sensor dataset that synchronously captures human movement data from millimeter wave Wi-Fi, 2.4GHz Wi-Fi, and millimeter wave radar sensors. This dataset enables direct performance comparisons across different sensing modalities and facilitates reproducible research in indoor localization. Second, we introduce MMWiLoc, a novel localization method that achieves centimeter-level precision with low computational cost. MMWiLoc incorporates two components: beam pattern calibration using Expectation Maximization and target localization through Multi-Scale Compression Sensing. The system processes beam Signal-to-Noise Ratio (beamSNR) information from the beam-forming process to determine target Angle of Arrival (AoA), which is then fused across devices for localization. Our extensive evaluation demonstrates that MMWiLoc achieves centimeter-level precision, outperforming 2.4GHz Wi-Fi systems while maintaining competitive performance with high-precision radar systems. The dataset and example processing code will be released after this paper is accepted at https://github.com/wowoyoho/MMWiLoc.
null
https://arxiv.org/abs/2506.11540v1
https://arxiv.org/pdf/2506.11540v1.pdf
null
[ "Wenbo Ding", "Yang Li", "Dongsheng Wang", "Bin Zhao", "Yunrong Zhu", "Yibo Zhang", "Yumeng Miao" ]
[ "Indoor Localization" ]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/robust-filtering-novel-statistical-learning
2506.11530
null
null
Robust Filtering -- Novel Statistical Learning and Inference Algorithms with Applications
State estimation or filtering serves as a fundamental task to enable intelligent decision-making in applications such as autonomous vehicles, robotics, healthcare monitoring, smart grids, intelligent transportation, and predictive maintenance. Standard filtering assumes prior knowledge of noise statistics to extract latent system states from noisy sensor data. However, real-world scenarios involve abnormalities like outliers, biases, drifts, and missing observations with unknown or partially known statistics, limiting conventional approaches. This thesis presents novel robust nonlinear filtering methods to mitigate these challenges. Based on insights from our filtering proposals, we extend the formulations to offline estimation/learning setups and propose smoothing extensions. Our methods leverage Bayesian inference frameworks, employing both deterministic and stochastic approximation techniques including Variational Inference (VI) and Particle Filters/Sequential Monte Carlo (SMC). We also study theoretical estimation limits using Bayesian Cram\'er-Rao bounds (BCRBs) in the context of measurement abnormalities. To validate the performance gains of the proposed methods, we perform simulations and experiments in scenarios including target tracking, indoor localization, 3D point cloud registration, mesh registration, and pose graph optimization. The fundamental nature of the work makes it useful in diverse applications, with possible future extensions toward developing outlier-robust machine learning pipelines, learning system dynamics from anomalous data, and addressing challenges in generative AI where standard diffusion models struggle with outliers, imbalanced datasets, and mode collapse.
null
https://arxiv.org/abs/2506.11530v1
https://arxiv.org/pdf/2506.11530v1.pdf
null
[ "Aamir Hussain Chughtai" ]
[ "Autonomous Vehicles", "Bayesian Inference", "Indoor Localization", "Point Cloud Registration", "State Estimation", "Variational Inference" ]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" }, { "code_snippet_url": null, "description": "", "full_name": "Variational Inference", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.", "name": "Dimensionality Reduction", "parent": null }, "name": "Variational Inference", "source_title": "Autoencoding Variational Autoencoder", "source_url": "https://arxiv.org/abs/2012.03715v1" } ]
https://paperswithcode.com/paper/joint-angle-and-velocity-estimation-for
2506.11497
null
null
Joint Angle and Velocity-Estimation for Target Localization in Bistatic mmWave MIMO Radar in the Presence of Clutter
Sparse Bayesian learning (SBL)-aided target localization is conceived for a bistatic mmWave MIMO radar system in the presence of unknown clutter, followed by the development of an angle-Doppler (AD)-domain representation of the target-plus-clutter echo model for accurate target parameter estimation. The proposed algorithm exploits the three-dimensional (3D) sparsity arising in the AD domain of the scattering scene and employs the powerful SBL framework for the estimation of target parameters, such as the angle-of-departure (AoD), angle-of-arrival (AoA) and velocity. To handle a practical scenario where the actual target parameters typically deviate from their finite-resolution grid, a super-resolution-based improved off-grid SBL framework is developed for recursively updating the parameter grid, thereby progressively refining the estimates. We also determine the Cram\'er-Rao bound (CRB) and Bayesian CRB for target parameter estimation in order to benchmark the estimation performance. Our simulation results corroborate the superior performance of the proposed approach in comparison to the existing algorithms, and also their ability to approach the bounds derived.
null
https://arxiv.org/abs/2506.11497v1
https://arxiv.org/pdf/2506.11497v1.pdf
null
[ "Priyanka Maity", "Suraj Srivastava", "Aditya K. Jagannatham", "Lajos Hanzo" ]
[ "parameter estimation", "Super-Resolution" ]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/movable-antenna-array-enhanced-downlink-noma
2506.11438
null
null
Movable-Antenna Array Enhanced Downlink NOMA
Movable antenna (MA) has gained increasing attention in the field of wireless communications due to its exceptional capability to proactively reconfigure wireless channels via localized antenna movements. In this paper, we investigate the resource allocation design for an MA array-enabled base station (BS) serving multiple single-antenna users in a downlink non-orthogonal multiple access (NOMA) system. We aim to maximize the sum rate of all users by jointly optimizing the transmit beamforming and the positions of all MAs at the BS, subject to the constraints of transmit power budget, finite antenna moving region, and the conditions for successive interference cancellation decoding rate. The formulated problem, inherently highly non-convex, is addressed by successive convex approximation (SCA) and alternating optimization methods to obtain a high-quality suboptimal solution. Simulation results unveil that the proposed MA-enhanced downlink NOMA system can significantly improve the sum rate performance compared to both the fixed-position antenna (FPA) system and the traditional orthogonal multiple access (OMA) system.
null
https://arxiv.org/abs/2506.11438v1
https://arxiv.org/pdf/2506.11438v1.pdf
null
[ "Nianzu Li", "Peiran Wu", "Lipeng Zhu", "Derrick Wing Kwan Ng" ]
[]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "This optimizer mix [ADAM](https://paperswithcode.com/method/adam) and [SGD](https://paperswithcode.com/method/sgd) creating the MAS optimizer.", "full_name": "Mixing Adam and SGD", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "MAS", "source_title": "Mixing ADAM and SGD: a Combined Optimization Method", "source_url": "https://arxiv.org/abs/2011.08042v1" }, { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" } ]
https://paperswithcode.com/paper/a-compact-dynamic-omnidirectional-antenna
2506.11351
null
null
A Compact Dynamic Omnidirectional Antenna
We propose a novel omnidirectional antenna design incorporating directional modulation for secure narrow planar information transmission. The proposed antenna features a compact size and stable omnidirectional radiation performance by employing two tightly spaced, printed meander line monopole antennas, acting as a single radiating element. To achieve a narrow information secure region, the proposed antenna is fed by differential power excitation of two ports with real-time dynamic switching. This leads to phase pattern modulation only along the electrical polarization, resulting in a directionally confined information recoverable region in the E-plane, while maintaining a highly constant or static omnidirectional H-plane pattern, inducing a $360^\circ$ information recoverable region. The dynamic antenna is designed and fabricated on a single layer of Rogers RO4350B which provides a miniaturized planar size of $0.36 \times 0.5\,\lambda_0^2$ at 2.7 GHz and easy integration. To validate the wireless communication performance, the fabricated antenna is directly fed with a 10 dB power ratio by a radio frequency (RF) switching system and evaluated for 16-QAM and 256-QAM transmission in a high signal-to-noise ratio (SNR) environment. Experimental results demonstrate that for 16-QAM transmission, a narrow E-plane information beam (IB) of approximately $34^\circ$ and omnidirectional H-plane IB are obtained, and a narrower E-plane IB is achieved around $15^\circ$ for 256-QAM. These results confirm that the proposed antenna offers a simple yet effective approach to enhance planar physical information security with a compact dynamic antenna system.
null
https://arxiv.org/abs/2506.11351v1
https://arxiv.org/pdf/2506.11351v1.pdf
null
[ "Sheng Huang", "Jacob R. Randall", "Cory Hilton", "Jeffrey A. Nanzer" ]
[]
2025-06-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/design-of-3d-beamforming-and-deployment
2506.11294
null
null
Design of 3D Beamforming and Deployment Strategies for ISAC-based HAPS Systems
This paper explores high-altitude platform station (HAPS) systems enabled by integrated sensing and communication (ISAC), in which a HAPS simultaneously transmits communication signals and synthetic aperture radar (SAR) imaging signals to support multi-user communication while performing ground target sensing. Taking into account the operational characteristics of SAR imaging, we consider two HAPS deployment strategies: (i) a quasi-stationary HAPS that remains fixed at an optimized location during SAR operation, following the stop-and-go scanning model; and (ii) a dynamic HAPS that continuously adjusts its flight trajectory along a circular path. For each strategy, we aim at maximizing the weighted sum-rate throughput for communication users while ensuring that SAR imaging requirements, such as beampattern gain and signal-to-noise ratio (SNR), are satisfied. This is achieved by jointly optimizing the HAPS deployment strategy, i.e., its placement or trajectory, along with three-dimensional (3D) transmit beamforming, under practical constraints including transmit power limits, energy consumption, and flight dynamics. Nevertheless, the formulated optimization problems corresponding to the two deployment strategies are inherently non-convex. To address the issue, we propose efficient algorithms that leverage both convex and non-convex optimization techniques to obtain high-quality suboptimal solutions. Numerical results demonstrate the effectiveness and advantages of the proposed approaches over benchmark schemes.
null
https://arxiv.org/abs/2506.11294v1
https://arxiv.org/pdf/2506.11294v1.pdf
null
[ "Xue Zhang", "Bang Huang", "Mohamed-Slim Alouini" ]
[ "Integrated sensing and communication", "ISAC" ]
2025-06-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/upvc-net-a-universal-premature-ventricular
2506.11238
null
null
uPVC-Net: A Universal Premature Ventricular Contraction Detection Deep Learning Algorithm
Introduction: Premature Ventricular Contractions (PVCs) are common cardiac arrhythmias originating from the ventricles. Accurate detection remains challenging due to variability in electrocardiogram (ECG) waveforms caused by differences in lead placement, recording conditions, and population demographics. Methods: We developed uPVC-Net, a universal deep learning model to detect PVCs from any single-lead ECG recordings. The model is developed on four independent ECG datasets comprising a total of 8.3 million beats collected from Holter monitors and a modern wearable ECG patch. uPVC-Net employs a custom architecture and a multi-source, multi-lead training strategy. For each experiment, one dataset is held out to evaluate out-of-distribution (OOD) generalization. Results: uPVC-Net achieved an AUC between 97.8% and 99.1% on the held-out datasets. Notably, performance on wearable single-lead ECG data reached an AUC of 99.1%. Conclusion: uPVC-Net exhibits strong generalization across diverse lead configurations and populations, highlighting its potential for robust, real-world clinical deployment.
null
https://arxiv.org/abs/2506.11238v1
https://arxiv.org/pdf/2506.11238v1.pdf
null
[ "Hagai Hamami", "Yosef Solewicz", "Daniel Zur", "Yonatan Kleerekoper", "Joachim A. Behar" ]
[]
2025-06-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/identifiability-of-deep-polynomial-neural
2506.17093
null
null
Identifiability of Deep Polynomial Neural Networks
Polynomial Neural Networks (PNNs) possess a rich algebraic and geometric structure. However, their identifiability -- a key property for ensuring interpretability -- remains poorly understood. In this work, we present a comprehensive analysis of the identifiability of deep PNNs, including architectures with and without bias terms. Our results reveal an intricate interplay between activation degrees and layer widths in achieving identifiability. As special cases, we show that architectures with non-increasing layer widths are generically identifiable under mild conditions, while encoder-decoder networks are identifiable when the decoder widths do not grow too rapidly. Our proofs are constructive and center on a connection between deep PNNs and low-rank tensor decompositions, and Kruskal-type uniqueness theorems. This yields both generic conditions determined by the architecture, and effective conditions that depend on the network's parameters. We also settle an open conjecture on the expected dimension of PNN's neurovarieties, and provide new bounds on the activation degrees required for it to reach its maximum.
null
https://arxiv.org/abs/2506.17093v1
https://arxiv.org/pdf/2506.17093v1.pdf
null
[ "Konstantin Usevich", "Clara Dérand", "Ricardo Borsoi", "Marianne Clausel" ]
[ "Decoder", "Polynomial Neural Networks" ]
2025-06-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/data-analysis-using-discrete-cubical-homology
2506.15020
null
null
Data analysis using discrete cubical homology
We present a new tool for data analysis: persistence discrete homology, which is well-suited to analyze filtrations of graphs. In particular, we provide a novel way of representing high-dimensional data as a filtration of graphs using pairwise correlations. We discuss several applications of these tools, e.g., in weather and financial data, comparing them to the standard methods used in the respective fields.
null
https://arxiv.org/abs/2506.15020v1
https://arxiv.org/pdf/2506.15020v1.pdf
null
[ "Chris Kapulkin", "Nathan Kershaw" ]
[]
2025-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/an-nth-cousin-mating-model-and-the-n-anacci
2506.16577
null
null
An nth-cousin mating model and the n-anacci numbers
In seeking to understand the size of inbred pedigrees, J. Lachance (J. Theor. Biol. 261, 238-247, 2009) studied a population model in which, for a fixed value of $n$, each mating occurs between $n$th cousins. We explain a connection between the second-cousin case of the model ($n=2$) and the Fibonacci sequence, and more generally, between the $n$th-cousin case and the $n$-anacci sequence $(n \geq 2)$. For a model with $n$th-cousin mating $(n \geq 1)$, we obtain the generating function describing the size of the pedigree $t$ generations back from the present, and we use it to evaluate the asymptotic growth of the pedigree size. In particular, we show that the growth of the pedigree asymptotically follows the growth rate of the $n$-anacci sequence -- the golden ratio $\phi = (1 + \sqrt{5})/2 \approx 1.6180$ in the second-cousin case $n=2$ -- and approaches 2 as $n$ increases. The computations explain the appearance of familiar numerical sequences and constants in a pedigree model. They also recall similar appearances of such sequences and constants in studies of population biology more generally.
null
https://arxiv.org/abs/2506.16577v1
https://arxiv.org/pdf/2506.16577v1.pdf
null
[ "Elisa Heinrich Mora", "Noah A. Rosenberg" ]
[]
2025-06-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/covariance-decomposition-for-distance-based
2506.16425
null
null
Covariance Decomposition for Distance Based Species Tree Estimation
In phylogenomics, species-tree methods must contend with two major sources of noise: stochastic gene-tree variation under the multispecies coalescent model (MSC) and finite-sequence substitutional noise. Fast agglomerative methods such as GLASS, STEAC, and METAL combine multi-locus information via distance-based clustering. We derive the exact covariance matrix of these pairwise distance estimates under a joint MSC-plus-substitution model and leverage it for reliable confidence estimation, and we algebraically decompose it into components attributable to coalescent variation versus sequence-level stochasticity. Our theory identifies parameter regimes where one source of variance greatly exceeds the other. For both very low and very high mutation rates, substitutional noise dominates, while coalescent variance is the primary contributor at intermediate mutation rates. Moreover, the interval over which coalescent variance dominates becomes narrower as the species-tree height increases. These results imply that in some settings one may legitimately ignore the weaker noise source when designing methods or collecting data. In particular, when gene-tree variance is dominant, adding more loci is most beneficial, while when substitution noise dominates, longer sequences or imputation are needed. Finally, leveraging the derived covariance matrix, we implement a Gaussian-sampling procedure to generate split support values for METAL trees and demonstrate empirically that this approach yields more reliable confidence estimates than traditional bootstrapping.
null
https://arxiv.org/abs/2506.16425v1
https://arxiv.org/pdf/2506.16425v1.pdf
null
[ "Georgios Aliatimis", "Ruriko Yoshida", "Burak Boyak", "James Grant" ]
[ "Imputation" ]
2025-06-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/universal-kernels-via-harmonic-analysis-on
2506.19245
null
null
Universal kernels via harmonic analysis on Riemannian symmetric spaces
The universality properties of kernels characterize the class of functions that can be approximated in the associated reproducing kernel Hilbert space and are of fundamental importance in the theoretical underpinning of kernel methods in machine learning. In this work, we establish fundamental tools for investigating universality properties of kernels in Riemannian symmetric spaces, thereby extending the study of this important topic to kernels in non-Euclidean domains. Moreover, we use the developed tools to prove the universality of several recent examples from the literature on positive definite kernels defined on Riemannian symmetric spaces, thus providing theoretical justification for their use in applications involving manifold-valued data.
null
https://arxiv.org/abs/2506.19245v1
https://arxiv.org/pdf/2506.19245v1.pdf
null
[ "Franziskus Steinert", "Salem Said", "Cyrus Mostajeran" ]
[]
2025-06-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/invasive-species-control-via-a-discrete-model
2506.14023
null
null
Invasive species control via a discrete model for the Trojan Y-chromosome strategy
Invasive species are a growing threat to ecosystems, particularly in aquatic environments. The Trojan Y Chromosome (TYC) strategy is a promising biological method for reducing invasive populations by introducing genetically modified males (supermales) that produce only male offspring, leading to population decline due to a shortage of females. In this study, we develop a novel discrete-time, age-structured mathematical model to simulate the effects of this strategy. Our model divides the life cycle of species into two stages, egg and maturity, and tracks different sub-populations, including supermales. We analyze the equilibria of the system and prove the existence and stability of extinction and positive equilibrium points. Numerical simulations show that extinction depends on factors such as fecundity, the number of supermales released, and initial population sizes. The model also reveals complex behaviors, such as bistability and thresholds for population collapse. This discrete approach offers a useful framework for understanding and optimizing the TYC strategy and can help guide future field applications of invasive species control.
null
https://arxiv.org/abs/2506.14023v1
https://arxiv.org/pdf/2506.14023v1.pdf
null
[ "Don K. Mallawa Arachchi", "Rana D. Parshad", "Claus Kadelka" ]
[]
2025-06-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/impact-of-hill-coefficient-and-time-delay-on
2506.19853
null
null
Impact of Hill coefficient and time delay on a perceptual decision-making model
In this paper, a neural mass perceptual decision making model introduced by Piska{\l}a et al. is analyzed. The model describes activity of two neuron populations influenced by each other and external inputs. The groups' activities correspond to the process of making a perceptual binary decision. Existing results are generalized by investigating the impact of both a delay in self-inhibition and a generic Hill coefficient on solutions to the system of differential equations. Several versions of the model with various assumptions are compared using analytical and numerical methods.
null
https://arxiv.org/abs/2506.19853v1
https://arxiv.org/pdf/2506.19853v1.pdf
null
[ "Bartłomiej Morawski", "Anna Czartoszewska" ]
[ "Decision Making" ]
2025-06-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/faster-fixed-point-methods-for-multichain
2506.20910
null
null
Faster Fixed-Point Methods for Multichain MDPs
We study value-iteration (VI) algorithms for solving general (a.k.a. multichain) Markov decision processes (MDPs) under the average-reward criterion, a fundamental but theoretically challenging setting. Beyond the difficulties inherent to all average-reward problems posed by the lack of contractivity and non-uniqueness of solutions to the Bellman operator, in the multichain setting an optimal policy must solve the navigation subproblem of steering towards the best connected component, in addition to optimizing long-run performance within each component. We develop algorithms which better solve this navigational subproblem in order to achieve faster convergence for multichain MDPs, obtaining improved rates of convergence and sharper measures of complexity relative to prior work. Many key components of our results are of potential independent interest, including novel connections between average-reward and discounted problems, optimal fixed-point methods for discounted VI which extend to general Banach spaces, new sublinear convergence rates for the discounted value error, and refined suboptimality decompositions for multichain MDPs. Overall our results yield faster convergence rates for discounted and average-reward problems and expand the theoretical foundations of VI approaches.
null
https://arxiv.org/abs/2506.20910v1
https://arxiv.org/pdf/2506.20910v1.pdf
null
[ "Matthew Zurek", "Yudong Chen" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/control-and-optimization-for-neural-partial
2506.20764
null
null
Control and optimization for Neural Partial Differential Equations in Supervised Learning
Although there is a substantial body of literature on control and optimization problems for parabolic and hyperbolic systems, the specific problem of controlling and optimizing the coefficients of the associated operators within such systems has not yet been thoroughly explored. In this work, we aim to initiate a line of research in control theory focused on optimizing and controlling the coefficients of these operators, a problem that naturally arises in the context of neural networks and supervised learning. In supervised learning, the primary objective is to transport initial data toward target data through the layers of a neural network. We propose a novel perspective: neural networks can be interpreted as partial differential equations (PDEs). From this viewpoint, the control problem traditionally studied in the context of ordinary differential equations (ODEs) is reformulated as a control problem for PDEs, specifically targeting the optimization and control of coefficients in parabolic and hyperbolic operators. To the best of our knowledge, this specific problem has not yet been systematically addressed in the control theory of PDEs. To this end, we propose a dual system formulation for the control and optimization problem associated with parabolic PDEs, laying the groundwork for the development of efficient numerical schemes in future research. We also provide a theoretical proof showing that the control and optimization problem for parabolic PDEs admits minimizers. Finally, we investigate the control problem associated with hyperbolic PDEs and prove the existence of solutions for a corresponding approximated control problem.
null
https://arxiv.org/abs/2506.20764v1
https://arxiv.org/pdf/2506.20764v1.pdf
null
[ "Alain Bensoussan", "Minh-Binh Tran", "Bangjie Wang" ]
[]
2025-06-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/exact-matrix-seriation-through-mathematical
2506.19821
null
null
Exact Matrix Seriation through Mathematical Optimization: Stress and Effectiveness-Based Models
Matrix seriation, the problem of permuting the rows and columns of a matrix to uncover latent structure, is a fundamental technique in data science, particularly in the visualization and analysis of relational data. Applications span clustering, anomaly detection, and beyond. In this work, we present a unified framework grounded in mathematical optimization to address matrix seriation from a rigorous, model-based perspective. Our approach leverages combinatorial and mixed-integer optimization to represent seriation objectives and constraints with high fidelity, bridging the gap between traditional heuristic methods and exact solution techniques. We introduce new mathematical programming models for neighborhood-based stress criteria, including nonlinear formulations and their linearized counterparts. For structured settings such as Moore and von Neumann neighborhoods, we develop a novel Hamiltonian path-based reformulation that enables effective control over spatial arrangement and interpretability in the reordered matrix. To assess the practical impact of our models, we carry out an extensive set of experiments on synthetic and real-world datasets, as well as on a newly curated benchmark based on a coauthorship network from the matrix seriation literature. Our results show that these optimization-based formulations not only enhance solution quality and interpretability but also provide a versatile foundation for extending matrix seriation to new domains in data science.
Matrix seriation, the problem of permuting the rows and columns of a matrix to uncover latent structure, is a fundamental technique in data science, particularly in the visualization and analysis of relational data.
https://arxiv.org/abs/2506.19821v1
https://arxiv.org/pdf/2506.19821v1.pdf
null
[ "Víctor Blanco", "Alfredo Marín", "Justo Puerto" ]
[ "Anomaly Detection" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/toward-decision-oriented-prognostics-an
2506.19698
null
null
Toward Decision-Oriented Prognostics: An Integrated Estimate-Optimize Framework for Predictive Maintenance
Recent research increasingly integrates machine learning (ML) into predictive maintenance (PdM) to reduce operational and maintenance costs in data-rich operational settings. However, uncertainty due to model misspecification continues to limit widespread industrial adoption. This paper proposes a PdM framework in which sensor-driven prognostics inform decision-making under economic trade-offs within a finite decision space. We investigate two key questions: (1) Does higher predictive accuracy necessarily lead to better maintenance decisions? (2) If not, how can the impact of prediction errors on downstream maintenance decisions be mitigated? We first demonstrate that in the traditional estimate-then-optimize (ETO) framework, errors in probabilistic prediction can result in inconsistent and suboptimal maintenance decisions. To address this, we propose an integrated estimate-optimize (IEO) framework that jointly tunes predictive models while directly optimizing for maintenance outcomes. We establish theoretical finite-sample guarantees on decision consistency under standard assumptions. Specifically, we develop a stochastic perturbation gradient descent algorithm suitable for small run-to-failure datasets. Empirical evaluations on a turbofan maintenance case study show that the IEO framework reduces average maintenance regret by up to 22% compared to ETO. This study provides a principled approach to managing prediction errors in data-driven PdM. By aligning prognostic model training with maintenance objectives, the IEO framework improves robustness under model misspecification and enhances decision quality. The improvement is particularly pronounced when the decision-making policy is misaligned with the decision-maker's target. These findings support more reliable maintenance planning in uncertain operational environments.
null
https://arxiv.org/abs/2506.19698v1
https://arxiv.org/pdf/2506.19698v1.pdf
null
[ "Zhuojun Xie", "Adam Abdin", "Yiping Fang" ]
[ "Decision Making" ]
2025-06-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/addq-adaptive-distributional-double-q
2506.19478
null
null
ADDQ: Adaptive Distributional Double Q-Learning
Bias problems in the estimation of $Q$-values are a well-known obstacle that slows down convergence of $Q$-learning and actor-critic methods. The success of modern RL algorithms is partly due to direct or indirect overestimation reduction mechanisms. We propose an easy-to-implement method built on top of distributional reinforcement learning (DRL) algorithms to deal with overestimation in a locally adaptive way. Our framework is simple to implement: existing distributional algorithms can be improved with a few lines of code. We provide theoretical evidence and use double $Q$-learning to show how to include locally adaptive overestimation control in existing algorithms. Experiments are provided for tabular, Atari, and MuJoCo environments.
We propose an easy-to-implement method built on top of distributional reinforcement learning (DRL) algorithms to deal with overestimation in a locally adaptive way.
https://arxiv.org/abs/2506.19478v1
https://arxiv.org/pdf/2506.19478v1.pdf
null
[ "Leif Döring", "Benedikt Wille", "Maximilian Birr", "Mihail Bîrsan", "Martin Slowik" ]
[ "Distributional Reinforcement Learning", "MuJoCo", "Q-Learning" ]
2025-06-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/duality-and-policy-evaluation-in
2506.19294
null
null
Duality and Policy Evaluation in Distributionally Robust Bayesian Diffusion Control
We consider a Bayesian diffusion control problem of expected terminal utility maximization. The controller imposes a prior distribution on the unknown drift of an underlying diffusion. The Bayesian optimal control, tracking the posterior distribution of the unknown drift, can be characterized explicitly. However, in practice, the prior will generally be incorrectly specified, and the degree of model misspecification can have a significant impact on policy performance. To mitigate this and reduce overpessimism, we introduce a distributionally robust Bayesian control (DRBC) formulation in which the controller plays a game against an adversary who selects a prior in a divergence neighborhood of a baseline prior. The adversarial approach has been studied in economics, and efficient algorithms have been proposed in static optimization settings. We develop a strong duality result for our DRBC formulation. Combining these results with tools from stochastic analysis, we are able to derive a loss that can be efficiently trained (as we demonstrate in our numerical experiments) using a suitable neural network architecture. As a result, we obtain an effective algorithm for computing the DRBC optimal strategy. The methodology for computing the DRBC optimal strategy is greatly simplified, as we show, in the important case in which the adversary chooses a prior from a Kullback-Leibler distributional uncertainty set.
null
https://arxiv.org/abs/2506.19294v1
https://arxiv.org/pdf/2506.19294v1.pdf
null
[ "Jose Blanchet", "Jiayi Cheng", "Hao liu", "Yang Liu" ]
[]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/first-order-sparse-convex-optimization-better
2506.19075
null
null
First-Order Sparse Convex Optimization: Better Rates with Sparse Updates
It was recently established that for convex optimization problems with a sparse optimal solution (be it entry-wise sparsity or matrix rank-wise sparsity) it is possible to have linear convergence rates which depend on an improved mixed-norm condition number of the form $\frac{\beta_1 s}{\alpha_2}$, where $\beta_1$ is the $\ell_1$-Lipschitz continuity constant of the gradient, $\alpha_2$ is the $\ell_2$-quadratic growth constant, and $s$ is the sparsity of the optimal solution. However, beyond the improved convergence rate, these methods are unable to leverage the sparsity of optimal solutions towards also improving the runtime of each iteration, which may still be prohibitively high for high-dimensional problems. In this work, we establish that linear convergence rates which depend on this improved condition number can be obtained using only sparse updates, which may result in overall significantly improved running times. Moreover, our methods are considerably easier to implement.
null
https://arxiv.org/abs/2506.19075v1
https://arxiv.org/pdf/2506.19075v1.pdf
null
[ "Dan Garber" ]
[]
2025-06-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/soft-decision-trees-for-survival-analysis
2506.16846
null
null
Soft decision trees for survival analysis
Decision trees are popular in survival analysis for their interpretability and ability to model complex relationships. Survival trees, which predict the timing of singular events using censored historical data, are typically built through heuristic approaches. Recently, there has been growing interest in globally optimized trees, where the overall tree is trained by minimizing the error function over all its parameters. We propose a new soft survival tree model (SST), with a soft splitting rule at each branch node, trained via a nonlinear optimization formulation amenable to decomposition. Since SSTs provide for every input vector a specific survival function associated to a single leaf node, they satisfy the conditional computation property and inherit the related benefits. SST and the training formulation combine flexibility with interpretability: any smooth survival function (parametric, semiparametric, or nonparametric) estimated through maximum likelihood can be used, and each leaf node of an SST yields a cluster of distinct survival functions which are associated to the data points routed to it. Numerical experiments on 15 well-known datasets show that SSTs, with parametric and spline-based semiparametric survival functions, trained using an adaptation of the node-based decomposition algorithm proposed by Consolo et al. (2024) for soft regression trees, outperform three benchmark survival trees in terms of four widely-used discrimination and calibration measures. SSTs can also be extended to consider group fairness.
null
https://arxiv.org/abs/2506.16846v2
https://arxiv.org/pdf/2506.16846v2.pdf
null
[ "Antonio Consolo", "Edoardo Amaldi", "Emilio Carrizosa" ]
[ "Fairness", "Survival Analysis" ]
2025-06-20T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "How do I resolve a dispute on Expedia contact their support at + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056. Provide booking details and explain the issue clearly. Ask about available compensation call + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056—Expedia may offer exclusive promo codes, travel credits, or discounts to resolve your concern and retain your loyalty as a valued customer. What is the best way to complain to Expedia? The best way to complain to Expedia is by calling + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056 or using their Help Center. Provide complete booking details and express your concerns clearly call + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056. While filing your complaint, ask about special resolution perks—Expedia may offer travel credits or exclusive promo codes as a goodwill gesture to valued customers. How do I complain to Expedia. To make a claim against Expedia, contact support at + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056, or use the Help Center. Provide your booking details and explain the issue. While filing your claim, inquire about available discounts—Expedia may offer travel vouchers, promo codes, or credits as compensation for inconvenience caused during your trip. How do I complain to Expedia.To file a complaint with Expedia, contact their support at + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056, or use their Help Center online. Share your issue and booking details clearly. While filing, ask about compensation options—Expedia may offer travel credits, promo codes, or exclusive discounts to help resolve your concern and retain your business. How do I make a claim on Expedia? To make a claim on Expedia, call + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056, or use their Help Center to submit your issue with full booking details. During the process, ask about special offers—Expedia may provide discount codes, travel credits, or promotional deals as part of their resolution and customer satisfaction efforts. How do I complain to Expedia. To make a claim with Expedia, contact their support team at + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056, or use the Help Center to submit details of your issue. While resolving your claim, ask about available discounts—Expedia may offer travel credits, promo codes, or exclusive deals to help compensate for your inconvenience. How do I file a dispute with Expedia? To file a dispute with Expedia, call + ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881 or + ( 1 ) ⟷ 805 ⟷ ( 330 ) ⟷ 4056, or use their Help Center to submit your case with complete booking information. 
When addressing your issue, ask about special discount offers—Expedia may provide travel vouchers, promo codes, or exclusive deals to resolve the dispute and retain customer satisfaction.", "full_name": "+ ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881||How do I resolve a dispute on Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Adaptive or trainable activation functions are the functions with trainable parameters that are able to adapt (change, optimize) their shape and amplitude to the target dataset.", "name": "Adaptive Activation Functions", "parent": "Activation Functions" }, "name": "+ ( 1 ) ⟷ 888 ⟷ ( 829 ) ⟷ 0881||How do I resolve a dispute on Expedia?", "source_title": "Learnable Extended Activation Function (LEAF) for Deep Neural Networks", "source_url": "https://doi.org/10.47839/ijc.22.3.3225" } ]
https://paperswithcode.com/paper/optimal-depth-of-neural-networks
2506.16862
null
null
Optimal Depth of Neural Networks
Determining the optimal depth of a neural network is a fundamental yet challenging problem, typically resolved through resource-intensive experimentation. This paper introduces a formal theoretical framework to address this question by recasting the forward pass of a deep network, specifically a Residual Network (ResNet), as an optimal stopping problem. We model the layer-by-layer evolution of hidden representations as a sequential decision process where, at each layer, a choice is made between halting computation to make a prediction or continuing to a deeper layer for a potentially more refined representation. This formulation captures the intrinsic trade-off between accuracy and computational cost. Our primary theoretical contribution is a proof that, under a plausible condition of diminishing returns on the residual functions, the expected optimal stopping depth is provably finite, even in an infinite-horizon setting. We leverage this insight to propose a novel and practical regularization term, $\mathcal{L}_{\rm depth}$, that encourages the network to learn representations amenable to efficient, early exiting. We demonstrate the generality of our framework by extending it to the Transformer architecture and exploring its connection to continuous-depth models via free-boundary problems. Empirical validation on ImageNet confirms that our regularizer successfully induces the theoretically predicted behavior, leading to significant gains in computational efficiency without compromising, and in some cases improving, final model accuracy.
null
https://arxiv.org/abs/2506.16862v1
https://arxiv.org/pdf/2506.16862v1.pdf
null
[ "Qian Qi" ]
[ "Computational Efficiency" ]
2025-06-20T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. 
Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" } ]
https://paperswithcode.com/paper/a-minimalist-optimizer-design-for-llm
2506.16659
null
null
A Minimalist Optimizer Design for LLM Pretraining
Training large language models (LLMs) typically relies on adaptive optimizers such as Adam, which require significant memory to maintain first- and second-moment matrices, known as optimizer states. While recent works such as GaLore, Fira, and APOLLO have proposed state-compressed variants to reduce memory consumption, a fundamental question remains: What is the minimal amount of optimizer state that is truly necessary to retain state-of-the-art performance in LLM pretraining? In this work, we systematically investigate this question using a bottom-up approach. We find that two memory- and compute-efficient optimization techniques are particularly effective: (1) column-wise gradient normalization significantly boosts the performance of plain SGD without requiring momentum; and (2) adding first-order momentum only to the output layer - where gradient variance is highest - yields performance competitive with fully adaptive methods such as Muon. Based on these insights, we propose SCALE (Stochastic Column-normalized Last-layer Momentum), a new optimizer that combines column-normalized SGD with last-layer momentum, where column normalization refers to normalizing the gradient along the output dimension. Across multiple LLaMA models (60M-1B), SCALE matches or exceeds the performance of Adam while using only 35-45% of the total memory. It also consistently outperforms memory-efficient optimizers such as GaLore, Fira, and APOLLO, making it a strong candidate for large-scale pretraining under memory constraints. For the LLaMA 7B model, SCALE outperforms the state-of-the-art method APOLLO in terms of both perplexity and memory consumption. In addition, our method serves as a minimalist baseline for more sophisticated optimizer design.
null
https://arxiv.org/abs/2506.16659v1
https://arxiv.org/pdf/2506.16659v1.pdf
null
[ "Athanasios Glentis", "Jiaxiang Li", "Andi Han", "Mingyi Hong" ]
[]
2025-06-20T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112", "description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))", "full_name": "Stochastic Gradient Descent", "introduced_year": 1951, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "SGD", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**LLaMA** is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture with various improvements that were subsequently proposed. The main difference with the original architecture are listed below.\r\n\r\n- RMSNorm normalizing function is used to improve the training stability, by normalizing the input of each transformer sub-layer, instead of normalizing the output.\r\n- The ReLU non-linearity is replaced by the SwiGLU activation function to improve performance.\r\n- Absolute positional embeddings are removed and instead rotary positional embeddings (RoPE) are added at each layer of the network.", "full_name": "LLaMA", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "LLaMA", "source_title": "LLaMA: Open and Efficient Foundation Language Models", "source_url": "https://arxiv.org/abs/2302.13971v1" }, { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "Adaptive Parameter-wise Diagonal Quasi-Newton Method", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "Apollo", "source_title": "Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization", "source_url": "https://arxiv.org/abs/2009.13586v6" } ]
https://paperswithcode.com/paper/a-simplified-analysis-of-sgd-for-linear
2506.15535
null
null
A Simplified Analysis of SGD for Linear Regression with Weight Averaging
Theoretically understanding stochastic gradient descent (SGD) in overparameterized models has led to the development of several optimization algorithms that are widely used in practice today. Recent work by Zou et al. (2021) provides sharp rates for SGD optimization in linear regression using constant learning rate, both with and without tail iterate averaging, based on a bias-variance decomposition of the risk. In our work, we provide a simplified analysis recovering the same bias and variance bounds provided in Zou et al. (2021) based on simple linear algebra tools, bypassing the requirement to manipulate operators on positive semi-definite (PSD) matrices. We believe our work makes the analysis of SGD on linear regression very accessible and will be helpful in further analyzing mini-batching and learning rate scheduling, leading to improvements in the training of realistic models.
null
https://arxiv.org/abs/2506.15535v1
https://arxiv.org/pdf/2506.15535v1.pdf
null
[ "Alexandru Meterez", "Depen Morwani", "Costin-Andrei Oncescu", "Jingfeng Wu", "Cengiz Pehlevan", "Sham Kakade" ]
[ "regression", "Scheduling" ]
2025-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112", "description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))", "full_name": "Stochastic Gradient Descent", "introduced_year": 1951, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "SGD", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)", "full_name": "Linear Regression", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.", "name": "Generalized Linear Models", "parent": null }, "name": "Linear Regression", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/multi-timescale-gradient-sliding-for
2506.15387
null
null
Multi-Timescale Gradient Sliding for Distributed Optimization
We propose two first-order methods for convex, non-smooth, distributed optimization problems, hereafter called Multi-Timescale Gradient Sliding (MT-GS) and its accelerated variant (AMT-GS). Our MT-GS and AMT-GS can take advantage of similarities between (local) objectives to reduce the communication rounds, are flexible so that different subsets (of agents) can communicate at different, user-picked rates, and are fully deterministic. These three desirable features are achieved through a block-decomposable primal-dual formulation, and a multi-timescale variant of the sliding method introduced in Lan et al. (2020), Lan (2016), where different dual blocks are updated at potentially different rates. To find an $\epsilon$-suboptimal solution, the complexities of our algorithms achieve optimal dependency on $\epsilon$: MT-GS needs $O(\overline{r}A/\epsilon)$ communication rounds and $O(\overline{r}/\epsilon^2)$ subgradient steps for Lipchitz objectives, and AMT-GS needs $O(\overline{r}A/\sqrt{\epsilon\mu})$ communication rounds and $O(\overline{r}/(\epsilon\mu))$ subgradient steps if the objectives are also $\mu$-strongly convex. Here, $\overline{r}$ measures the ``average rate of updates'' for dual blocks, and $A$ measures similarities between (subgradients of) local functions. In addition, the linear dependency of communication rounds on $A$ is optimal (Arjevani and Shamir 2015), thereby providing a positive answer to the open question whether such dependency is achievable for non-smooth objectives (Arjevani and Shamir 2015).
null
https://arxiv.org/abs/2506.15387v1
https://arxiv.org/pdf/2506.15387v1.pdf
null
[ "Junhui Zhang", "Patrick Jaillet" ]
[ "Distributed Optimization" ]
2025-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/when-and-how-unlabeled-data-provably-improve
2506.15329
null
null
When and How Unlabeled Data Provably Improve In-Context Learning
Recent research shows that in-context learning (ICL) can be effective even when demonstrations have missing or incorrect labels. To shed light on this capability, we examine a canonical setting where the demonstrations are drawn according to a binary Gaussian mixture model (GMM) and a certain fraction of the demonstrations have missing labels. We provide a comprehensive theoretical study to show that: (1) The loss landscape of one-layer linear attention models recover the optimal fully-supervised estimator but completely fail to exploit unlabeled data; (2) In contrast, multilayer or looped transformers can effectively leverage unlabeled data by implicitly constructing estimators of the form $\sum_{i\ge 0} a_i (X^\top X)^iX^\top y$ with $X$ and $y$ denoting features and partially-observed labels (with missing entries set to zero). We characterize the class of polynomials that can be expressed as a function of depth and draw connections to Expectation Maximization, an iterative pseudo-labeling algorithm commonly used in semi-supervised learning. Importantly, the leading polynomial power is exponential in depth, so mild amount of depth/looping suffices. As an application of theory, we propose looping off-the-shelf tabular foundation models to enhance their semi-supervision capabilities. Extensive evaluations on real-world datasets show that our method significantly improves the semisupervised tabular learning performance over the standard single pass inference.
null
https://arxiv.org/abs/2506.15329v1
https://arxiv.org/pdf/2506.15329v1.pdf
null
[ "Yingcong Li", "Xiangyu Chang", "Muti Kara", "Xiaofeng Liu", "Amit Roy-Chowdhury", "Samet Oymak" ]
[ "In-Context Learning", "Missing Labels" ]
2025-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/proximal-operators-of-sorted-nonconvex
2506.15315
null
null
Proximal Operators of Sorted Nonconvex Penalties
This work studies the problem of sparse signal recovery with automatic grouping of variables. To this end, we investigate sorted nonsmooth penalties as a regularization approach for generalized linear models. We focus on a family of sorted nonconvex penalties which generalizes the Sorted L1 Norm (SLOPE). These penalties are designed to promote clustering of variables due to their sorted nature, while the nonconvexity reduces the shrinkage of coefficients. Our goal is to provide efficient ways to compute their proximal operator, enabling the use of popular proximal algorithms to solve composite optimization problems with this choice of sorted penalties. We distinguish between two classes of problems: the weakly convex case where computing the proximal operator remains a convex problem, and the nonconvex case where computing the proximal operator becomes a challenging nonconvex combinatorial problem. For the weakly convex case (e.g. sorted MCP and SCAD), we explain how the Pool Adjacent Violators (PAV) algorithm can exactly compute the proximal operator. For the nonconvex case (e.g. sorted Lq with q in ]0,1[), we show that a slight modification of this algorithm turns out to be remarkably efficient to tackle the computation of the proximal operator. We also present new theoretical insights on the minimizers of the nonconvex proximal problem. We demonstrate the practical interest of using such penalties on several experiments.
null
https://arxiv.org/abs/2506.15315v1
https://arxiv.org/pdf/2506.15315v1.pdf
null
[ "Anne Gagneux", "Mathurin Massias", "Emmanuel Soubies" ]
[]
2025-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/dova-patbm-an-intelligent-adaptive-and
2506.15289
null
null
DOVA-PATBM: An Intelligent, Adaptive, and Scalable Framework for Optimizing Large-Scale EV Charging Infrastructure
The accelerating uptake of battery-electric vehicles demands infrastructure planning tools that are both data-rich and geographically scalable. Whereas most prior studies optimise charging locations for single cities, state-wide and national networks must reconcile the conflicting requirements of dense metropolitan cores, car-dependent exurbs, and power-constrained rural corridors. We present DOVA-PATBM (Deployment Optimisation with Voronoi-oriented, Adaptive, POI-Aware Temporal Behaviour Model), a geo-computational framework that unifies these contexts in a single pipeline. The method rasterises heterogeneous data (roads, population, night lights, POIs, and feeder lines) onto a hierarchical H3 grid, infers intersection importance with a zone-normalised graph neural network centrality model, and overlays a Voronoi tessellation that guarantees at least one five-port DC fast charger within every 30 km radius. Hourly arrival profiles, learned from loop-detector and floating-car traces, feed a finite M/M/c queue to size ports under feeder-capacity and outage-risk constraints. A greedy maximal-coverage heuristic with income-weighted penalties then selects the minimum number of sites that satisfy coverage and equity targets. Applied to the State of Georgia, USA, DOVA-PATBM (i) increases 30 km tile coverage by 12 percentage points, (ii) halves the mean distance that low-income residents travel to the nearest charger, and (iii) meets sub-transmission headroom everywhere -- all while remaining computationally tractable for national-scale roll-outs. These results demonstrate that a tightly integrated, GNN-driven, multi-resolution approach can bridge the gap between academic optimisation and deployable infrastructure policy.
null
https://arxiv.org/abs/2506.15289v1
https://arxiv.org/pdf/2506.15289v1.pdf
null
[ "Chuan Li", "Shunyu Zhao", "Vincent Gauthier", "Hassine Moungla" ]
[ "Graph Neural Network" ]
2025-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Graph Neural Network", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Graph Neural Network", "source_title": "Graph Neural Networks: A Review of Methods and Applications", "source_url": "https://arxiv.org/abs/1812.08434v6" } ]
https://paperswithcode.com/paper/muon-optimizes-under-spectral-norm
2506.15054
null
null
Muon Optimizes Under Spectral Norm Constraints
The pursuit of faster optimization algorithms remains an active and important research direction in deep learning. Recently, the Muon optimizer [JJB+24] has demonstrated promising empirical performance, but its theoretical foundation remains less understood. In this paper, we bridge this gap and provide a theoretical analysis of Muon by placing it within the Lion-$\mathcal{K}$ family of optimizers [CLLL24]. Specifically, we show that Muon corresponds to Lion-$\mathcal{K}$ when equipped with the nuclear norm, and we leverage the theoretical results of Lion-$\mathcal{K}$ to establish that Muon (with decoupled weight decay) implicitly solves an optimization problem that enforces a constraint on the spectral norm of weight matrices. This perspective not only demystifies the implicit regularization effects of Muon but also leads to natural generalizations through varying the choice of convex map $\mathcal{K}$, allowing for the exploration of a broader class of implicitly regularized and constrained optimization algorithms.
null
https://arxiv.org/abs/2506.15054v1
https://arxiv.org/pdf/2506.15054v1.pdf
null
[ "Lizhang Chen", "Jonathan Li", "Qiang Liu" ]
[]
2025-06-18T00:00:00
null
null
null
null
[]
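The paper above interprets Muon as implicitly enforcing a spectral-norm constraint. To make the update concrete, here is a hedged "Muon-style" sketch: momentum followed by approximate orthogonalization of the momentum matrix and a decoupled weight-decay step. The released Muon optimizer uses a tuned higher-order Newton-Schulz polynomial and other implementation details not reproduced here; this sketch uses the classical cubic iteration for the polar factor, and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def newton_schulz_orthogonalize(M, steps=10, eps=1e-7):
    """Approximate the orthogonal polar factor U V^T of M with the classical cubic
    Newton-Schulz iteration; Frobenius normalization keeps singular values in (0, 1],
    which guarantees convergence."""
    X = M / (np.linalg.norm(M) + eps)
    for _ in range(steps):
        X = 0.5 * X @ (3.0 * np.eye(X.shape[1]) - X.T @ X)
    return X

def muon_like_step(W, grad, momentum, lr=0.02, beta=0.95, weight_decay=0.01):
    """One hedged Muon-style update: momentum, orthogonalize, decoupled weight decay."""
    momentum = beta * momentum + grad
    O = newton_schulz_orthogonalize(momentum)
    W = (1.0 - lr * weight_decay) * W - lr * O
    return W, momentum

# The orthogonalized direction has (approximately) unit singular values,
# which is the spectral-norm-style geometry the paper analyzes.
G = rng.normal(size=(64, 32))
print(np.round(np.linalg.svd(newton_schulz_orthogonalize(G), compute_uv=False)[:5], 3))

W = 0.1 * rng.normal(size=(64, 32))
W, mom = muon_like_step(W, G, np.zeros_like(W))
```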
https://paperswithcode.com/paper/robust-hedging-of-american-options-via
2506.14553
null
null
Robust Hedging of American Options via Aggregated Snell Envelopes
We construct an aggregator for a family of Snell envelopes in a nondominated framework. We apply this construction to establish a robust hedging duality, along with the existence of a minimal hedging strategy, in a general semi-martingale setting for American-style options. Our results encompass continuous processes, or processes with jumps and non-vanishing diffusion. A key application is to financial market models, where uncertainty is quantified through the semi-martingale characteristics.
null
https://arxiv.org/abs/2506.14553v1
https://arxiv.org/pdf/2506.14553v1.pdf
null
[ "Marco Rodrigues" ]
[]
2025-06-17T00:00:00
null
null
null
null
[]
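The object at the core of the abstract above is the Snell envelope. As a purely classical, single-measure illustration (not the paper's aggregation over a nondominated family of models), the sketch below computes the Snell envelope of an American put by backward induction on a Cox-Ross-Rubinstein binomial tree; all market parameters are made up.

```python
import numpy as np

def american_put_snell(S0=100.0, K=100.0, r=0.03, sigma=0.2, T=1.0, N=200):
    """Snell envelope on a CRR binomial tree:
    V_N = payoff; V_t = max(payoff_t, discounted risk-neutral expectation of V_{t+1})."""
    dt = T / N
    u, d = np.exp(sigma * np.sqrt(dt)), np.exp(-sigma * np.sqrt(dt))
    q = (np.exp(r * dt) - d) / (u - d)               # risk-neutral up-probability
    disc = np.exp(-r * dt)
    S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
    V = np.maximum(K - S, 0.0)                       # terminal payoff
    for t in range(N - 1, -1, -1):
        S = S0 * u ** np.arange(t, -1, -1) * d ** np.arange(0, t + 1)
        cont = disc * (q * V[:-1] + (1 - q) * V[1:])  # continuation value
        V = np.maximum(K - S, cont)                   # Snell envelope: max(exercise, continue)
    return V[0]

print(f"American put value (Snell envelope at t=0): {american_put_snell():.4f}")
```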
https://paperswithcode.com/paper/balancing-intensity-and-focality-in
2506.13452
null
null
Balancing Intensity and Focality in Directional DBS Under Uncertainty: A Simulation Study of Electrode Optimization via a Metaheuristic L1L1 Approach
As DBS technology advances toward directional leads and optimization-based current steering, this study aims to improve the selection of electrode contact configurations using the recently developed L1-norm regularized L1-norm fitting (L1L1) method. The focus is in particular on L1L1's capability to incorporate a priori lead field uncertainty, offering a potential advantage over conventional approaches that do not account for such variability. Our optimization framework incorporates uncertainty by constraining the solution space based on lead field attenuation. This reflects physiological expectations about the VTA and serves to avoid overfitting. By applying this method to 8- and 40-contact electrode configurations, we optimize current distributions within a discretized finite element (FE) model, focusing on the lead field's characteristics. The model accounts for uncertainty through these explicit constraints, enhancing the feasibility, focality, and robustness of the resulting solutions. The L1L1 method was validated through a series of numerical experiments using both noiseless and noisy lead fields, where the noise level was selected to reflect attenuation within VTA. It successfully fits and regularizes the current distribution across target structures, with hyperparameter optimization extracting either bipolar or multipolar electrode configurations. These configurations aim to maximize focused current density or prioritize a high gain field ratio in a discretized FE model. Compared to traditional methods, the L1L1 approach showed competitive performance in concentrating stimulation within the target region while minimizing unintended current spread, particularly under noisy conditions. By incorporating uncertainty directly into the optimization process, we obtain a noise-robust framework for current steering, allowing for variations in lead field models and simulation parameters.
null
https://arxiv.org/abs/2506.13452v1
https://arxiv.org/pdf/2506.13452v1.pdf
null
[ "Fernando Galaz Prieto", "Antti Lassila", "Maryam Samavaki", "Sampsa Pursiainen" ]
[ "Hyperparameter Optimization" ]
2025-06-16T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/an-explainable-and-interpretable-composite
2506.13259
null
null
An Explainable and Interpretable Composite Indicator Based on Decision Rules
Composite indicators are widely used to score or classify units evaluated on multiple criteria. Their construction involves aggregating criteria evaluations, a common practice in Multiple Criteria Decision Aiding (MCDA). In MCDA, various methods have been proposed to address key aspects of multiple criteria evaluations, such as the measurement scales of the criteria, the degree of acceptable compensation between them, and their potential interactions. However, beyond producing a final score or classification, it is essential to ensure the explainability and interpretability of results as well as the procedure's transparency. This paper proposes a method for constructing explainable and interpretable composite indicators using "if..., then..." decision rules. We consider the explainability and interpretability of composite indicators in four scenarios: (i) decision rules explain numerical scores obtained from an aggregation of numerical codes corresponding to ordinal qualifiers; (ii) an obscure numerical composite indicator classifies units into quantiles; (iii) given preference information provided by a Decision Maker in the form of classifications of some reference units, a composite indicator is constructed using decision rules; (iv) the classification of a set of units results from the application of an MCDA method and is explained by decision rules. To induce the rules from scored or classified units, we apply the Dominance-based Rough Set Approach. The resulting decision rules relate the class assignment or unit's score to threshold conditions on values of selected indicators in an intelligible way, clarifying the underlying rationale. Moreover, they serve to recommend composite indicator assessment for new units of interest.
null
https://arxiv.org/abs/2506.13259v1
https://arxiv.org/pdf/2506.13259v1.pdf
null
[ "Salvatore Corrente", "Salvatore Greco", "Roman Słowiński", "Silvano Zappalà" ]
[]
2025-06-16T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/restarted-contractive-operators-to-learn-at
2506.13239
null
null
Restarted contractive operators to learn at equilibrium
Bilevel optimization offers a methodology to learn hyperparameters in imaging inverse problems, yet its integration with automatic differentiation techniques remains challenging. On the one hand, inverse problems are typically solved by iterating arbitrarily many times some elementary scheme which maps any point to the minimizer of an energy functional, known as equilibrium point. On the other hand, introducing parameters to be learned in the energy functional yield architectures very reminiscent of Neural Networks (NN) known as Unrolled NN and thus suggests the use of Automatic Differentiation (AD) techniques. Yet, applying AD requires for the NN to be of relatively small depth, thus making necessary to truncate an unrolled scheme to a finite number of iterations. First, we show that, at the minimizer, the optimal gradient descent step computed in the Deep Equilibrium (DEQ) framework admits an approximation, known as Jacobian Free Backpropagation (JFB), that is much easier to compute and can be made arbitrarily good by controlling Lipschitz properties of the truncated unrolled scheme. Second, we introduce an algorithm that combines a restart strategy with JFB computed by AD and we show that the learned steps can be made arbitrarily close to the optimal DEQ framework. Third, we complement the theoretical analysis by applying the proposed method to a variety of problems in imaging that progressively depart from the theoretical framework. In particular we show that this method is effective for training weights in weighted norms; stepsizes and regularization levels of Plug-and-Play schemes; and a DRUNet denoiser embedded in Forward-Backward iterates.
null
https://arxiv.org/abs/2506.13239v1
https://arxiv.org/pdf/2506.13239v1.pdf
null
[ "Leo Davy", "Luis M. Briceno-Arias", "N. Pustelnik" ]
[ "Bilevel Optimization" ]
2025-06-16T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A new kind of implicit models, where the output of the network is defined as the solution to an \"infinite-level\" fixed point equation. Thanks to this we can compute the gradient of the output without activations and therefore with a significantly reduced memory footprint.", "full_name": "Deep Equilibrium Models", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Robust Training", "parent": null }, "name": "DEQ", "source_title": "Deep Equilibrium Models", "source_url": "https://arxiv.org/abs/1909.01377v2" } ]
https://paperswithcode.com/paper/unconstrained-robust-online-convex
2506.12781
null
null
Unconstrained Robust Online Convex Optimization
This paper addresses online learning with ``corrupted'' feedback. Our learner is provided with potentially corrupted gradients $\tilde g_t$ instead of the ``true'' gradients $g_t$. We make no assumptions about how the corruptions arise: they could be the result of outliers, mislabeled data, or even malicious interference. We focus on the difficult ``unconstrained'' setting in which our algorithm must maintain low regret with respect to any comparison point $u \in \mathbb{R}^d$. The unconstrained setting is significantly more challenging as existing algorithms suffer extremely high regret even with very tiny amounts of corruption (which is not true in the case of a bounded domain). Our algorithms guarantee regret $ \|u\|G (\sqrt{T} + k) $ when $G \ge \max_t \|g_t\|$ is known, where $k$ is a measure of the total amount of corruption. When $G$ is unknown we incur an extra additive penalty of $(\|u\|^2+G^2) k$.
null
https://arxiv.org/abs/2506.12781v1
https://arxiv.org/pdf/2506.12781v1.pdf
null
[ "Jiujia Zhang", "Ashok Cutkosky" ]
[]
2025-06-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/is-your-batch-size-the-problem-revisiting-the
2506.12543
null
null
Is your batch size the problem? Revisiting the Adam-SGD gap in language modeling
Adam is known to perform significantly better than Stochastic Gradient Descent (SGD) in language models, a phenomenon for which a number of explanations have been proposed. In this work, we revisit this "optimizer gap" through a series of comprehensively tuned baseline training runs for language modeling with Transformers. We exhaustively study how momentum, gradient clipping, and batch size affect the gap between SGD and Adam. Our empirical findings show that SGD with momentum can actually perform similarly to Adam in small-batch settings, if tuned correctly. We revisit existing explanations for Adam's advantage, including heavy-tailed class imbalance, directional sharpness, and Hessian heterogeneity, which struggle to directly explain this phenomenon. Towards bridging this gap in our understanding, by analyzing our Transformer training runs and simple quadratic settings inspired by the literature, we provide new insights, driven by stochastic differential equation models, into the role of batch size on the training dynamics.
null
https://arxiv.org/abs/2506.12543v1
https://arxiv.org/pdf/2506.12543v1.pdf
null
[ "Teodora Srećković", "Jonas Geiping", "Antonio Orvieto" ]
[ "Language Modeling", "Language Modelling" ]
2025-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. 
Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112", "description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))", "full_name": "Stochastic Gradient Descent", "introduced_year": 1951, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "SGD", "source_title": null, "source_url": null } ]
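The record above studies the Adam-SGD gap under careful tuning of momentum, clipping, and batch size. As a generic side-by-side (not the paper's Transformer experiments), the sketch below trains a small-batch logistic regression with SGD-plus-momentum and with Adam; problem sizes and hyperparameters are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def grad(w, idx):
    """Mini-batch gradient of the logistic loss."""
    p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))
    return X[idx].T @ (p - y[idx]) / len(idx)

def train(update, state, steps=2000, batch=16):
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch)
        w, state = update(w, grad(w, idx), state)
    z = X @ w
    return np.mean(np.log1p(np.exp(-z)) + (1 - y) * z)   # full logistic loss

def sgd_momentum(w, g, m, lr=0.1, beta=0.9):
    m = beta * m + g
    return w - lr * m, m

def adam(w, g, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    mhat, vhat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * mhat / (np.sqrt(vhat) + eps), (m, v, t)

print("SGD+momentum loss:", train(sgd_momentum, np.zeros(d)))
print("Adam loss:        ", train(adam, (np.zeros(d), np.zeros(d), 0)))
```

In small-batch regimes like this one, tuned SGD with momentum can land close to Adam, which is the qualitative point the abstract makes.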
https://paperswithcode.com/paper/adjusted-shuffling-sarah-advancing-complexity
2506.12444
null
null
Adjusted Shuffling SARAH: Advancing Complexity Analysis via Dynamic Gradient Weighting
In this paper, we propose Adjusted Shuffling SARAH, a novel algorithm that integrates shuffling techniques with the well-known variance-reduced algorithm SARAH while dynamically adjusting the stochastic gradient weights in each update to enhance exploration. Our method achieves the best-known gradient complexity for shuffling variance reduction methods in a strongly convex setting. This result applies to any shuffling technique, which narrows the gap in the complexity analysis of variance reduction methods between uniform sampling and shuffling data. Furthermore, we introduce Inexact Adjusted Reshuffling SARAH, an inexact variant of Adjusted Shuffling SARAH that eliminates the need for full-batch gradient computations. This algorithm retains the same linear convergence rate as Adjusted Shuffling SARAH while showing an advantage in total complexity when the sample size is very large.
null
https://arxiv.org/abs/2506.12444v1
https://arxiv.org/pdf/2506.12444v1.pdf
null
[ "Duc Toan Nguyen", "Trang H. Tran", "Lam M. Nguyen" ]
[]
2025-06-14T00:00:00
null
null
null
null
[]
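For context on the record above, here is plain shuffling SARAH on a least-squares problem: a full-gradient anchor per epoch plus the recursive variance-reduced estimator, with the data visited in a fresh random permutation each epoch. The paper's adjusted dynamic gradient weighting is not reproduced; problem sizes and the step size are my own.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 500, 30
A = rng.normal(size=(n, d)) / np.sqrt(d)
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

f_i_grad = lambda w, i: (A[i] @ w - b[i]) * A[i]          # gradient of 0.5*(a_i^T w - b_i)^2
full_grad = lambda w: A.T @ (A @ w - b) / n

def shuffling_sarah(epochs=30, lr=0.2):
    """Shuffling SARAH: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}, w_{t+1} = w_t - lr*v_t,
    with one full-gradient anchor per epoch and a random permutation of the data."""
    w = np.zeros(d)
    for _ in range(epochs):
        v = full_grad(w)                                   # anchor: full gradient
        w_prev = w.copy()
        w = w - lr * v
        for i in rng.permutation(n):
            v = f_i_grad(w, i) - f_i_grad(w_prev, i) + v   # SARAH recursion
            w_prev = w.copy()
            w = w - lr * v
    return w

w = shuffling_sarah()
print("||full gradient|| at output:", np.linalg.norm(full_grad(w)))
```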
https://paperswithcode.com/paper/convergence-of-momentum-based-optimization
2506.11904
null
null
Convergence of Momentum-Based Optimization Algorithms with Time-Varying Parameters
In this paper, we present a unified algorithm for stochastic optimization that makes use of a "momentum" term; in other words, the stochastic gradient depends not only on the current true gradient of the objective function, but also on the true gradient at the previous iteration. Our formulation includes the Stochastic Heavy Ball (SHB) and the Stochastic Nesterov Accelerated Gradient (SNAG) algorithms as special cases. In addition, in our formulation, the momentum term is allowed to vary as a function of time (i.e., the iteration counter). The assumptions on the stochastic gradient are the most general in the literature, in that it can be biased, and have a conditional variance that grows in an unbounded fashion as a function of time. This last feature is crucial in order to make the theory applicable to "zero-order" methods, where the gradient is estimated using just two function evaluations. We present a set of sufficient conditions for the convergence of the unified algorithm. These conditions are natural generalizations of the familiar Robbins-Monro and Kiefer-Wolfowitz-Blum conditions for standard stochastic gradient descent. We also analyze another method from the literature for the SHB algorithm with a time-varying momentum parameter, and show that it is impracticable.
null
https://arxiv.org/abs/2506.11904v1
https://arxiv.org/pdf/2506.11904v1.pdf
null
[ "Mathukumalli Vidyasagar" ]
[ "Stochastic Optimization" ]
2025-06-13T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**Nesterov Accelerated Gradient** is a momentum-based [SGD](https://paperswithcode.com/method/sgd) optimizer that \"looks ahead\" to where the parameters will be to calculate the gradient **ex post** rather than **ex ante**:\r\n\r\n$$ v\\_{t} = \\gamma{v}\\_{t-1} - \\eta\\nabla\\_{\\theta}J\\left(\\theta_{t-1}+\\gamma{v\\_{t-1}}\\right) $$\r\n$$ \\theta\\_{t} = \\theta\\_{t-1} + v\\_{t} $$\r\n$$ \\gamma, \\eta \\in \\mathbb{R}^+ $$\r\n\r\nLike SGD with momentum $\\gamma$ is usually set to $0.9$. $\\eta$ and $\\gamma$ are usually less than $1$.\r\n\r\nThe intuition is that the [standard momentum](https://paperswithcode.com/method/sgd-with-momentum) method first computes the gradient at the current location and then takes a big jump in the direction of the updated accumulated gradient. In contrast Nesterov momentum first makes a big jump in the direction of the previous accumulated gradient and then measures the gradient where it ends up and makes a correction. The idea being that it is better to correct a mistake after you have made it. \r\n\r\nImage Source: [Geoff Hinton lecture notes](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)", "full_name": "Nesterov Accelerated Gradient", "introduced_year": 1983, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "Nesterov Accelerated Gradient", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/quantum-learning-and-estimation-for
2506.11730
null
null
Quantum Learning and Estimation for Distribution Networks and Energy Communities Coordination
Price signals from distribution networks (DNs) guide energy communities (ECs) to adjust energy usage, enabling effective coordination for reliable power system operation. However, this coordination faces significant challenges due to the limited availability of information (i.e., only the aggregated energy usage of ECs is available to DNs), and the high computational burden of accounting for uncertainties and the associated risks through numerous scenarios. To address these challenges, we propose a quantum learning and estimation approach to enhance coordination between DNs and ECs. Specifically, leveraging advanced quantum properties such as quantum superposition and entanglement, we develop a hybrid quantum temporal convolutional network-long short-term memory (Q-TCN-LSTM) model to establish an end-to-end mapping between ECs' responses and the price incentives from DNs. Moreover, we develop a quantum estimation method based on quantum amplitude estimation (QAE) and two phase-rotation circuits to significantly accelerate the optimization process under numerous uncertainty scenarios. Numerical experiments demonstrate that, compared to classical neural networks, the proposed Q-TCN-LSTM model improves the mapping accuracy by 69.2% while reducing the model size by 99.75% and the computation time by 93.9%. Compared to classical Monte Carlo simulation, QAE achieves comparable accuracy with a dramatic reduction in computational time (up to 99.99%) and requires significantly fewer computational resources.
null
https://arxiv.org/abs/2506.11730v1
https://arxiv.org/pdf/2506.11730v1.pdf
null
[ "Yingrui Zhuang", "Lin Cheng", "Yuji Cao", "Tongxin Li", "Ning Qi", "Yan Xu", "Yue Chen" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-sample-complexity-of-parameter-free
2506.11336
null
null
The Sample Complexity of Parameter-Free Stochastic Convex Optimization
We study the sample complexity of stochastic convex optimization when problem parameters, e.g., the distance to optimality, are unknown. We pursue two strategies. First, we develop a reliable model selection method that avoids overfitting the validation set. This method allows us to generically tune the learning rate of stochastic optimization methods to match the optimal known-parameter sample complexity up to $\log\log$ factors. Second, we develop a regularization-based method that is specialized to the case that only the distance to optimality is unknown. This method provides perfect adaptability to unknown distance to optimality, demonstrating a separation between the sample and computational complexity of parameter-free stochastic convex optimization. Combining these two methods allows us to simultaneously adapt to multiple problem structures. Experiments performing few-shot learning on CIFAR-10 by fine-tuning CLIP models and prompt engineering Gemini to count shapes indicate that our reliable model selection method can help mitigate overfitting to small validation sets.
null
https://arxiv.org/abs/2506.11336v1
https://arxiv.org/pdf/2506.11336v1.pdf
null
[ "Jared Lawrence", "Ari Kalinsky", "Hannah Bradfield", "Yair Carmon", "Oliver Hinder" ]
[ "Few-Shot Learning", "Model Selection", "Prompt Engineering", "Stochastic Optimization" ]
2025-06-12T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/OpenAI/CLIP", "description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)", "full_name": "Contrastive Language-Image Pre-training", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Representations", "parent": null }, "name": "CLIP", "source_title": "Learning Transferable Visual Models From Natural Language Supervision", "source_url": "https://arxiv.org/abs/2103.00020v1" } ]
https://paperswithcode.com/paper/gpu-accelerated-modeling-of-biological
2506.19866
null
null
GPU-accelerated Modeling of Biological Regulatory Networks
The complex regulatory dynamics of a biological network can be succinctly captured using discrete logic models. Given even sparse time-course data from the system of interest, previous work has shown that global optimization schemes are suitable for proposing logic models that explain the data and make predictions about how the system will behave under varying conditions. Considering the large scale of the parameter search spaces associated with these regulatory systems, performance optimizations on the level of both hardware and software are necessary for making this a practical tool for in silico pharmaceutical research. We show here how the implementation of these global optimization algorithms in a GPU-computing environment can accelerate the solution of these parameter search problems considerably. We carry out parameter searches on two model biological regulatory systems that represent almost an order of magnitude scale-up in complexity, and we find the gains in efficiency from GPU to be a 33%-43% improvement compared to multi-thread CPU implementations and a 33%-1866% increase compared to CPU in serial. These improvements make global optimization of logic model identification a far more attractive and feasible method for in silico hypothesis generation and design of experiments.
null
https://arxiv.org/abs/2506.19866v1
https://arxiv.org/pdf/2506.19866v1.pdf
null
[ "Joyce Reimer", "Pranta Saha", "Chris Chen", "Neeraj Dhar", "Brook Byrns", "Steven Rayan", "Gordon Broderick" ]
[ "CPU", "global-optimization", "GPU" ]
2025-06-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/towards-reliable-detection-of-empty-space
2506.21486
null
null
Towards Reliable Detection of Empty Space: Conditional Marked Point Processes for Object Detection
Deep neural networks have set the state-of-the-art in computer vision tasks such as bounding box detection and semantic segmentation. Object detectors and segmentation models assign confidence scores to predictions, reflecting the model's uncertainty in object detection or pixel-wise classification. However, these confidence estimates are often miscalibrated, as their architectures and loss functions are tailored to task performance rather than probabilistic foundation. Even with well calibrated predictions, object detectors fail to quantify uncertainty outside detected bounding boxes, i.e., the model does not make a probability assessment of whether an area without detected objects is truly free of obstacles. This poses a safety risk in applications such as automated driving, where uncertainty in empty areas remains unexplored. In this work, we propose an object detection model grounded in spatial statistics. Bounding box data matches realizations of a marked point process, commonly used to describe the probabilistic occurrence of spatial point events identified as bounding box centers, where marks are used to describe the spatial extension of bounding boxes and classes. Our statistical framework enables a likelihood-based training and provides well-defined confidence estimates for whether a region is drivable, i.e., free of objects. We demonstrate the effectiveness of our method through calibration assessments and evaluation of performance.
Even with well calibrated predictions, object detectors fail to quantify uncertainty outside detected bounding boxes, i. e., the model does not make a probability assessment of whether an area without detected objects is truly free of obstacles.
https://arxiv.org/abs/2506.21486v1
https://arxiv.org/pdf/2506.21486v1.pdf
null
[ "Tobias J. Riedlinger", "Kira Maag", "Hanno Gottschalk" ]
[ "Object", "object-detection", "Object Detection", "Point Processes", "Semantic Segmentation" ]
2025-06-26T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/homogenization-of-multi-agent-learning
2506.21079
null
null
Homogenization of Multi-agent Learning Dynamics in Finite-state Markov Games
This paper introduces a new approach for approximating the learning dynamics of multiple reinforcement learning (RL) agents interacting in a finite-state Markov game. The idea is to rescale the learning process by simultaneously reducing the learning rate and increasing the update frequency, effectively treating the agent's parameters as a slow-evolving variable influenced by the fast-mixing game state. Under mild assumptions-ergodicity of the state process and continuity of the updates-we prove the convergence of this rescaled process to an ordinary differential equation (ODE). This ODE provides a tractable, deterministic approximation of the agent's learning dynamics. An implementation of the framework is available at: https://github.com/yannKerzreho/MarkovGameApproximation
This paper introduces a new approach for approximating the learning dynamics of multiple reinforcement learning (RL) agents interacting in a finite-state Markov game.
https://arxiv.org/abs/2506.21079v1
https://arxiv.org/pdf/2506.21079v1.pdf
null
[ "Yann Kerzreho" ]
[ "Reinforcement Learning (RL)" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/near-optimal-estimates-for-the-ell-p
2506.19695
null
null
Near-optimal estimates for the $\ell^p$-Lipschitz constants of deep random ReLU neural networks
This paper studies the $\ell^p$-Lipschitz constants of ReLU neural networks $\Phi: \mathbb{R}^d \to \mathbb{R}$ with random parameters for $p \in [1,\infty]$. The distribution of the weights follows a variant of the He initialization and the biases are drawn from symmetric distributions. We derive high probability upper and lower bounds for wide networks that differ at most by a factor that is logarithmic in the network's width and linear in its depth. In the special case of shallow networks, we obtain matching bounds. Remarkably, the behavior of the $\ell^p$-Lipschitz constant varies significantly between the regimes $ p \in [1,2) $ and $ p \in [2,\infty] $. For $p \in [2,\infty]$, the $\ell^p$-Lipschitz constant behaves similarly to $\Vert g\Vert_{p'}$, where $g \in \mathbb{R}^d$ is a $d$-dimensional standard Gaussian vector and $1/p + 1/p' = 1$. In contrast, for $p \in [1,2)$, the $\ell^p$-Lipschitz constant aligns more closely to $\Vert g \Vert_{2}$.
null
https://arxiv.org/abs/2506.19695v1
https://arxiv.org/pdf/2506.19695v1.pdf
null
[ "Sjoerd Dirksen", "Patrick Finke", "Paul Geuchen", "Dominik Stöger", "Felix Voigtlaender" ]
[]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!", "full_name": "*Communicated@Fast*How Do I Communicate to Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/from-minimax-optimal-importance-sampling-to
2506.19186
null
null
From Minimax Optimal Importance Sampling to Uniformly Ergodic Importance-tempered MCMC
We make two closely related theoretical contributions to the use of importance sampling schemes. First, for independent sampling, we prove that the minimax optimal trial distribution coincides with the target if and only if the target distribution has no atom with probability greater than $1/2$, where "minimax" means that the worst-case asymptotic variance of the self-normalized importance sampling estimator is minimized. When a large atom exists, it should be downweighted by the trial distribution. A similar phenomenon holds for a continuous target distribution concentrated on a small set. Second, we argue that it is often advantageous to run the Metropolis--Hastings algorithm with a tempered stationary distribution, $\pi(x)^\beta$, and correct for the bias by importance weighting. The dynamics of this "importance-tempered" sampling scheme can be described by a continuous-time Markov chain. We prove that for one-dimensional targets with polynomial tails, $\pi(x) \propto (1 + |x|)^{-\gamma}$, this chain is uniformly ergodic if and only if $1/\gamma < \beta < (\gamma - 2)/\gamma$. These results suggest that for target distributions with light or polynomial tails of order $\gamma > 3$, importance tempering can improve the precision of time-average estimators and essentially eliminate the need for burn-in.
null
https://arxiv.org/abs/2506.19186v1
https://arxiv.org/pdf/2506.19186v1.pdf
null
[ "Quan Zhou" ]
[]
2025-06-23T00:00:00
null
null
null
null
[]
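The importance-tempering recipe summarized above (run Metropolis on the tempered target $\pi^\beta$, then reweight time averages by $\pi^{1-\beta}$) can be sketched in a few lines. The snippet below is a minimal discrete-time illustration only, assuming a 1D polynomial-tailed target with $\gamma = 5$ and $\beta = 0.5$ (inside the paper's stated range $1/\gamma < \beta < (\gamma-2)/\gamma$); the paper's continuous-time Markov chain analysis is not reproduced.

```python
import numpy as np

# Minimal sketch of importance-tempered MCMC (assumptions: 1D target with
# polynomial tails, random-walk Metropolis; illustrative only, not the
# paper's continuous-time construction).

rng = np.random.default_rng(0)

gamma, beta = 5.0, 0.5                              # tail index and tempering exponent
log_pi = lambda x: -gamma * np.log1p(np.abs(x))     # log pi up to a constant

def rw_metropolis_tempered(n_steps, step=2.0):
    """Random-walk Metropolis targeting the tempered density pi(x)^beta."""
    x = 0.0
    xs = np.empty(n_steps)
    for t in range(n_steps):
        prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < beta * (log_pi(prop) - log_pi(x)):
            x = prop
        xs[t] = x
    return xs

xs = rw_metropolis_tempered(100_000)

# Correct for the tempering bias by self-normalized importance weighting:
# weights are proportional to pi(x) / pi(x)^beta = pi(x)^(1 - beta).
log_w = (1.0 - beta) * log_pi(xs)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Importance-weighted estimate of E_pi[|X|] versus the naive tempered average.
print("importance-tempered estimate of E|X|:", np.sum(w * np.abs(xs)))
print("naive (biased) tempered-chain average:", np.mean(np.abs(xs)))
```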
https://paperswithcode.com/paper/the-within-orbit-adaptive-leapfrog-no-u-turn
2506.18746
null
null
The Within-Orbit Adaptive Leapfrog No-U-Turn Sampler
Locally adapting parameters within Markov chain Monte Carlo methods while preserving reversibility is notoriously difficult. The success of the No-U-Turn Sampler (NUTS) largely stems from its clever local adaptation of the integration time in Hamiltonian Monte Carlo via a geometric U-turn condition. However, posterior distributions frequently exhibit multi-scale geometries with extreme variations in scale, making it necessary to also adapt the leapfrog integrator's step size locally and dynamically. Despite its practical importance, this problem has remained largely open since the introduction of NUTS by Hoffman and Gelman (2014). To address this issue, we introduce the Within-orbit Adaptive Leapfrog No-U-Turn Sampler (WALNUTS), a generalization of NUTS that adapts the leapfrog step size at fixed intervals of simulated time as the orbit evolves. At each interval, the algorithm selects the largest step size from a dyadic schedule that keeps the energy error below a user-specified threshold. Like NUTS, WALNUTS employs biased progressive state selection to favor states with positions that are further from the initial point along the orbit. Empirical evaluations on multiscale target distributions, including Neal's funnel and the Stock-Watson stochastic volatility time-series model, demonstrate that WALNUTS achieves substantial improvements in sampling efficiency and robustness compared to standard NUTS.
The success of the No-U-Turn Sampler (NUTS) largely stems from its clever local adaptation of the integration time in Hamiltonian Monte Carlo via a geometric U-turn condition.
https://arxiv.org/abs/2506.18746v1
https://arxiv.org/pdf/2506.18746v1.pdf
null
[ "Nawaf Bou-Rabee", "Bob Carpenter", "Tore Selland Kleppe", "Sifan Liu" ]
[]
2025-06-23T00:00:00
null
null
null
null
[]
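As a rough illustration of the step-size ingredient described in the abstract above, the sketch below halves a leapfrog step over a dyadic schedule until the energy error over a fixed interval of simulated time falls below a threshold. It is not WALNUTS or NUTS itself (no U-turn criterion, no biased progressive state selection), and the target, threshold, and interval length are arbitrary assumptions for the demo.

```python
import numpy as np

def leapfrog(q, p, grad_logp, eps, n_steps):
    """Standard leapfrog integrator for Hamiltonian dynamics."""
    q, p = q.copy(), p.copy()
    p += 0.5 * eps * grad_logp(q)
    for _ in range(n_steps - 1):
        q += eps * p
        p += eps * grad_logp(q)
    q += eps * p
    p += 0.5 * eps * grad_logp(q)
    return q, p

def energy(q, p, logp):
    return -logp(q) + 0.5 * p @ p

def adaptive_interval(q, p, logp, grad_logp, T=1.0, max_halvings=10, tol=0.1):
    """Integrate for simulated time T, choosing the largest dyadic step size
    whose energy error over the interval stays below `tol`."""
    H0 = energy(q, p, logp)
    for k in range(max_halvings + 1):
        n = 2 ** k                      # dyadic schedule: eps = T, T/2, T/4, ...
        q_new, p_new = leapfrog(q, p, grad_logp, T / n, n)
        if abs(energy(q_new, p_new, logp) - H0) < tol:
            return q_new, p_new, T / n
    return q_new, p_new, T / n          # fall back to the finest step tried

# Demo on an ill-conditioned 2D Gaussian (stands in for a multiscale target).
cov_inv = np.diag([1.0, 100.0])
logp = lambda q: -0.5 * q @ cov_inv @ q
grad_logp = lambda q: -cov_inv @ q

rng = np.random.default_rng(1)
q, p = np.array([1.0, 0.1]), rng.standard_normal(2)
q, p, eps = adaptive_interval(q, p, logp, grad_logp)
print("accepted step size for this interval:", eps)
```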
https://paperswithcode.com/paper/h-calibration-rethinking-classifier
2506.17968
null
null
h-calibration: Rethinking Classifier Recalibration with Probabilistic Error-Bounded Objective
Deep neural networks have demonstrated remarkable performance across numerous learning tasks but often suffer from miscalibration, resulting in unreliable probability outputs. This has inspired many recent works on mitigating miscalibration, particularly through post-hoc recalibration methods that aim to obtain calibrated probabilities without sacrificing the classification performance of pre-trained models. In this study, we summarize and categorize previous works into three general strategies: intuitively designed methods, binning-based methods, and methods based on formulations of ideal calibration. Through theoretical and practical analysis, we highlight ten common limitations in previous approaches. To address these limitations, we propose a probabilistic learning framework for calibration called h-calibration, which theoretically constructs an equivalent learning formulation for canonical calibration with boundedness. On this basis, we design a simple yet effective post-hoc calibration algorithm. Our method not only overcomes the ten identified limitations but also achieves markedly better performance than traditional methods, as validated by extensive experiments. We further analyze, both theoretically and experimentally, the relationship and advantages of our learning objective compared to traditional proper scoring rule. In summary, our probabilistic framework derives an approximately equivalent differentiable objective for learning error-bounded calibrated probabilities, elucidating the correspondence and convergence properties of computational statistics with respect to theoretical bounds in canonical calibration. The theoretical effectiveness is verified on standard post-hoc calibration benchmarks by achieving state-of-the-art performance. This research offers valuable reference for learning reliable likelihood in related fields.
In this study, we summarize and categorize previous works into three general strategies: intuitively designed methods, binning-based methods, and methods based on formulations of ideal calibration.
https://arxiv.org/abs/2506.17968v1
https://arxiv.org/pdf/2506.17968v1.pdf
null
[ "Wenjian Huang", "Guiping Cao", "Jiahao Xia", "Jingkun Chen", "Hao Wang", "JianGuo Zhang" ]
[ "scoring rule" ]
2025-06-22T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/greedy-selection-under-independent-increments
2506.17941
null
null
Greedy Selection under Independent Increments: A Toy Model Analysis
We study an iterative selection problem over N i.i.d. discrete-time stochastic processes with independent increments. At each stage, a fixed number of processes are retained based on their observed values. Under this simple model, we prove that the optimal strategy for selecting the final maximum-value process is to apply greedy selection at each stage. While the result relies on strong independence assumptions, it offers a clean justification for greedy heuristics in multi-stage elimination settings and may serve as a toy example for understanding related algorithms in high-dimensional applications.
null
https://arxiv.org/abs/2506.17941v1
https://arxiv.org/pdf/2506.17941v1.pdf
null
[ "Huitao Yang" ]
[]
2025-06-22T00:00:00
null
null
null
null
[]
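A quick simulation of the toy model above, assuming Gaussian increments and an arbitrary stage schedule, shows how often stage-wise greedy retention keeps the process that ends up maximal; it is only a sanity check, not part of the paper's proof.

```python
import numpy as np

# Toy simulation of multi-stage greedy selection: N i.i.d. random walks with
# independent (here Gaussian) increments are observed at a few checkpoints,
# and at each checkpoint only the current top-k processes are retained.

rng = np.random.default_rng(0)

def greedy_selects_final_max(n_proc=50, n_steps=30, checkpoints=(10, 20),
                             keep=(20, 5)):
    steps = rng.standard_normal((n_proc, n_steps))
    paths = steps.cumsum(axis=1)
    alive = np.arange(n_proc)
    for t, k in zip(checkpoints, keep):
        vals = paths[alive, t - 1]
        alive = alive[np.argsort(vals)[-k:]]      # greedy: keep current top-k
    winner = alive[np.argmax(paths[alive, -1])]
    return winner == np.argmax(paths[:, -1])      # did we keep the true max?

hits = np.mean([greedy_selects_final_max() for _ in range(2000)])
print(f"greedy pick equals the overall final maximum in {hits:.1%} of runs")
```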
https://paperswithcode.com/paper/on-design-of-representative-distributionally
2506.16230
null
null
On Design of Representative Distributionally Robust Formulations for Evaluation of Tail Risk Measures
Conditional Value-at-Risk (CVaR) is a risk measure widely used to quantify the impact of extreme losses. Owing to the lack of representative samples, CVaR is sensitive to the tails of the underlying distribution. In order to combat this sensitivity, Distributionally Robust Optimization (DRO), which evaluates the worst-case CVaR measure over a set of plausible data distributions, is often deployed. Unfortunately, an improper choice of the DRO formulation can lead to a severe underestimation of tail risk. This paper aims at leveraging extreme value theory to arrive at a DRO formulation which leads to representative worst-case CVaR evaluations in that the above pitfall is avoided while, simultaneously, the worst-case evaluation is not a gross over-estimate of the true CVaR. We demonstrate theoretically that even when there is a paucity of samples in the tail of the distribution, our formulation is readily implementable from data, only requiring calibration of a single scalar parameter. We showcase that our formulation can be easily extended to provide robustness to tail risk in multivariate applications as well as in the evaluation of other commonly used risk measures. Numerical illustrations on synthetic and real-world data showcase the practical utility of our approach.
null
https://arxiv.org/abs/2506.16230v1
https://arxiv.org/pdf/2506.16230v1.pdf
null
[ "Anand Deo" ]
[]
2025-06-19T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/sampling-conditioned-diffusions-via-pathspace
2506.15743
null
null
Sampling conditioned diffusions via Pathspace Projected Monte Carlo
We present an algorithm to sample stochastic differential equations conditioned on rather general constraints, including integral constraints, endpoint constraints, and stochastic integral constraints. The algorithm is a pathspace Metropolis-adjusted manifold sampling scheme, which samples stochastic paths on the submanifold of realizations that adhere to the conditioning constraint. We demonstrate the effectiveness of the algorithm by sampling a dynamical condensation phase transition, conditioning a random walk on a fixed Levy stochastic area, conditioning a stochastic nonlinear wave equation on high amplitude waves, and sampling a stochastic partial differential equation model of turbulent pipe flow conditioned on relaminarization events.
null
https://arxiv.org/abs/2506.15743v1
https://arxiv.org/pdf/2506.15743v1.pdf
null
[ "Tobias Grafke" ]
[]
2025-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/existence-of-adversarial-examples-for-random
2506.12613
null
null
Existence of Adversarial Examples for Random Convolutional Networks via Isoperimetric Inequalities on $\mathbb{so}(d)$
We show that adversarial examples exist for various random convolutional networks, and furthermore, that this is a relatively simple consequence of the isoperimetric inequality on the special orthogonal group $\mathbb{so}(d)$. This extends and simplifies a recent line of work which shows similar results for random fully connected networks.
null
https://arxiv.org/abs/2506.12613v1
https://arxiv.org/pdf/2506.12613v1.pdf
null
[ "Amit Daniely" ]
[]
2025-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/credit-risk-for-large-portfolios-of-green-and
2506.12510
null
null
Credit risk for large portfolios of green and brown loans: extending the ASRF model
We propose a credit risk model for portfolios composed of green and brown loans, extending the ASRF framework via a two-factor copula structure. Systematic risk is modeled using potentially skewed distributions, allowing for asymmetric creditworthiness effects, while idiosyncratic risk remains Gaussian. Under a non-uniform exposure setting, we establish convergence in quadratic mean of the portfolio loss to a limit reflecting the distinct characteristics of the two loan segments. Numerical results confirm the theoretical findings and illustrate how value-at-risk is affected by portfolio granularity, default probabilities, factor loadings, and skewness. Our model accommodates differential sensitivity to systematic shocks and offers a tractable basis for further developments in credit risk modeling, including granularity adjustments, CDO pricing, and empirical analysis of green loan portfolios.
null
https://arxiv.org/abs/2506.12510v1
https://arxiv.org/pdf/2506.12510v1.pdf
null
[ "Alessandro Ramponi", "Sergio Scarlatti" ]
[]
2025-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multi-dimensional-queue-reactive-model-and
2506.11843
null
null
Multi-dimensional queue-reactive model and signal-driven models: a unified framework
We present a Markovian market model driven by a hidden Brownian efficient price. In particular, we extend the queue-reactive model, making its dynamics dependent on the efficient price. Our study focuses on two sub-models: a signal-driven price model where the mid-price jump rates depend on the efficient price and an observable signal, and the usual queue-reactive model dependent on the efficient price via the intensities of the order arrivals. This way, we are able to correlate the evolution of limit order books of different stocks. We prove the stability of the observed mid-price around the efficient price under natural assumptions. Precisely, we show that at the macroscopic scale, prices behave as diffusions. We also develop a maximum likelihood estimation procedure for the model, and test it numerically. Our model is then used to backtest trading strategies in a liquidation context.
null
https://arxiv.org/abs/2506.11843v1
https://arxiv.org/pdf/2506.11843v1.pdf
null
[ "Emmanouil Sfendourakis" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/causal-effect-identification-in-heterogeneous
2506.11756
null
null
Causal Effect Identification in Heterogeneous Environments from Higher-Order Moments
We investigate the estimation of the causal effect of a treatment variable on an outcome in the presence of a latent confounder. We first show that the causal effect is identifiable under certain conditions when data is available from multiple environments, provided that the target causal effect remains invariant across these environments. Secondly, we propose a moment-based algorithm for estimating the causal effect as long as only a single parameter of the data-generating mechanism varies across environments -- whether it be the exogenous noise distribution or the causal relationship between two variables. Conversely, we prove that identifiability is lost if the exogenous noise distributions of both the latent and treatment variables vary across environments. Finally, we propose a procedure to identify which parameter of the data-generating mechanism has varied across the environments and evaluate the performance of our proposed methods through experiments on synthetic data.
null
https://arxiv.org/abs/2506.11756v1
https://arxiv.org/pdf/2506.11756v1.pdf
null
[ "Yaroslav Kivva", "Sina Akbari", "Saber Salehkaleybar", "Negar Kiyavash" ]
[]
2025-06-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/robust-alignment-via-partial-gromov
2506.21507
null
null
Robust Alignment via Partial Gromov-Wasserstein Distances
The Gromov-Wasserstein (GW) problem provides a powerful framework for aligning heterogeneous datasets by matching their internal structures in a way that minimizes distortion. However, GW alignment is sensitive to data contamination by outliers, which can greatly distort the resulting matching scheme. To address this issue, we study robust GW alignment, where upon observing contaminated versions of the clean data distributions, our goal is to accurately estimate the GW alignment cost between the original (uncontaminated) measures. We propose an estimator based on the partial GW distance, which trims out a fraction of the mass from each distribution before optimally aligning the rest. The estimator is shown to be minimax optimal in the population setting and is near-optimal in the finite-sample regime, where the optimality gap originates only from the suboptimality of the plug-in estimator in the empirical estimation setting (i.e., without contamination). Towards the analysis, we derive new structural results pertaining to the approximate pseudo-metric structure of the partial GW distance. Overall, our results endow the partial GW distance with an operational meaning by posing it as a robust surrogate of the classical distance when the observed data may be contaminated.
null
https://arxiv.org/abs/2506.21507v1
https://arxiv.org/pdf/2506.21507v1.pdf
null
[ "Xiaoyun Gong", "Sloan Nietert", "Ziv Goldfeld" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/wild-refitting-for-black-box-prediction
2506.21460
null
null
Wild refitting for black box prediction
We describe and analyze a computationally efficient refitting procedure for computing high-probability upper bounds on the instance-wise mean-squared prediction error of penalized nonparametric estimates based on least-squares minimization. Requiring only a single dataset and black box access to the prediction method, it consists of three steps: computing suitable residuals, symmetrizing and scaling them with a pre-factor $\rho$, and using them to define and solve a modified prediction problem recentered at the current estimate. We refer to it as wild refitting, since it uses Rademacher residual symmetrization as in a wild bootstrap variant. Under relatively mild conditions allowing for noise heterogeneity, we establish a high probability guarantee on its performance, showing that the wild refit with a suitably chosen wild noise scale $\rho$ gives an upper bound on prediction error. This theoretical analysis provides guidance into the design of such procedures, including how the residuals should be formed, the amount of noise rescaling in the wild sub-problem needed for upper bounds, and the local stability properties of the black-box procedure. We illustrate the applicability of this procedure to various problems, including non-rigid structure-from-motion recovery with structured matrix penalties; plug-and-play image restoration with deep neural network priors; and randomized sketching with kernel methods.
null
https://arxiv.org/abs/2506.21460v1
https://arxiv.org/pdf/2506.21460v1.pdf
null
[ "Martin J. Wainwright" ]
[ "Image Restoration", "Prediction" ]
2025-06-26T00:00:00
null
null
null
null
[]
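The three refitting steps listed in the abstract above can be mocked up as follows, with ridge regression standing in for the black-box predictor and an arbitrary wild noise scale ρ; the paper's prescriptions for forming residuals, choosing ρ, and the resulting high-probability guarantee are not reproduced.

```python
import numpy as np

# Sketch of the wild-refitting recipe: fit once, symmetrize residuals with
# Rademacher signs scaled by rho, and re-solve the problem recentered at the
# current estimate. Ridge regression is an assumed stand-in for the black box.

rng = np.random.default_rng(0)

def ridge_fit_predict(X, y, lam=1.0):
    """The 'black box': returns fitted values of a ridge estimator."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return X @ w

# Synthetic data with heterogeneous noise.
n, d = 300, 10
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.5 * (1 + np.abs(X[:, 0])) * rng.standard_normal(n)

# Step 1: fit once and form residuals.
f_hat = ridge_fit_predict(X, y)
resid = y - f_hat

# Step 2: symmetrize the residuals with Rademacher signs and scale by rho.
rho = 1.0
signs = rng.choice([-1.0, 1.0], size=n)
wild_noise = rho * signs * resid

# Step 3: solve the wild sub-problem recentered at the current estimate and
# use the refit's movement as a data-driven proxy for prediction error.
f_wild = ridge_fit_predict(X, f_hat + wild_noise)
wild_proxy = np.mean((f_wild - f_hat) ** 2)
print("wild refitting proxy for mean-squared prediction error:", wild_proxy)
```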
https://paperswithcode.com/paper/hyperspherical-variational-autoencoders-using
2506.21278
null
null
Hyperspherical Variational Autoencoders Using Efficient Spherical Cauchy Distribution
We propose a novel variational autoencoder (VAE) architecture that employs a spherical Cauchy (spCauchy) latent distribution. Unlike traditional Gaussian latent spaces or the widely used von Mises-Fisher (vMF) distribution, spCauchy provides a more natural hyperspherical representation of latent variables, better capturing directional data while maintaining flexibility. Its heavy-tailed nature prevents over-regularization, ensuring efficient latent space utilization while offering a more expressive representation. Additionally, spCauchy circumvents the numerical instabilities inherent to vMF, which arise from computing normalization constants involving Bessel functions. Instead, it enables a fully differentiable and efficient reparameterization trick via M\"obius transformations, allowing for stable and scalable training. The KL divergence can be computed through a rapidly converging power series, eliminating concerns of underflow or overflow associated with evaluation of ratios of hypergeometric functions. These properties make spCauchy a compelling alternative for VAEs, offering both theoretical advantages and practical efficiency in high-dimensional generative modeling.
null
https://arxiv.org/abs/2506.21278v1
https://arxiv.org/pdf/2506.21278v1.pdf
null
[ "Lukas Sablica", "Kurt Hornik" ]
[]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/an-introduction-to-causal-modelling
2506.16486
null
null
An introduction to Causal Modelling
This tutorial provides a concise introduction to modern causal modeling by integrating potential outcomes and graphical methods. We motivate causal questions such as counterfactual reasoning under interventions and define binary treatments and potential outcomes. We discuss causal effect measures-including average treatment effects on the treated and on the untreated-and choices of effect scales for binary outcomes. We derive identification in randomized experiments under exchangeability and consistency, and extend to stratification and blocking designs. We present inverse probability weighting with propensity score estimation and robust inference via sandwich estimators. Finally, we introduce causal graphs, d-separation, the backdoor criterion, single-world intervention graphs, and structural equation models, showing how graphical and potential-outcome approaches complement each other. Emphasis is placed on clear notation, intuitive explanations, and practical examples for applied researchers.
null
https://arxiv.org/abs/2506.16486v2
https://arxiv.org/pdf/2506.16486v2.pdf
null
[ "Gauranga Kumar Baishya" ]
[ "Blocking", "counterfactual", "Counterfactual Reasoning" ]
2025-06-19T00:00:00
null
null
null
null
[]
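As a companion to the tutorial's discussion of inverse probability weighting, the following toy simulation contrasts the confounded difference in means with the IPW estimate, assuming for simplicity that the true logistic propensity score is known rather than estimated.

```python
import numpy as np

# Small simulation of inverse probability weighting (IPW) for the average
# treatment effect. The true propensity is used for clarity; in practice it
# would be estimated (e.g. by logistic regression) and paired with a
# sandwich variance estimator, as discussed in the tutorial.

rng = np.random.default_rng(0)
n = 20_000

x = rng.standard_normal(n)                      # confounder
p = 1 / (1 + np.exp(-(0.5 + x)))                # propensity P(A=1 | X)
a = rng.binomial(1, p)                          # binary treatment
y = 2.0 * a + 1.5 * x + rng.standard_normal(n)  # outcome, true ATE = 2

naive = y[a == 1].mean() - y[a == 0].mean()
ipw = np.mean(a * y / p) - np.mean((1 - a) * y / (1 - p))

print(f"naive difference in means: {naive:.3f} (confounded)")
print(f"IPW estimate of the ATE:   {ipw:.3f} (targets 2.0)")
```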
https://paperswithcode.com/paper/lower-bounds-on-the-size-of-markov
2506.20933
null
null
Lower Bounds on the Size of Markov Equivalence Classes
Causal discovery algorithms typically recover causal graphs only up to their Markov equivalence classes unless additional parametric assumptions are made. The sizes of these equivalence classes reflect the limits of what can be learned about the underlying causal graph from purely observational data. Under the assumptions of acyclicity, causal sufficiency, and a uniform model prior, Markov equivalence classes are known to be small on average. In this paper, we show that this is no longer the case when any of these assumptions is relaxed. Specifically, we prove exponentially large lower bounds for the expected size of Markov equivalence classes in three settings: sparse random directed acyclic graphs, uniformly random acyclic directed mixed graphs, and uniformly random directed cyclic graphs.
null
https://arxiv.org/abs/2506.20933v1
https://arxiv.org/pdf/2506.20933v1.pdf
null
[ "Erik Jahn", "Frederick Eberhardt", "Leonard J. Schulman" ]
[ "Causal Discovery" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-shape-of-consumer-behavior-a-symbolic-and
2506.19759
null
null
The Shape of Consumer Behavior: A Symbolic and Topological Analysis of Time Series
Understanding temporal patterns in online search behavior is crucial for real-time marketing and trend forecasting. Google Trends offers a rich proxy for public interest, yet the high dimensionality and noise of its time-series data present challenges for effective clustering. This study evaluates three unsupervised clustering approaches, Symbolic Aggregate approXimation (SAX), enhanced SAX (eSAX), and Topological Data Analysis (TDA), applied to 20 Google Trends keywords representing major consumer categories. Our results show that while SAX and eSAX offer fast and interpretable clustering for stable time series, they struggle with volatility and complexity, often producing ambiguous ``catch-all'' clusters. TDA, by contrast, captures global structural features through persistent homology and achieves more balanced and meaningful groupings. We conclude with practical guidance for using symbolic and topological methods in consumer analytics and suggest that hybrid approaches combining both perspectives hold strong potential for future applications.
null
https://arxiv.org/abs/2506.19759v1
https://arxiv.org/pdf/2506.19759v1.pdf
null
[ "Pola Bereta", "Ioannis Diamantis" ]
[ "Clustering", "Marketing", "Time Series", "Topological Data Analysis" ]
2025-06-24T00:00:00
null
null
null
null
[]
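A minimal SAX transform (z-normalization, Piecewise Aggregate Approximation, Gaussian breakpoints) conveys the symbolic side of the pipeline studied above; the alphabet size, segment count, and toy series are illustrative assumptions, and eSAX and the persistent-homology (TDA) analysis are not shown.

```python
import numpy as np
from scipy.stats import norm

# Minimal Symbolic Aggregate approXimation (SAX) of a single series.

def sax(series, n_segments=8, alphabet_size=4):
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)                 # z-normalize
    segments = np.array_split(x, n_segments)
    paa = np.array([s.mean() for s in segments])           # PAA segment means
    # Breakpoints that cut the standard normal into equal-probability bins.
    breakpoints = norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
    symbols = np.searchsorted(breakpoints, paa)
    return "".join(chr(ord("a") + int(s)) for s in symbols)

rng = np.random.default_rng(0)
trend = np.sin(np.linspace(0, 4 * np.pi, 240)) + 0.3 * rng.standard_normal(240)
print("SAX word:", sax(trend))
```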
https://paperswithcode.com/paper/cross-regularization-adaptive-model
2506.19755
null
null
Cross-regularization: Adaptive Model Complexity through Validation Gradients
Model regularization requires extensive manual tuning to balance complexity against overfitting. Cross-regularization resolves this tradeoff by directly adapting regularization parameters through validation gradients during training. The method splits parameter optimization - training data guides feature learning while validation data shapes complexity controls - converging provably to cross-validation optima. When implemented through noise injection in neural networks, this approach reveals striking patterns: unexpectedly high noise tolerance and architecture-specific regularization that emerges organically during training. Beyond complexity control, the framework integrates seamlessly with data augmentation, uncertainty calibration and growing datasets while maintaining single-run efficiency through a simple gradient-based approach.
null
https://arxiv.org/abs/2506.19755v1
https://arxiv.org/pdf/2506.19755v1.pdf
null
[ "Carlos Stein Brito" ]
[ "Data Augmentation" ]
2025-06-24T00:00:00
null
null
null
null
[]
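The validation-gradient mechanism described above can be illustrated on a problem where the inner fit has a closed form: below, a ridge penalty is adapted by gradient steps on the validation loss while the weights are refit on the training split. This is only a sketch of the bilevel idea under those assumptions; the paper's noise-injection instantiation in neural networks is not reproduced.

```python
import numpy as np

# Cross-regularization sketch: training data fits the weights, validation
# gradients adapt the regularization strength of a closed-form ridge problem.

rng = np.random.default_rng(0)
n, d = 120, 30
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = 2.0
y = X @ w_true + rng.standard_normal(n)

Xt, yt = X[:80], y[:80]          # training split: fits the weights
Xv, yv = X[80:], y[80:]          # validation split: shapes the complexity

log_lam, lr = 0.0, 0.1
for _ in range(200):
    lam = np.exp(log_lam)
    A = Xt.T @ Xt + lam * np.eye(d)
    w = np.linalg.solve(A, Xt.T @ yt)            # inner ridge solution w(lam)
    dw_dlam = -np.linalg.solve(A, w)             # d w / d lam (analytic)
    grad_val = 2 / len(yv) * (Xv @ w - yv) @ Xv @ dw_dlam   # chain rule
    log_lam -= lr * grad_val * lam               # gradient step in log-space

print(f"validation-gradient-selected lambda: {np.exp(log_lam):.3f}")
print(f"validation MSE: {np.mean((Xv @ w - yv) ** 2):.3f}")
```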
https://paperswithcode.com/paper/when-can-we-reuse-a-calibration-set-for
2506.19689
null
null
When Can We Reuse a Calibration Set for Multiple Conformal Predictions?
Reliable uncertainty quantification is crucial for the trustworthiness of machine learning applications. Inductive Conformal Prediction (ICP) offers a distribution-free framework for generating prediction sets or intervals with user-specified confidence. However, standard ICP guarantees are marginal and typically require a fresh calibration set for each new prediction to maintain their validity. This paper addresses this practical limitation by demonstrating how e-conformal prediction, in conjunction with Hoeffding's inequality, can enable the repeated use of a single calibration set with a high probability of preserving the desired coverage. Through a case study on the CIFAR-10 dataset, we train a deep neural network and utilise a calibration set to estimate a Hoeffding correction. This correction allows us to apply a modified Markov's inequality, leading to the construction of prediction sets with quantifiable confidence. Our results illustrate the feasibility of maintaining provable performance in conformal prediction while enhancing its practicality by reducing the need for repeated calibration. The code for this work is publicly available.
null
https://arxiv.org/abs/2506.19689v1
https://arxiv.org/pdf/2506.19689v1.pdf
null
[ "A. A. Balinsky", "A. D. Balinsky" ]
[ "Conformal Prediction", "Prediction", "Uncertainty Quantification" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/statistical-inference-for-optimal-transport
2506.19025
null
null
Statistical Inference for Optimal Transport Maps: Recent Advances and Perspectives
In many applications of optimal transport (OT), the object of primary interest is the optimal transport map. This map rearranges mass from one probability distribution to another in the most efficient way possible by minimizing a specified cost. In this paper we review recent advances in estimating and developing limit theorems for the OT map, using samples from the underlying distributions. We also review parallel lines of work that establish similar results for special cases and variants of the basic OT setup. We conclude with a discussion of key directions for future research with the goal of providing practitioners with reliable inferential tools.
null
https://arxiv.org/abs/2506.19025v1
https://arxiv.org/pdf/2506.19025v1.pdf
null
[ "Sivaraman Balakrishnan", "Tudor Manole", "Larry Wasserman" ]
[]
2025-06-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-random-matrix-analysis-of-in-context
2506.18656
null
null
A Random Matrix Analysis of In-context Memorization for Nonlinear Attention
Attention mechanisms have revolutionized machine learning (ML) by enabling efficient modeling of global dependencies across inputs. Their inherently parallelizable structures allow for efficient scaling with the exponentially increasing size of both pretrained data and model parameters. Yet, despite their central role as the computational backbone of modern large language models (LLMs), the theoretical understanding of Attentions, especially in the nonlinear setting, remains limited. In this paper, we provide a precise characterization of the \emph{in-context memorization error} of \emph{nonlinear Attention}, in the high-dimensional proportional regime where the number of input tokens $n$ and their embedding dimension $p$ are both large and comparable. Leveraging recent advances in the theory of large kernel random matrices, we show that nonlinear Attention typically incurs higher memorization error than linear ridge regression on random inputs. However, this gap vanishes, and can even be reversed, when the input exhibits statistical structure, particularly when the Attention weights align with the input signal direction. Our results reveal how nonlinearity and input structure interact with each other to govern the memorization performance of nonlinear Attention. The theoretical insights are supported by numerical experiments.
null
https://arxiv.org/abs/2506.18656v1
https://arxiv.org/pdf/2506.18656v1.pdf
null
[ "Zhenyu Liao", "Jiaqing Liu", "Tianqi Hou", "Difan Zou", "Zenan Ling" ]
[ "Memorization" ]
2025-06-23T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
https://paperswithcode.com/paper/distributed-poisson-multi-bernoulli-filtering
2506.18397
null
null
Distributed Poisson multi-Bernoulli filtering via generalised covariance intersection
This paper presents the distributed Poisson multi-Bernoulli (PMB) filter based on the generalised covariance intersection (GCI) fusion rule for distributed multi-object filtering. Since the exact GCI fusion of two PMB densities is intractable, we derive a principled approximation. Specifically, we approximate the power of a PMB density as an unnormalised PMB density, which corresponds to an upper bound of the PMB density. Then, the GCI fusion rule corresponds to the normalised product of two unnormalised PMB densities. We show that the result is a Poisson multi-Bernoulli mixture (PMBM), which can be expressed in closed form. Future prediction and update steps in each filter preserve the PMBM form, which can be projected back to a PMB density before the next fusion step. Experimental results show the benefits of this approach compared to other distributed multi-object filters.
null
https://arxiv.org/abs/2506.18397v1
https://arxiv.org/pdf/2506.18397v1.pdf
null
[ "Ángel F. García-Fernández", "Giorgio Battistelli" ]
[ "Future prediction" ]
2025-06-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/identifiable-convex-concave-regression-via
2506.18078
null
null
Identifiable Convex-Concave Regression via Sub-gradient Regularised Least Squares
We propose a novel nonparametric regression method that models complex input-output relationships as the sum of convex and concave components. The method-Identifiable Convex-Concave Nonparametric Least Squares (ICCNLS)-decomposes the target function into additive shape-constrained components, each represented via sub-gradient-constrained affine functions. To address the affine ambiguity inherent in convex-concave decompositions, we introduce global statistical orthogonality constraints, ensuring that residuals are uncorrelated with both intercept and input variables. This enforces decomposition identifiability and improves interpretability. We further incorporate L1, L2 and elastic net regularisation on sub-gradients to enhance generalisation and promote structural sparsity. The proposed method is evaluated on synthetic and real-world datasets, including healthcare pricing data, and demonstrates improved predictive accuracy and model simplicity compared to conventional CNLS and difference-of-convex (DC) regression approaches. Our results show that statistical identifiability, when paired with convex-concave structure and sub-gradient regularisation, yields interpretable models suited for forecasting, benchmarking, and policy evaluation.
null
https://arxiv.org/abs/2506.18078v1
https://arxiv.org/pdf/2506.18078v1.pdf
null
[ "William Chung" ]
[ "Benchmarking", "regression" ]
2025-06-22T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multi-armed-bandits-with-machine-learning
2506.16658
null
null
Multi-Armed Bandits With Machine Learning-Generated Surrogate Rewards
Multi-armed bandit (MAB) is a widely adopted framework for sequential decision-making under uncertainty. Traditional bandit algorithms rely solely on online data, which tends to be scarce as it must be gathered during the online phase when the arms are actively pulled. However, in many practical settings, rich auxiliary data, such as covariates of past users, is available prior to deploying any arms. We introduce a new setting for MAB where pre-trained machine learning (ML) models are applied to convert side information and historical data into \emph{surrogate rewards}. A prominent feature of this setting is that the surrogate rewards may exhibit substantial bias, as true reward data is typically unavailable in the offline phase, forcing ML predictions to heavily rely on extrapolation. To address the issue, we propose the Machine Learning-Assisted Upper Confidence Bound (MLA-UCB) algorithm, which can be applied to any reward prediction model and any form of auxiliary data. When the predicted and true rewards are jointly Gaussian, it provably improves the cumulative regret, provided that the correlation is non-zero -- even in cases where the mean surrogate reward completely misaligns with the true mean rewards. Notably, our method requires no prior knowledge of the covariance matrix between true and surrogate rewards. We compare MLA-UCB with the standard UCB on a range of numerical studies and show a sizable efficiency gain even when the size of the offline data and the correlation between predicted and true rewards are moderate.
null
https://arxiv.org/abs/2506.16658v1
https://arxiv.org/pdf/2506.16658v1.pdf
null
[ "Wenlong Ji", "Yihan Pan", "Ruihao Zhu", "Lihua Lei" ]
[ "Decision Making Under Uncertainty", "Multi-Armed Bandits", "Sequential Decision Making" ]
2025-06-20T00:00:00
null
null
null
null
[]
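For orientation, the vanilla UCB1 baseline that MLA-UCB is compared against looks as follows (Gaussian rewards assumed); the surrogate-reward correction that defines MLA-UCB itself is not implemented here.

```python
import numpy as np

# Standard UCB1 bandit loop; MLA-UCB would augment the arm-mean estimates
# with ML-generated surrogate rewards, which is not reproduced in this sketch.

rng = np.random.default_rng(0)

def ucb1(true_means, horizon=5_000):
    k = len(true_means)
    counts, sums = np.zeros(k), np.zeros(k)
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                    # pull each arm once
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = true_means[arm] + rng.standard_normal()   # noisy reward
        counts[arm] += 1
        sums[arm] += reward
        regret += true_means.max() - true_means[arm]       # pseudo-regret
    return regret

print("cumulative regret of UCB1:", ucb1(np.array([0.1, 0.3, 0.5, 0.7])))
```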
https://paperswithcode.com/paper/leveraging-optimal-transport-for-distributed
2506.16047
null
null
Leveraging Optimal Transport for Distributed Two-Sample Testing: An Integrated Transportation Distance-based Framework
This paper introduces a novel framework for distributed two-sample testing using the Integrated Transportation Distance (ITD), an extension of the Optimal Transport distance. The approach addresses the challenges of detecting distributional changes in decentralized learning or federated learning environments, where data privacy and heterogeneity are significant concerns. We provide theoretical foundations for the ITD, including convergence properties and asymptotic behavior. A permutation test procedure is proposed for practical implementation in distributed settings, allowing for efficient computation while preserving data privacy. The framework's performance is demonstrated through theoretical power analysis and extensive simulations, showing robust Type I error control and high power across various distributions and dimensions. The results indicate that ITD effectively aggregates information across distributed clients, detecting subtle distributional shifts that might be missed when examining individual clients. This work contributes to the growing field of distributed statistical inference, offering a powerful tool for two-sample testing in modern, decentralized data environments.
null
https://arxiv.org/abs/2506.16047v1
https://arxiv.org/pdf/2506.16047v1.pdf
null
[ "Zhengqi Lin", "Yan Chen" ]
[ "Federated Learning", "Two-sample testing" ]
2025-06-19T00:00:00
null
null
null
null
[]
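The permutation-test skeleton referenced in the abstract above is generic; the sketch below uses the energy-distance statistic as a stand-in so it runs end to end, whereas the paper would plug in the Integrated Transportation Distance computed across distributed clients.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Generic permutation two-sample test with an interchangeable statistic.

def energy_statistic(x, y):
    """Sample energy distance between two point clouds."""
    return (2 * cdist(x, y).mean()
            - cdist(x, x).mean()
            - cdist(y, y).mean())

def permutation_test(x, y, stat=energy_statistic, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    observed = stat(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        if stat(pooled[perm[:n]], pooled[perm[n:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)        # permutation p-value

rng = np.random.default_rng(1)
x = rng.standard_normal((100, 3))
y = rng.standard_normal((100, 3)) + 0.4      # shifted alternative
print("permutation p-value:", permutation_test(x, y))
```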
https://paperswithcode.com/paper/identifiability-by-common-backdoor-in-summary
2506.14862
null
null
Identifiability by common backdoor in summary causal graphs of time series
The identifiability problem for interventions aims at assessing whether the total effect of some given interventions can be written with a do-free formula, and thus be computed from observational data only. We study this problem, considering multiple interventions and multiple effects, in the context of time series when only abstractions of the true causal graph in the form of summary causal graphs are available. We focus in this study on identifiability by a common backdoor set, and establish, for time series with and without consistency throughout time, conditions under which such a set exists. We also provide algorithms of limited complexity to decide whether the problem is identifiable or not.
null
https://arxiv.org/abs/2506.14862v1
https://arxiv.org/pdf/2506.14862v1.pdf
null
[ "Clément Yvernes", "Charles K. Assaad", "Emilie Devijver", "Eric Gaussier" ]
[ "Time Series" ]
2025-06-17T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/beyond-sin-squared-error-linear-time
2506.12655
null
null
Beyond Sin-Squared Error: Linear-Time Entrywise Uncertainty Quantification for Streaming PCA
We propose a novel statistical inference framework for streaming principal component analysis (PCA) using Oja's algorithm, enabling the construction of confidence intervals for individual entries of the estimated eigenvector. Most existing works on streaming PCA focus on providing sharp sin-squared error guarantees. Recently, there has been some interest in uncertainty quantification for the sin-squared error. However, uncertainty quantification or sharp error guarantees for entries of the estimated eigenvector in the streaming setting remains largely unexplored. We derive a sharp Bernstein-type concentration bound for elements of the estimated vector matching the optimal error rate up to logarithmic factors. We also establish a Central Limit Theorem for a suitably centered and scaled subset of the entries. To efficiently estimate the coordinate-wise variance, we introduce a provably consistent subsampling algorithm that leverages the median-of-means approach, empirically achieving similar accuracy to multiplier bootstrap methods while being significantly more computationally efficient. Numerical experiments demonstrate its effectiveness in providing reliable uncertainty estimates with a fraction of the computational cost of existing methods.
null
https://arxiv.org/abs/2506.12655v1
https://arxiv.org/pdf/2506.12655v1.pdf
null
[ "Syamantak Kumar", "Shourya Pandey", "Purnamrita Sarkar" ]
[ "Uncertainty Quantification" ]
2025-06-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)", "full_name": "Principal Components Analysis", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.", "name": "Dimensionality Reduction", "parent": null }, "name": "PCA", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/bridging-classical-molecular-dynamics-and
2506.20830
null
null
Bridging Classical Molecular Dynamics and Quantum Foundations for Comprehensive Protein Structural Analysis
The objective of this paper is to investigate the structural stability, dynamic properties, and potential interactions among Amyloid Precursor Protein (APP), Tau, and Alpha-synuclein through a series of molecular dynamics simulations that integrate publicly available structural data, detailed force-field parameters, and comprehensive analytical protocols. By focusing on these three proteins, which are each implicated in various neurodegenerative disorders, the study aims to elucidate how their conformational changes and interprotein contact sites may influence larger biological processes. Through rigorous evaluation of their folding behaviors, energetic interactions, and residue-specific functions, this work contributes to the broader understanding of protein aggregation mechanisms and offers insights that may ultimately guide therapeutic intervention strategies.
null
https://arxiv.org/abs/2506.20830v1
https://arxiv.org/pdf/2506.20830v1.pdf
null
[ "Don Roosan", "Rubayat Khan", "Tiffany Khou", "Saif Nirzhor", "Fahmida Hai", "Brian Provencher" ]
[]
2025-06-25T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/procaliper-functional-and-structural-analysis
2506.19961
null
null
ProCaliper: functional and structural analysis, visualization, and annotation of proteins
Understanding protein function at the molecular level requires connecting residue-level annotations with physical and structural properties. This can be cumbersome and error-prone when functional annotation, computation of physico-chemical properties, and structure visualization are separated. To address this, we introduce ProCaliper, an open-source Python library for computing and visualizing physico-chemical properties of proteins. It can retrieve annotation and structure data from UniProt and AlphaFold databases, compute residue-level properties such as charge, solvent accessibility, and protonation state, and interactively visualize the results of these computations along with user-supplied residue-level data. Additionally, ProCaliper incorporates functional and structural information to construct and optionally sparsify networks that encode the distance between residues and/or annotated functional sites or regions. The package ProCaliper and its source code, along with the code used to generate the figures in this manuscript, are freely available at https://github.com/PNNL-Predictive-Phenomics/ProCaliper.
Understanding protein function at the molecular level requires connecting residue-level annotations with physical and structural properties.
https://arxiv.org/abs/2506.19961v1
https://arxiv.org/pdf/2506.19961v1.pdf
null
[ "Jordan C. Rozum", "Hunter Ufford", "Alexandria K. Im", "Tong Zhang", "David D. Pollock", "Doo Nam Kim", "Song Feng" ]
[]
2025-06-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-standard-transformer-and-attention-with
2506.19834
null
null
A standard transformer and attention with linear biases for molecular conformer generation
Sampling low-energy molecular conformations, spatial arrangements of atoms in a molecule, is a critical task for many different calculations performed in the drug discovery and optimization process. Numerous specialized equivariant networks have been designed to generate molecular conformations from 2D molecular graphs. Recently, non-equivariant transformer models have emerged as a viable alternative due to their capability to scale to improve generalization. However, the concern has been that non-equivariant models require a large model size to compensate the lack of equivariant bias. In this paper, we demonstrate that a well-chosen positional encoding effectively addresses these size limitations. A standard transformer model incorporating relative positional encoding for molecular graphs when scaled to 25 million parameters surpasses the current state-of-the-art non-equivariant base model with 64 million parameters on the GEOM-DRUGS benchmark. We implemented relative positional encoding as a negative attention bias that linearly increases with the shortest path distances between graph nodes at varying slopes for different attention heads, similar to ALiBi, a widely adopted relative positional encoding technique in the NLP domain. This architecture has the potential to serve as a foundation for a novel class of generative models for molecular conformations.
null
https://arxiv.org/abs/2506.19834v1
https://arxiv.org/pdf/2506.19834v1.pdf
null
[ "Viatcheslav Gurev", "Timothy Rumbell" ]
[ "Drug Discovery" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" }, { "code_snippet_url": "", "description": "**ALiBi**, or **Attention with Linear Biases**, is a [positioning method](https://paperswithcode.com/methods/category/position-embeddings) that allows [Transformer](https://paperswithcode.com/methods/category/transformers) language models to consume, at inference time, sequences which are longer than the ones they were trained on. \r\n\r\nALiBi does this without using actual position embeddings. Instead, computing the attention between a certain key and query, ALiBi penalizes the attention value that that query can assign to the key depending on how far away the key and query are. So when a key and query are close by, the penalty is very low, and when they are far away, the penalty is very high. \r\n\r\nThis method was motivated by the simple reasoning that words that are close-by matter much more than ones that are far away.\r\n\r\nThis method is as fast as the sinusoidal or absolute embedding methods (the fastest positioning methods there are). It outperforms those methods and Rotary embeddings when evaluating sequences that are longer than the ones the model was trained on (this is known as extrapolation).", "full_name": "Attention with Linear Biases", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Inference Extrapolation", "parent": null }, "name": "ALiBi", "source_title": "Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation", "source_url": "https://arxiv.org/abs/2108.12409v2" } ]
https://paperswithcode.com/paper/proxelgen-generating-proteins-as-3d-densities
2506.19820
null
null
ProxelGen: Generating Proteins as 3D Densities
We develop ProxelGen, a protein structure generative model that operates on 3D densities as opposed to the prevailing 3D point cloud representations. Representing proteins as voxelized densities, or proxels, enables new tasks and conditioning capabilities. We generate proteins encoded as proxels via a 3D CNN-based VAE in conjunction with a diffusion model operating on its latent space. Compared to state-of-the-art models, ProxelGen's samples achieve higher novelty, better FID scores, and the same level of designability as the training set. ProxelGen's advantages are demonstrated in a standard motif scaffolding benchmark, and we show how 3D density-based generation allows for more flexible shape conditioning.
null
https://arxiv.org/abs/2506.19820v1
https://arxiv.org/pdf/2506.19820v1.pdf
null
[ "Felix Faltings", "Hannes Stark", "Regina Barzilay", "Tommi Jaakkola" ]
[]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/toward-the-explainability-of-protein-language
2506.19532
null
null
Toward the Explainability of Protein Language Models for Sequence Design
Transformer-based language models excel in a variety of protein-science tasks that range from structure prediction to the design of functional enzymes. However, these models operate as black boxes, and their underlying working principles remain unclear. Here, we survey emerging applications of explainable artificial intelligence (XAI) to protein language models (pLMs) and describe their potential in protein research. We break down the workflow of a generative decoder-only Transformer into four information contexts: (i) training sequences, (ii) input prompt, (iii) model architecture, and (iv) output sequence. For each, we describe existing methods and applications of XAI. Additionally, from published studies we distil five (potential) roles that XAI can play in protein design: Evaluator, Multitasker, Engineer, Coach, and Teacher, with the Evaluator role being the only one widely adopted so far. These roles aim to help both protein science practitioners and model developers understand the possibilities and limitations of implementing XAI for the design of sequences. Finally, we highlight the critical areas of application for the future, including risks related to security, trustworthiness, and bias, and we call for community benchmarks, open-source tooling, and domain-specific visualizations to advance explainable protein design. Overall, our analysis aims to move the discussion toward the use of XAI in protein design.
null
https://arxiv.org/abs/2506.19532v1
https://arxiv.org/pdf/2506.19532v1.pdf
null
[ "Andrea Hunklinger", "Noelia Ferruz" ]
[ "Explainable artificial intelligence", "Explainable Artificial Intelligence (XAI)", "Protein Design" ]
2025-06-24T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. 
Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" } ]
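To make the "input prompt" context discussed in the record above concrete, here is a minimal sketch of one common XAI technique for decoder-only models, input-times-gradient attribution; the toy model, vocabulary size, and target logit are placeholders, not anything from the surveyed papers.

```python
# Illustrative sketch (not from the paper): input-x-gradient attribution over the
# prompt tokens of a toy autoregressive model, a common XAI technique for pLMs.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, d_model, seq_len = 25, 32, 10  # e.g. 20 amino acids plus special tokens

class ToyDecoder(nn.Module):
    """Stand-in for a decoder-only pLM: embeddings -> causal self-attention -> logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, emb):
        # Causal mask so each position only attends to earlier positions.
        mask = torch.triu(torch.full((emb.size(1), emb.size(1)), float("-inf")), diagonal=1)
        return self.lm_head(self.encoder(emb, mask=mask))

model = ToyDecoder().eval()
tokens = torch.randint(0, vocab_size, (1, seq_len))
emb = model.embed(tokens).detach().requires_grad_(True)   # attribute w.r.t. embeddings
logits = model(emb)
target_logit = logits[0, -1, 3]                            # logit of one candidate next residue
target_logit.backward()
attribution = (emb.grad * emb).sum(-1).squeeze(0)          # input x gradient per prompt position
print(attribution)                                         # relevance of each position to that logit
```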
https://paperswithcode.com/paper/generative-modeling-of-full-atom-protein
2506.17064
null
null
Generative Modeling of Full-Atom Protein Conformations using Latent Diffusion on Graph Embeddings
Generating diverse, all-atom conformational ensembles of dynamic proteins such as G-protein-coupled receptors (GPCRs) is critical for understanding their function, yet most generative models simplify atomic detail or ignore conformational diversity altogether. We present latent diffusion for full protein generation (LD-FPG), a framework that constructs complete all-atom protein structures, including every side-chain heavy atom, directly from molecular dynamics (MD) trajectories. LD-FPG employs a Chebyshev graph neural network (ChebNet) to obtain low-dimensional latent embeddings of protein conformations, which are processed using three pooling strategies: blind, sequential and residue-based. A diffusion model trained on these latent representations generates new samples that a decoder, optionally regularized by dihedral-angle losses, maps back to Cartesian coordinates. Using D2R-MD, a 2-microsecond MD trajectory (12 000 frames) of the human dopamine D2 receptor in a membrane environment, the sequential and residue-based pooling strategy reproduces the reference ensemble with high structural fidelity (all-atom lDDT of approximately 0.7; C-alpha-lDDT of approximately 0.8) and recovers backbone and side-chain dihedral-angle distributions with a Jensen-Shannon divergence of less than 0.03 compared to the MD data. LD-FPG thereby offers a practical route to system-specific, all-atom ensemble generation for large proteins, providing a promising tool for structure-based therapeutic design on complex, dynamic targets. The D2R-MD dataset and our implementation are freely available to facilitate further research.
null
https://arxiv.org/abs/2506.17064v2
https://arxiv.org/pdf/2506.17064v2.pdf
null
[ "Aditya Sengar", "Ali Hariri", "Daniel Probst", "Patrick Barth", "Pierre Vandergheynst" ]
[ "Graph Neural Network" ]
2025-06-20T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" }, { "code_snippet_url": null, "description": "", "full_name": "Graph Neural Network", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Graph Neural Network", "source_title": "Graph Neural Networks: A Review of Methods and Applications", "source_url": "https://arxiv.org/abs/1812.08434v6" } ]
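As a rough illustration of the ChebNet encoder that LD-FPG builds on, the snippet below implements a plain Chebyshev graph convolution in NumPy; the adjacency, feature sizes, and filter order are toy assumptions, not the authors' configuration.

```python
# Illustrative sketch (assumptions, not the released code): an order-K Chebyshev
# graph convolution, the building block of a ChebNet encoder.
import numpy as np

def cheb_conv(X, A, W):
    """X: (N, F_in) node features, A: (N, N) adjacency, W: (K, F_in, F_out) filter weights."""
    N = A.shape[0]
    d = A.sum(1)
    L = np.eye(N) - A / np.sqrt(np.outer(d, d))          # symmetric normalized Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lam_max - np.eye(N)                 # rescale spectrum to [-1, 1]
    Tx = [X, L_hat @ X]                                   # Chebyshev recurrence: T_0, T_1
    for _ in range(2, W.shape[0]):
        Tx.append(2.0 * L_hat @ Tx[-1] - Tx[-2])          # T_k = 2 L_hat T_{k-1} - T_{k-2}
    return sum(T @ W[k] for k, T in enumerate(Tx[:W.shape[0]]))

# Toy usage: 5 atoms, 3 input features, order-3 filter, 8 output channels.
rng = np.random.default_rng(0)
A = np.array([[0,1,0,0,1],[1,0,1,0,0],[0,1,0,1,0],[0,0,1,0,1],[1,0,0,1,0]], float)
X = rng.standard_normal((5, 3))
W = rng.standard_normal((3, 3, 8)) * 0.1
print(cheb_conv(X, A, W).shape)  # (5, 8)
```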
https://paperswithcode.com/paper/single-cell-proteomic-technologies-tools-in
2506.18198
null
null
Single-Cell Proteomic Technologies: Tools in the quest for principles
Over the last decade, proteomic analysis of single cells by mass spectrometry transitioned from an uncertain possibility to a set of robust and rapidly advancing technologies supporting the accurate quantification of thousands of proteins. We review the major drivers of this progress, from establishing feasibility to powerful and increasingly scalable methods. We focus on the tradeoffs and synergies of different technological solutions within a coherent conceptual framework, which projects considerable room both for throughput scaling and for extending the analysis scope to functional protein measurements. We highlight the potential of these technologies to support the development of mechanistic biophysical models and help uncover new principles.
null
https://arxiv.org/abs/2506.18198v1
https://arxiv.org/pdf/2506.18198v1.pdf
null
[ "Nikolai Slavov" ]
[]
2025-06-22T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/omniesi-a-unified-framework-for-enzyme
2506.17963
null
null
OmniESI: A unified framework for enzyme-substrate interaction prediction with progressive conditional deep learning
Understanding and modeling enzyme-substrate interactions is crucial for catalytic mechanism research, enzyme engineering, and metabolic engineering. Although a large number of predictive methods have emerged, they do not incorporate prior knowledge of enzyme catalysis to rationally modulate general protein-molecule features that are misaligned with catalytic patterns. To address this issue, we introduce a two-stage progressive framework, OmniESI, for enzyme-substrate interaction prediction through conditional deep learning. By decomposing the modeling of enzyme-substrate interactions into a two-stage progressive process, OmniESI incorporates two conditional networks that respectively emphasize enzymatic reaction specificity and crucial catalysis-related interactions, facilitating a gradual feature modulation in the latent space from general protein-molecule domain to catalysis-aware domain. On top of this unified architecture, OmniESI can adapt to a variety of downstream tasks, including enzyme kinetic parameter prediction, enzyme-substrate pairing prediction, enzyme mutational effect prediction, and enzymatic active site annotation. Under the multi-perspective performance evaluation of in-distribution and out-of-distribution settings, OmniESI consistently delivered superior performance than state-of-the-art specialized methods across seven benchmarks. More importantly, the proposed conditional networks were shown to internalize the fundamental patterns of catalytic efficiency while significantly improving prediction performance, with only negligible parameter increases (0.16%), as demonstrated by ablation studies on key components. Overall, OmniESI represents a unified predictive approach for enzyme-substrate interactions, providing an effective tool for catalytic mechanism cracking and enzyme engineering with strong generalization and broad applicability.
Understanding and modeling enzyme-substrate interactions is crucial for catalytic mechanism research, enzyme engineering, and metabolic engineering.
https://arxiv.org/abs/2506.17963v1
https://arxiv.org/pdf/2506.17963v1.pdf
null
[ "Zhiwei Nie", "Hongyu Zhang", "Hao Jiang", "Yutian Liu", "Xiansong Huang", "Fan Xu", "Jie Fu", "Zhixiang Ren", "Yonghong Tian", "Wen-Bin Zhang", "Jie Chen" ]
[ "Parameter Prediction", "Prediction", "Specificity" ]
2025-06-22T00:00:00
null
null
null
null
[]
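The "gradual feature modulation in the latent space" described in the abstract above can be illustrated with a generic FiLM-style conditional layer; the sketch below is an assumption about how such conditioning commonly looks, not OmniESI's actual conditional networks.

```python
# Minimal sketch of a generic conditional feature-modulation (FiLM-style) block; layer
# names and sizes are assumptions, used only to illustrate conditional steering of
# protein-molecule features toward a catalysis-aware representation.
import torch
import torch.nn as nn

class ConditionalModulation(nn.Module):
    def __init__(self, feat_dim, cond_dim):
        super().__init__()
        self.to_scale_shift = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, h, cond):
        # h: (batch, feat_dim) protein-molecule features; cond: (batch, cond_dim) condition
        gamma, beta = self.to_scale_shift(cond).chunk(2, dim=-1)
        return h * (1 + gamma) + beta   # scale-and-shift modulation of the latent features

h = torch.randn(4, 128)       # general protein-molecule embedding
cond = torch.randn(4, 16)     # e.g. a learned "catalysis context" vector (hypothetical)
print(ConditionalModulation(128, 16)(h, cond).shape)  # torch.Size([4, 128])
```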
https://paperswithcode.com/paper/abrank-a-benchmark-dataset-and-metric
2506.17857
null
null
AbRank: A Benchmark Dataset and Metric-Learning Framework for Antibody-Antigen Affinity Ranking
Accurate prediction of antibody-antigen (Ab-Ag) binding affinity is essential for therapeutic design and vaccine development, yet the performance of current models is limited by noisy experimental labels, heterogeneous assay conditions, and poor generalization across the vast antibody and antigen sequence space. We introduce AbRank, a large-scale benchmark and evaluation framework that reframes affinity prediction as a pairwise ranking problem. AbRank aggregates over 380,000 binding assays from nine heterogeneous sources, spanning diverse antibodies, antigens, and experimental conditions, and introduces standardized data splits that systematically increase distribution shift, from local perturbations such as point mutations to broad generalization across novel antigens and antibodies. To ensure robust supervision, AbRank defines an m-confident ranking framework by filtering out comparisons with marginal affinity differences, focusing training on pairs with at least an m-fold difference in measured binding strength. As a baseline for the benchmark, we introduce WALLE-Affinity, a graph-based approach that integrates protein language model embeddings with structural information to predict pairwise binding preferences. Our benchmarks reveal significant limitations in current methods under realistic generalization settings and demonstrate that ranking-based training improves robustness and transferability. In summary, AbRank offers a robust foundation for machine learning models to generalize across the antibody-antigen space, with direct relevance for scalable, structure-aware antibody therapeutic design.
Accurate prediction of antibody-antigen (Ab-Ag) binding affinity is essential for therapeutic design and vaccine development, yet the performance of current models is limited by noisy experimental labels, heterogeneous assay conditions, and poor generalization across the vast antibody and antigen sequence space.
https://arxiv.org/abs/2506.17857v1
https://arxiv.org/pdf/2506.17857v1.pdf
null
[ "Chunan Liu", "Aurelien Pelissier", "Yanjun Shao", "Lilian Denzler", "Andrew C. R. Martin", "Brooks Paige", "Mariia Rodriguez Martinez" ]
[ "Metric Learning", "Protein Language Model" ]
2025-06-21T00:00:00
null
null
null
null
[]
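A minimal sketch of the m-confident pairwise-ranking idea described above, assuming a simple margin ranking loss and toy binding strengths; it is not the released WALLE-Affinity or benchmark code.

```python
# Illustrative sketch (an assumption, not the released code): build "m-confident"
# ranking pairs (at least an m-fold difference in binding strength) and train
# predicted scores with a margin ranking loss.
import itertools
import torch
import torch.nn as nn

def m_confident_pairs(strengths, m=10.0):
    """Keep only pairs whose measured binding strengths differ by a factor >= m."""
    pairs = []
    for i, j in itertools.combinations(range(len(strengths)), 2):
        ratio = max(strengths[i], strengths[j]) / min(strengths[i], strengths[j])
        if ratio >= m:
            pairs.append((i, j) if strengths[i] > strengths[j] else (j, i))
    return pairs  # (stronger_binder, weaker_binder) index pairs

strengths = [1.0, 12.0, 15.0, 400.0]            # toy binding strengths for four Ab-Ag complexes
scores = torch.randn(4, requires_grad=True)     # model-predicted scores (higher = stronger)
pairs = m_confident_pairs(strengths, m=10.0)

loss_fn = nn.MarginRankingLoss(margin=0.5)
stronger = torch.stack([scores[i] for i, _ in pairs])
weaker = torch.stack([scores[j] for _, j in pairs])
loss = loss_fn(stronger, weaker, torch.ones(len(pairs)))  # target +1: stronger should outrank weaker
loss.backward()
print(len(pairs), round(loss.item(), 3))
```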
https://paperswithcode.com/paper/aptamer-protein-interaction-prediction-model
2506.16084
null
null
Aptamer-protein interaction prediction model based on transformer
Aptamers are single-stranded DNA/RNAs or short peptides with unique tertiary structures that selectively bind to specific targets. They have great potential in the detection and medical fields. Here, we present SelfTrans-Ensemble, a deep learning model that integrates sequence information models and structural information models to extract multi-scale features for predicting aptamer-protein interactions (APIs). The model employs two pre-trained models, ProtBert and RNA-FM, to encode protein and aptamer sequences, along with features generated from primary sequence and secondary structural information. To address the data imbalance in the aptamer dataset, we incorporated short RNA-protein interaction data in the training set. This resulted in a training accuracy of 98.9% and a test accuracy of 88.0%, demonstrating the model's effectiveness in accurately predicting APIs. Additionally, analysis using molecular simulation indicated that SelfTrans-Ensemble is sensitive to aptamer sequence mutations. We anticipate that SelfTrans-Ensemble can offer a more efficient and rapid process for aptamer screening.
null
https://arxiv.org/abs/2506.16084v1
https://arxiv.org/pdf/2506.16084v1.pdf
null
[ "Zhichao Yan", "Yue Kang", "Buyong Ma" ]
[ "Prediction" ]
2025-06-19T00:00:00
null
null
null
null
[]
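As an illustration of how sequence-embedding fusion models of this kind are typically wired, the sketch below classifies aptamer-protein pairs from pre-computed ProtBert-style (1024-d) and RNA-FM-style (640-d) embeddings plus extra hand-crafted features; the dimensions and architecture are assumptions, not the authors' implementation.

```python
# Minimal sketch with assumed tensor shapes: a fusion classifier over pre-computed
# protein and aptamer sequence embeddings plus hand-crafted sequence/structure features.
# Embedding extraction (e.g. ProtBert, RNA-FM) is assumed to happen upstream.
import torch
import torch.nn as nn

class APIClassifier(nn.Module):
    def __init__(self, prot_dim=1024, apt_dim=640, extra_dim=32, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(prot_dim + apt_dim + extra_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, prot_emb, apt_emb, extra):
        x = torch.cat([prot_emb, apt_emb, extra], dim=-1)    # fuse the three feature views
        return torch.sigmoid(self.mlp(x)).squeeze(-1)         # P(interaction)

# Toy batch of 8 aptamer-protein pairs with random stand-in features.
model = APIClassifier()
p = model(torch.randn(8, 1024), torch.randn(8, 640), torch.randn(8, 32))
print(p.shape)  # torch.Size([8])
```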
https://paperswithcode.com/paper/active-learning-guided-seq2seq-variational
2506.15309
null
null
Active Learning-Guided Seq2Seq Variational Autoencoder for Multi-target Inhibitor Generation
Simultaneously optimizing molecules against multiple therapeutic targets remains a profound challenge in drug discovery, particularly due to sparse rewards and conflicting design constraints. We propose a structured active learning (AL) paradigm integrating a sequence-to-sequence (Seq2Seq) variational autoencoder (VAE) into iterative loops designed to balance chemical diversity, molecular quality, and multi-target affinity. Our method alternates between expanding chemically feasible regions of latent space and progressively constraining molecules based on increasingly stringent multi-target docking thresholds. In a proof-of-concept study targeting three related coronavirus main proteases (SARS-CoV-2, SARS-CoV, MERS-CoV), our approach efficiently generated a structurally diverse set of pan-inhibitor candidates. We demonstrate that careful timing and strategic placement of chemical filters within this active learning pipeline markedly enhance exploration of beneficial chemical space, transforming the sparse-reward, multi-objective drug design problem into an accessible computational task. Our framework thus provides a generalizable roadmap for efficiently navigating complex polypharmacological landscapes.
null
https://arxiv.org/abs/2506.15309v1
https://arxiv.org/pdf/2506.15309v1.pdf
null
[ "Júlia Vilalta-Mor", "Alexis Molina", "Laura Ortega Varga", "Isaac Filella-Merce", "Victor Guallar" ]
[ "Active Learning", "Diversity", "Drug Design", "Drug Discovery" ]
2025-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
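A schematic sketch of the iterative loop described in the abstract above, with trivial random stand-ins for the VAE sampler, decoder, chemical filters, and docking engine; thresholds and sample counts are illustrative only.

```python
# Schematic sketch of the active-learning loop; every helper below is a random
# stand-in for the real Seq2Seq VAE, chemical filters, and docking engine.
import random

random.seed(0)

def sample_latent(n):            # stand-in for sampling the VAE latent space
    return [[random.gauss(0, 1) for _ in range(8)] for _ in range(n)]

def decode_smiles(z):            # stand-in for the VAE decoder
    return "C" * (1 + int(abs(z[0]) * 3))

def passes_chemical_filters(s):  # stand-in for quality/diversity filters
    return len(s) > 1

def dock_score(s, target):       # stand-in for a docking score (lower = better)
    return random.uniform(-9.0, -4.0)

def active_learning_round(targets, threshold, n_samples=200):
    accepted = []
    for z in sample_latent(n_samples):               # expand feasible latent-space regions
        smiles = decode_smiles(z)
        if not passes_chemical_filters(smiles):      # filters placed inside the loop
            continue
        if all(dock_score(smiles, t) <= threshold for t in targets):
            accepted.append(smiles)                  # satisfies all three proteases
    return accepted

targets = ["SARS-CoV-2 Mpro", "SARS-CoV Mpro", "MERS-CoV Mpro"]
for threshold in (-6.0, -7.0, -8.0):                 # progressively stricter cut-offs
    hits = active_learning_round(targets, threshold)
    print(threshold, len(hits))                      # in practice: retrain the VAE on `hits`
```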
https://paperswithcode.com/paper/cbtope2-an-improved-method-for-predicting-of
2506.13395
null
null
CBTOPE2: An improved method for predicting of conformational B-cell epitopes in an antigen from its primary sequence
In 2009, our group pioneered a novel method, CBTOPE, for predicting conformational B-cell epitopes in a protein from its amino acid sequence, which received extensive citations from the scientific community. In a recent study, Cia et al. (2023) evaluated the performance of conformational B-cell epitope prediction methods on a well-curated dataset, revealing that most approaches, including CBTOPE, exhibited poor performance. One plausible cause of this diminished performance is that available methods were trained on datasets that are both limited in size and outdated in content. In this study, we present an enhanced version of CBTOPE, trained, tested, and evaluated using the well-curated dataset from Cia et al. (2023). Initially, we developed machine learning-based models using binary profiles, achieving a maximum AUC of 0.58 on the validation dataset. The performance of our method improved significantly from an AUC of 0.58 to 0.63 when incorporating evolutionary information in the form of a Position-Specific Scoring Matrix (PSSM) profile. Furthermore, the performance increased from an AUC of 0.63 to 0.64 when we integrated both the PSSM profile and relative solvent accessibility (RSA). All models were trained, tested, and optimized on the training dataset using five-fold cross-validation. The final performance of our models was assessed using a validation or independent dataset that was not used during hyperparameter optimization. To facilitate the scientific community working in the field of subunit vaccines, we have developed standalone software and a web server, CBTOPE2 (https://webs.iiitd.edu.in/raghava/cbtope2/).
null
https://arxiv.org/abs/2506.13395v1
https://arxiv.org/pdf/2506.13395v1.pdf
null
[ "Anupma Pandey", "Megha", "Nishant Kumar", "Ruchir Sahni", "Gajendra P. S. Raghava" ]
[ "Hyperparameter Optimization" ]
2025-06-16T00:00:00
null
null
null
null
[]
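The training recipe summarized above (windowed PSSM plus RSA features, five-fold cross-validation, AUC) can be sketched with scikit-learn on random stand-in data; the classifier choice and window size below are assumptions, not the paper's exact setup.

```python
# Minimal sketch of the evaluation recipe on random stand-in features: per-residue
# windows of PSSM (20 columns) + RSA (1 column), a random forest standing in for
# whichever classifier the authors used, and 5-fold cross-validated AUC.
# Feature extraction from real antigens is assumed to happen upstream.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_residues, window = 500, 5
X = rng.standard_normal((n_residues, window * 21))   # 20 PSSM values + RSA per window position
y = rng.integers(0, 2, n_residues)                   # 1 = epitope residue, 0 = non-epitope

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(auc.mean())   # ~0.5 on random data; the paper reports ~0.64 with PSSM + RSA on real data
```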
https://paperswithcode.com/paper/lapddpm-a-conditional-graph-diffusion-model
2506.13344
null
null
LapDDPM: A Conditional Graph Diffusion Model for scRNA-seq Generation with Spectral Adversarial Perturbations
Generating high-fidelity and biologically plausible synthetic single-cell RNA sequencing (scRNA-seq) data, especially with conditional control, is challenging due to its high dimensionality, sparsity, and complex biological variations. Existing generative models often struggle to capture these unique characteristics and ensure robustness to structural noise in cellular networks. We introduce LapDDPM, a novel conditional Graph Diffusion Probabilistic Model for robust and high-fidelity scRNA-seq generation. LapDDPM uniquely integrates graph-based representations with a score-based diffusion model, enhanced by a novel spectral adversarial perturbation mechanism on graph edge weights. Our contributions are threefold: we leverage Laplacian Positional Encodings (LPEs) to enrich the latent space with crucial cellular relationship information; we develop a conditional score-based diffusion model for effective learning and generation from complex scRNA-seq distributions; and we employ a unique spectral adversarial training scheme on graph edge weights, boosting robustness against structural variations. Extensive experiments on diverse scRNA-seq datasets demonstrate LapDDPM's superior performance, achieving high fidelity and generating biologically-plausible, cell-type-specific samples. LapDDPM sets a new benchmark for conditional scRNA-seq data generation, offering a robust tool for various downstream biological applications.
We introduce LapDDPM, a novel conditional Graph Diffusion Probabilistic Model for robust and high-fidelity scRNA-seq generation.
https://arxiv.org/abs/2506.13344v1
https://arxiv.org/pdf/2506.13344v1.pdf
null
[ "Lorenzo Bini", "Stephane Marchand-Maillet" ]
[]
2025-06-16T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
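To illustrate the Laplacian Positional Encodings that LapDDPM uses to enrich its latent space, the sketch below computes the leading non-trivial eigenvectors of a normalized graph Laplacian for a toy cell graph; the graph construction and encoding dimension are assumptions, not the released code.

```python
# Illustrative sketch (not the released code): Laplacian Positional Encodings for a
# cell-cell graph, i.e. the first few non-trivial eigenvectors of the symmetric
# normalized graph Laplacian, appended to each cell's representation.
import numpy as np

def laplacian_pe(A, k=4):
    """A: (N, N) symmetric adjacency of the cell graph; returns (N, k) positional encodings."""
    d = A.sum(1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))   # symmetric normalized Laplacian
    eigval, eigvec = np.linalg.eigh(L)                  # eigenvalues in ascending order
    return eigvec[:, 1:k + 1]                           # skip the trivial zero-eigenvalue mode

# Toy 6-cell ring graph.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
print(laplacian_pe(A, k=2).shape)  # (6, 2)
```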