paper_url (string, 35–81) | arxiv_id (string, 6–35, nullable) | nips_id (float64) | openreview_id (string, 9–93, nullable) | title (string, 1–1.02k, nullable) | abstract (string, 0–56.5k, nullable) | short_abstract (string, 0–1.95k, nullable) | url_abs (string, 16–996) | url_pdf (string, 16–996, nullable) | proceeding (string, 7–1.03k, nullable) | authors (list, 0–3.31k) | tasks (list, 0–147) | date (timestamp[ns], 1951-09-01 to 2222-12-22, nullable) | conference_url_abs (string, 16–199, nullable) | conference_url_pdf (string, 21–200, nullable) | conference (string, 2–47, nullable) | reproduces_paper (string, 22 classes) | methods (list, 0–7.5k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/in-vitro-antibacterial-activity-of-hexane
|
2506.13121
| null | null |
In Vitro Antibacterial Activity of Hexane, Chloroform and Methanolic Extracts of Different Parts of Acronychia pedunculata Grown in Sri Lanka
|
This study assessed the in vitro antibacterial potential of hexane, chloroform and methanol extracts made from the leaves, stem bark, flowers, seeds or roots of the Sri Lankan-grown Acronychia pedunculata plant against two Gram-positive bacteria, Staphylococcus aureus (ATCC 25923) and Bacillus cereus (ATCC 11778), and two Gram-negative bacteria, Pseudomonas aeruginosa (ATCC 9027) and Escherichia coli (ATCC 35218), using the agar disc diffusion bioassay technique. The results showed that none of the extracts provoked an antibacterial action against the two Gram-negative bacteria P. aeruginosa and E. coli. Conversely, compared to the reference drug, gentamicin, varying magnitudes of antibacterial activity (concentration: 300 mg/disc), ranging from none to mild, moderate, and strong, were evident with the three solvent systems made from different parts of the plant against the two Gram-positive bacteria S. aureus and B. cereus. All three flower extracts exerted marked antibacterial activity against both S. aureus and B. cereus. The highest antibacterial activity was exhibited by the methanol flower extract (inhibition zone: 13.8±0.32 mm), with a minimum inhibitory concentration of 32 mg/ml, against B. cereus. The overall order of potency against S. aureus was chloroform flowers > chloroform seeds > hexane leaves > chloroform leaves > methanol flowers > hexane flowers > methanol seeds, and against B. cereus it was methanol flowers > hexane leaves > hexane flowers > chloroform leaves > chloroform flowers > chloroform seeds > hexane roots > chloroform roots > methanol seeds > chloroform stem bark = hexane stem bark. These are all novel findings for A. pedunculata in Sri Lanka and elsewhere.
| null |
https://arxiv.org/abs/2506.13121v1
|
https://arxiv.org/pdf/2506.13121v1.pdf
| null |
[
"R. D. Nimantha Karunathilaka",
"Athige Rajith Niloshan Silva",
"Chathuranga Bharathee Ranaweera",
"D. M. R. K. Dissanayake",
"N. R. M. Nelumdeniya",
"Ranjith Pathirana",
"W. D. Ratnasooriya"
] |
[] | 2025-06-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
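The Diffusion method entry above summarizes the DDPM training objective as a reweighted variational lower bound. For reference, the simplified objective from the cited source (https://arxiv.org/abs/2006.11239) is commonly written as:

```latex
L_{\mathrm{simple}}(\theta)
  = \mathbb{E}_{t,\, x_0,\, \epsilon \sim \mathcal{N}(0, I)}
    \Big[ \big\| \epsilon - \epsilon_\theta\big( \sqrt{\bar{\alpha}_t}\, x_0
      + \sqrt{1 - \bar{\alpha}_t}\, \epsilon,\; t \big) \big\|^2 \Big]
```

where \(\bar{\alpha}_t\) is the cumulative product of the noise schedule and \(\epsilon_\theta\) is the learned denoiser.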
https://paperswithcode.com/paper/pdcnet-a-benchmark-and-general-deep-learning
|
2506.12821
| null | null |
PDCNet: a benchmark and general deep learning framework for activity prediction of peptide-drug conjugates
|
Peptide-drug conjugates (PDCs) represent a promising therapeutic avenue for human diseases, particularly in cancer treatment. Systematic elucidation of structure-activity relationships (SARs) and accurate prediction of the activity of PDCs are critical for the rational design and optimization of these conjugates. To this end, we carefully design and construct a benchmark PDC dataset compiled from literature-derived collections and the PDCdb database, and then develop PDCNet, the first unified deep learning framework for forecasting the activity of PDCs. The architecture systematically captures the complex factors underlying anticancer decisions of PDCs in real-world scenarios through a multi-level feature fusion framework that collaboratively characterizes and learns the features of peptides, linkers, and payloads. Leveraging the curated PDC benchmark dataset, comprehensive evaluation results show that PDCNet demonstrates superior predictive capability, with the highest AUC, F1, MCC and BA scores of 0.9213, 0.7656, 0.7071 and 0.8388 for the test set, outperforming eight established traditional machine learning models. Multi-level validations, including 5-fold cross-validation, threshold testing, ablation studies, model interpretability analysis and external independent testing, further confirm the superiority, robustness, and usability of the PDCNet architecture. We anticipate that PDCNet represents a novel paradigm, incorporating both a benchmark dataset and advanced models, which can accelerate the design and discovery of new PDC-based therapeutic agents.
| null |
https://arxiv.org/abs/2506.12821v1
|
https://arxiv.org/pdf/2506.12821v1.pdf
| null |
[
"Yun Liu",
"Jintu Huang",
"Yingying Zhu",
"Congrui Wen",
"Yu Pang",
"Ji-Quan Zhang",
"Ling Wang"
] |
[
"Activity Prediction"
] | 2025-06-15T00:00:00 | null | null | null | null |
[] |
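The PDCNet abstract reports AUC, F1, MCC, and BA on its test set. As a minimal sketch of how those four metrics are computed (here with scikit-learn and placeholder arrays, not PDCNet's outputs):

```python
# Sketch: computing the four metrics reported for PDCNet (AUC, F1, MCC, BA)
# with scikit-learn. The arrays below are placeholders, not PDCNet outputs.
import numpy as np
from sklearn.metrics import (roc_auc_score, f1_score,
                             matthews_corrcoef, balanced_accuracy_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # binary activity labels
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])   # model scores
y_pred = (y_prob >= 0.5).astype(int)                          # thresholded predictions

print("AUC:", roc_auc_score(y_true, y_prob))
print("F1: ", f1_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
print("BA: ", balanced_accuracy_score(y_true, y_pred))
```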
https://paperswithcode.com/paper/smartphone-integrated-rpa-crispr-cas12a
|
2506.15728
| null | null |
Smartphone-integrated RPA-CRISPR-Cas12a Detection System with Microneedle Sampling for Point-of-Care Diagnosis of Potato Late Blight in Early Stage
|
Potato late blight, caused by the oomycete pathogen Phytophthora infestans, is one of the most devastating diseases affecting potato crops in history. Although conventional detection methods for plant diseases such as PCR and LAMP are highly sensitive and specific, they rely on bulky and expensive laboratory equipment and involve complex operations, making them impracticable for point-of-care diagnosis in the field. In this study, we report a portable RPA-CRISPR-based diagnosis system for plant disease that integrates a smartphone for the acquisition and analysis of fluorescent images. A polyvinyl alcohol (PVA) microneedle patch was employed for sample extraction from plant leaves within one minute; the DNA extraction efficiency reached 56 ug/mg, approximately 3 times that of the traditional CTAB method (18 ug/mg). An RPA-CRISPR-Cas12a isothermal assay was established to specifically target P. infestans, with no cross-reactivity observed against closely related species (P. sojae, P. capsici). The system demonstrated a detection limit of 2 pg/uL for P. infestans genomic DNA, offering sensitivity comparable to that of benchtop laboratory equipment. The system demonstrates early-stage diagnosis capability, achieving approximately 80% and 100% detection rates on the third and fourth day post-inoculation, respectively, before visible symptoms appeared on the leaves. The smartphone-based "sample-to-result" system removes the reliance of traditional methods on specialized laboratory equipment, offering a promising route to early-stage plant disease detection and control in the field.
| null |
https://arxiv.org/abs/2506.15728v1
|
https://arxiv.org/pdf/2506.15728v1.pdf
| null |
[
"Jiangnan Zhao",
"Hanbo Xu",
"Cifu Xu",
"Wenlong Yin",
"Laixin Luo",
"Gang Liu",
"Yan Wang"
] |
[] | 2025-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/pfmbench-protein-foundation-model-benchmark
|
2506.14796
| null | null |
PFMBench: Protein Foundation Model Benchmark
|
This study investigates the current landscape and future directions of protein foundation model research. While recent advancements have transformed protein science and engineering, the field lacks a comprehensive benchmark for fair evaluation and in-depth understanding. Since ESM-1B, numerous protein foundation models have emerged, each with unique datasets and methodologies. However, evaluations often focus on limited tasks tailored to specific models, hindering insights into broader generalization and limitations. Specifically, researchers struggle to understand the relationships between tasks, assess how well current models perform across them, and determine the criteria in developing new foundation models. To fill this gap, we present PFMBench, a comprehensive benchmark evaluating protein foundation models across 38 tasks spanning 8 key areas of protein science. Through hundreds of experiments on 17 state-of-the-art models across 38 tasks, PFMBench reveals the inherent correlations between tasks, identifies top-performing models, and provides a streamlined evaluation protocol. Code is available at https://github.com/biomap-research/PFMBench.
|
To fill this gap, we present PFMBench, a comprehensive benchmark evaluating protein foundation models across 38 tasks spanning 8 key areas of protein science.
|
https://arxiv.org/abs/2506.14796v1
|
https://arxiv.org/pdf/2506.14796v1.pdf
| null |
[
"Zhangyang Gao",
"Hao Wang",
"Cheng Tan",
"Chenrui Xu",
"Mengdi Liu",
"Bozhen Hu",
"Linlin Chao",
"XiaoMing Zhang",
"Stan Z. Li"
] |
[
"model"
] | 2025-06-01T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/scmamba-a-scalable-foundation-model-for
|
2506.20697
| null | null |
scMamba: A Scalable Foundation Model for Single-Cell Multi-Omics Integration Beyond Highly Variable Feature Selection
|
The advent of single-cell multi-omics technologies has enabled the simultaneous profiling of diverse omics layers within individual cells. Integrating such multimodal data provides unprecedented insights into cellular identity, regulatory processes, and disease mechanisms. However, it remains challenging, as current methods often rely on selecting highly variable genes or peaks during preprocessing, which may inadvertently discard crucial biological information. Here, we present scMamba, a foundation model designed to integrate single-cell multi-omics data without the need for prior feature selection while preserving genomic positional information. scMamba introduces a patch-based cell tokenization strategy that treats genomic regions as words (tokens) and cells as sentences. Building upon the concept of state space duality, scMamba distills rich biological insights from high-dimensional, sparse single-cell multi-omics data. Additionally, our novel contrastive learning approach, enhanced with cosine similarity regularization, enables superior alignment across omics layers compared to traditional methods. Systematic benchmarking across multiple datasets demonstrates that scMamba significantly outperforms state-of-the-art methods in preserving biological variation, aligning omics layers, and enhancing key downstream tasks such as clustering, cell type annotation, and trajectory inference. Our findings position scMamba as a powerful tool for large-scale single-cell multi-omics integration, capable of handling large-scale atlases and advancing biological discovery.
| null |
https://arxiv.org/abs/2506.20697v1
|
https://arxiv.org/pdf/2506.20697v1.pdf
| null |
[
"Zhen Yuan",
"Shaoqing Jiao",
"Yihang Xiao",
"Jiajie Peng"
] |
[
"Benchmarking",
"Contrastive Learning",
"feature selection"
] | 2025-06-25T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "Feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.",
"full_name": "Feature Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**AutoML** methods are used to automatically solve machine learning tasks without needing the user to specify or experiment with architectures, hyperparameters and other settings. Below you can find a continuously updating list of AutoML methods.",
"name": "AutoML",
"parent": null
},
"name": "Feature Selection",
"source_title": "Feature Selection and Feature Extraction in Pattern Analysis: A Literature Review",
"source_url": "https://arxiv.org/abs/1905.02845v1"
}
] |
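The scMamba abstract describes contrastive learning enhanced with cosine similarity regularization for aligning omics layers. The sketch below is one plausible reading of such an objective (InfoNCE between matched cells plus a cosine alignment term); it is an assumption-laden illustration, not the paper's exact loss:

```python
# Minimal sketch of a cross-omics contrastive objective with a cosine-similarity
# regularizer, in the spirit of the scMamba abstract. This is an illustration
# under assumptions (InfoNCE + cosine alignment term), not the paper's loss.
import torch
import torch.nn.functional as F

def contrastive_align_loss(z_rna, z_atac, temperature=0.07, reg_weight=0.1):
    # z_rna, z_atac: (n_cells, d) embeddings of the same cells from two omics layers
    z1 = F.normalize(z_rna, dim=1)
    z2 = F.normalize(z_atac, dim=1)
    logits = z1 @ z2.T / temperature          # pairwise similarities
    targets = torch.arange(z1.size(0))        # matched cells are positives
    nce = 0.5 * (F.cross_entropy(logits, targets) +
                 F.cross_entropy(logits.T, targets))
    # cosine regularizer: push each cell's two views toward similarity 1
    cos_reg = (1 - (z1 * z2).sum(dim=1)).mean()
    return nce + reg_weight * cos_reg

z_rna, z_atac = torch.randn(32, 64), torch.randn(32, 64)
loss = contrastive_align_loss(z_rna, z_atac)
```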
https://paperswithcode.com/paper/inferring-exocytosis-profiles-from-cell
|
2506.17472
| null | null |
Inferring Exocytosis Profiles from Cell Shapes Using a Dual-Configuration Model of Walled Cell Tip Growth
|
Tip growth in filamentous cells, such as root hairs, moss protonemata, and fungal hyphae, depends on coordinated cell wall extension driven by turgor pressure, wall mechanics, and exocytosis. We introduce a dual-configuration model that incorporates both turgid and unturgid states to describe cell wall growth as the combined effect of elastic deformation and irreversible extension. This framework infers exocytosis profiles directly from cell morphology and elastic stretches, formulated as an initial value problem based on the self-similarity condition. Applying the model to Medicago truncatula root hairs, moss Physcomitrium patens protonemata, and hyphoid-like shapes, we find that exocytosis peaks at the tip in tapered cells but shifts to an annular region away from the apex in flatter-tip cells beyond a threshold. The model generalizes previous fluid models and provides a mechanistic link between exocytosis distribution and cell shape, explaining observed variations in tip-growing cells across species.
| null |
https://arxiv.org/abs/2506.17472v1
|
https://arxiv.org/pdf/2506.17472v1.pdf
| null |
[
"Kamryn Spinelli",
"Chaozhen Wei",
"Luis Vidali",
"Min Wu"
] |
[] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/quantification-of-information-flow-by-dual
|
2506.15957
| null | null |
Quantification of Information Flow by Dual Reporter System and Its Application to Bacterial Chemotaxis
|
Mutual information is a theoretically grounded metric for quantifying cellular signaling pathways. However, its measurement demands characterization of both input and output distributions, limiting practical applications. Here, we present an alternative method that alleviates this requirement by using dual reporter systems. By extending extrinsic-intrinsic noise analysis, we derive a mutual information estimator that eliminates the need to measure the input distribution. We demonstrate our method by analyzing the bacterial chemotactic pathway, regarding multiple flagellar motors as natural dual reporters. We show the biological relevance of the measured information flow by comparing it with theoretical bounds on sensory information. This framework opens new possibilities for quantifying information flow in cellular signaling pathways.
| null |
https://arxiv.org/abs/2506.15957v1
|
https://arxiv.org/pdf/2506.15957v1.pdf
| null |
[
"Kento Nakamura",
"Hajime Fukuoka",
"Akihiko Ishijima",
"Tetsuya J. Kobayashi"
] |
[] | 2025-06-19T00:00:00 | null | null | null | null |
[] |
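The abstract's estimator extends the dual-reporter extrinsic-intrinsic noise decomposition. A minimal sketch of that classical decomposition, which such an estimator builds on (synthetic data, not the paper's estimator itself):

```python
# Sketch of the classical dual-reporter noise decomposition that the paper's
# mutual-information estimator builds on: for two identically regulated
# reporters r1, r2, the covariance across cells isolates extrinsic noise.
import numpy as np

rng = np.random.default_rng(0)
s = rng.gamma(shape=5.0, scale=1.0, size=10_000)   # shared (extrinsic) input
r1 = rng.poisson(20 * s)                           # reporter 1
r2 = rng.poisson(20 * s)                           # reporter 2 (same input)

ext_var = np.cov(r1, r2)[0, 1]                     # extrinsic component
int_var = 0.5 * np.mean((r1 - r2) ** 2)            # intrinsic component
print(f"extrinsic: {ext_var:.1f}, intrinsic: {int_var:.1f}, "
      f"total: {np.var(r1):.1f}")
```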
https://paperswithcode.com/paper/heist-a-graph-foundation-model-for-spatial
|
2506.11152
| null | null |
HEIST: A Graph Foundation Model for Spatial Transcriptomics and Proteomics Data
|
Single-cell transcriptomics has become a great source for data-driven insights into biology, enabling the use of advanced deep learning methods to understand cellular heterogeneity and transcriptional regulation at the single-cell level. With the advent of spatial transcriptomics data we have the promise of learning about cells within a tissue context as it provides both spatial coordinates and transcriptomic readouts. However, existing models either ignore spatial resolution or gene regulatory information. Gene regulation in cells can change depending on microenvironmental cues from neighboring cells, but existing models neglect gene regulatory patterns with hierarchical dependencies across levels of abstraction. In order to create contextualized representations of cells and genes from spatial transcriptomics data, we introduce HEIST, a hierarchical graph transformer-based foundation model for spatial transcriptomics and proteomics data. HEIST models tissue as spatial cellular neighborhood graphs, and each cell is, in turn, modeled as a gene regulatory network graph. The framework includes a hierarchical graph transformer that performs cross-level message passing and message passing within levels. HEIST is pre-trained on 22.3M cells from 124 tissues across 15 organs using spatially-aware contrastive learning and masked auto-encoding objectives. Unsupervised analysis of HEIST representations of cells shows that it effectively encodes the microenvironmental influences in cell embeddings, enabling the discovery of spatially-informed subpopulations that prior models fail to differentiate. Further, HEIST achieves state-of-the-art results on four downstream tasks, such as clinical outcome prediction, cell type annotation, gene imputation, and spatially-informed cell clustering across multiple technologies, highlighting the importance of hierarchical modeling and GRN-based representations.
| null |
https://arxiv.org/abs/2506.11152v1
|
https://arxiv.org/pdf/2506.11152v1.pdf
| null |
[
"Hiren Madhu",
"João Felipe Rocha",
"Tinglin Huang",
"Siddharth Viswanath",
"Smita Krishnaswamy",
"Rex Ying"
] |
[
"Contrastive Learning",
"Imputation"
] | 2025-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/inmotifin-a-lightweight-end-to-end-simulation
|
2506.20769
| null | null |
inMOTIFin: a lightweight end-to-end simulation software for regulatory sequences
|
The accurate development, assessment, interpretation, and benchmarking of bioinformatics frameworks for analyzing transcriptional regulatory grammars rely on controlled simulations to validate the underlying methods. However, existing simulators often lack end-to-end flexibility or ease of integration, which limits their practical use. We present inMOTIFin, a lightweight, modular, and user-friendly Python-based software that addresses these gaps by providing versatile and efficient simulation and modification of DNA regulatory sequences. inMOTIFin enables users to simulate or modify regulatory sequences efficiently for the customizable generation of motifs and insertion of motif instances with precise control over their positions, co-occurrences, and spacing, as well as direct modification of real sequences, facilitating a comprehensive evaluation of motif-based methods and interpretation tools. We demonstrate inMOTIFin applications for the assessment of de novo motif discovery prediction, the analysis of transcription factor cooperativity, and the support of explainability analyses for deep learning models. inMOTIFin ensures robust and reproducible analyses for studying transcriptional regulatory grammars. inMOTIFin is available at PyPI https://pypi.org/project/inMOTIFin/ and Docker Hub https://hub.docker.com/r/cbgr/inmotifin. Detailed documentation is available at https://inmotifin.readthedocs.io/en/latest/. The code for use case analyses is available at https://bitbucket.org/CBGR/inmotifin_evaluation/src/main/.
|
The accurate development, assessment, interpretation, and benchmarking of bioinformatics frameworks for analyzing transcriptional regulatory grammars rely on controlled simulations to validate the underlying methods.
|
https://arxiv.org/abs/2506.20769v1
|
https://arxiv.org/pdf/2506.20769v1.pdf
| null |
[
"Katalin Ferenc",
"Lorenzo Martini",
"Ieva Rauluseviciute",
"Geir Kjetil Sandve",
"Anthony Mathelier"
] |
[
"Benchmarking"
] | 2025-06-25T00:00:00 | null | null | null | null |
[] |
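A minimal sketch of the core simulation idea behind a motif simulator like inMOTIFin: planting motif instances into random background sequences at controlled positions. This toy code is illustrative only and does not use the actual inMOTIFin API (see https://pypi.org/project/inMOTIFin/):

```python
# Sketch of the core simulation idea behind tools like inMOTIFin: plant motif
# instances into random background DNA at controlled positions. Toy code, not
# the inMOTIFin API.
import random

random.seed(42)

def random_background(length, alphabet="ACGT"):
    return "".join(random.choice(alphabet) for _ in range(length))

def insert_motif(sequence, motif, position):
    # overwrite the background with the motif instance at a fixed offset
    return sequence[:position] + motif + sequence[position + len(motif):]

background = random_background(60)
planted = insert_motif(background, "TGACTCA", position=25)   # AP-1-like motif
print(background)
print(planted)
```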
https://paperswithcode.com/paper/quantum-gradient-optimized-drug-repurposing
|
2506.19097
| null | null |
Quantum Gradient Optimized Drug Repurposing Prototype for Omics Data
|
This paper presents a novel quantum-enhanced prototype for drug repurposing and addresses the challenge of managing massive genomics data in precision medicine.
| null |
https://arxiv.org/abs/2506.19097v1
|
https://arxiv.org/pdf/2506.19097v1.pdf
| null |
[
"Don Roosan",
"Saif Nirzhor",
"Rubayat Khan",
"Fahmida Hai"
] |
[] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/eccdnamamba-a-pre-trained-model-for-ultra
|
2506.18940
| null | null |
eccDNAMamba: A Pre-Trained Model for Ultra-Long eccDNA Sequence Analysis
|
Extrachromosomal circular DNA (eccDNA) plays key regulatory roles and contributes to oncogene overexpression in cancer through high-copy amplification and long-range interactions. Despite advances in modeling, no pre-trained models currently support full-length circular eccDNA for downstream analysis. Existing genomic models are either limited to single-nucleotide resolution or hindered by the inefficiency of the quadratic attention mechanism. Here, we introduce eccDNAMamba, the first bidirectional state-space encoder tailored for circular DNA sequences. It combines forward and reverse passes for full-context representation learning with linear-time complexity, and preserves circular structure through a novel augmentation strategy. Tested on two real-world datasets, eccDNAMamba achieves strong classification performance and scales to sequences up to 200 Kbp, offering a robust and efficient framework for modeling circular genomes. Our codes are available at https://github.com/zzq1zh/GenAI-Lab.
|
Extrachromosomal circular DNA (eccDNA) plays key regulatory roles and contributes to oncogene overexpression in cancer through high-copy amplification and long-range interactions.
|
https://arxiv.org/abs/2506.18940v1
|
https://arxiv.org/pdf/2506.18940v1.pdf
| null |
[
"Zhenke Liu",
"Jien Li",
"Ziqi Zhang"
] |
[
"Representation Learning"
] | 2025-06-22T00:00:00 | null | null | null | null |
[] |
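The eccDNAMamba abstract says circular structure is preserved through a novel augmentation strategy. One natural augmentation for circular sequences, shown here purely as an assumed illustration rather than the paper's documented method, is a random rotation of the sequence origin:

```python
# Sketch: the simplest circular-structure-preserving augmentation for eccDNA is
# a random rotation of the start point. This is an assumption about the flavor
# of augmentation, not eccDNAMamba's documented strategy.
import random

def rotate_circular(seq: str, offset: int) -> str:
    # a circular sequence has no canonical start; rotation preserves identity
    offset %= len(seq)
    return seq[offset:] + seq[:offset]

ecc = "ACGTTGCAAGGCT"
augmented = rotate_circular(ecc, random.randrange(len(ecc)))
assert sorted(augmented) == sorted(ecc)   # same content, shifted origin
```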
https://paperswithcode.com/paper/improving-genomic-models-via-task-specific
|
2506.17766
| null | null |
Improving Genomic Models via Task-Specific Self-Pretraining
|
Pretraining DNA language models (DNALMs) on the full human genome is resource-intensive, yet often considered necessary for strong downstream performance. Inspired by recent findings in NLP and long-context modeling, we explore an alternative: self-pretraining on task-specific, unlabeled data. Using the BEND benchmark, we show that DNALMs trained with self-pretraining match or exceed the performance of models trained from scratch under identical compute. While genome-scale pretraining may still offer higher absolute performance, task-specific self-pretraining provides a practical and compute-efficient strategy for building stronger supervised baselines.
| null |
https://arxiv.org/abs/2506.17766v1
|
https://arxiv.org/pdf/2506.17766v1.pdf
| null |
[
"Sohan Mupparapu",
"Parameswari Krishnamurthy",
"Ratish Puduppully"
] |
[] | 2025-06-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/quantum-inspired-algorithm-for-simulating
|
2506.15671
| null | null |
Quantum-inspired algorithm for simulating viral response
|
Understanding the properties of biological systems is an exciting avenue for applying advanced approaches to solving corresponding computational tasks. A specific class of problems that arises in the resolution of biological challenges is optimization. In this work, we present the results of a proof-of-concept study that applies a quantum-inspired optimization algorithm to simulate a viral response. We formulate an Ising-type model to describe the patterns of gene activity in host responses. Reducing the problem to the Ising form allows the use of available quantum and quantum-inspired optimization tools. We demonstrate the application of a quantum-inspired optimization algorithm to this problem. Our study paves the way for exploring the full potential of quantum and quantum-inspired optimization tools in biological applications.
| null |
https://arxiv.org/abs/2506.15671v1
|
https://arxiv.org/pdf/2506.15671v1.pdf
| null |
[
"D. O. Konina",
"D. I. Korbashov",
"I. V. Kovalchuk",
"A. A. Nizamieva",
"D. A. Chermoshentsev",
"A. K. Fedorov"
] |
[] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
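The abstract reduces the host-response problem to an Ising-type model. As a hedged classical stand-in for the quantum-inspired optimizer, the sketch below minimizes a small random Ising energy H(s) = -1/2 s^T J s - h^T s by simulated annealing; the couplings J and fields h are placeholders:

```python
# Sketch: minimizing a small Ising-type energy by simulated annealing, as a
# classical stand-in for the quantum-inspired optimizer in the abstract.
# J and h are random placeholders, not a fitted gene-activity model.
import numpy as np

rng = np.random.default_rng(1)
n = 20
J = rng.normal(0, 1, (n, n))
J = np.triu(J, 1)
J = J + J.T                                     # symmetric couplings, zero diagonal
h = rng.normal(0, 1, n)

def energy(s):
    return -0.5 * s @ J @ s - h @ s

s = rng.choice([-1, 1], n)
n_steps = 20_000
for step in range(n_steps):
    T = max(0.01, 2.0 * (1 - step / n_steps))   # linear cooling schedule
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s + h[i])           # energy change of flipping s_i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]
print("final energy:", energy(s))
```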
https://paperswithcode.com/paper/advancing-digital-precision-medicine-for
|
2506.15761
| null | null |
Advancing Digital Precision Medicine for Chronic Fatigue Syndrome through Longitudinal Large-Scale Multi-Modal Biological Omics Modeling with Machine Learning and Artificial Intelligence
|
We studied a general question: chronic diseases like ME/CFS and long COVID exhibit high heterogeneity with multifactorial etiology and progression, complicating diagnosis and treatment. To address this, we developed BioMapAI, an explainable Deep Learning framework using the richest longitudinal multi-omics dataset for ME/CFS to date. This dataset includes gut metagenomics, plasma metabolome, immune profiling, blood labs, and clinical symptoms. By connecting multi-omics to a symptom matrix, BioMapAI identified both disease- and symptom-specific biomarkers, reconstructed symptoms, and achieved state-of-the-art precision in disease classification. We also created the first connectivity map of these omics in both healthy and disease states and revealed how microbiome-immune-metabolome crosstalk shifted from healthy to ME/CFS.
| null |
https://arxiv.org/abs/2506.15761v1
|
https://arxiv.org/pdf/2506.15761v1.pdf
| null |
[
"Ruoyun Xiong"
] |
[] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/global-ground-metric-learning-with
|
2506.15383
| null | null |
Global Ground Metric Learning with Applications to scRNA data
|
Optimal transport provides a robust framework for comparing probability distributions. Its effectiveness is significantly influenced by the choice of the underlying ground metric. Traditionally, the ground metric has either been (i) predefined, e.g., as the Euclidean distance, or (ii) learned in a supervised way, by utilizing labeled data to learn a suitable ground metric for enhanced task-specific performance. Yet, predefined metrics typically cannot account for the inherent structure and varying importance of different features in the data, and existing supervised approaches to ground metric learning often do not generalize across multiple classes or are restricted to distributions with shared supports. To address these limitations, we propose a novel approach for learning metrics for arbitrary distributions over a shared metric space. Our method provides a distance between individual points like a global metric, but requires only class labels at the distribution level for training. The learned global ground metric enables more accurate optimal transport distances, leading to improved performance in embedding, clustering and classification tasks. We demonstrate the effectiveness and interpretability of our approach using patient-level scRNA-seq data spanning multiple diseases.
|
The learned global ground metric enables more accurate optimal transport distances, leading to improved performance in embedding, clustering and classification tasks.
|
https://arxiv.org/abs/2506.15383v1
|
https://arxiv.org/pdf/2506.15383v1.pdf
| null |
[
"Damin Kühn",
"Michael T. Schaub"
] |
[
"Metric Learning"
] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
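A minimal sketch of the paper's setting under simplifying assumptions: a global linear (Mahalanobis-style) ground metric applied before computing an optimal transport distance. With uniform weights and equal-size point sets, OT reduces to linear assignment; the metric matrix here is a random stand-in for a learned one:

```python
# Sketch: plugging a global (Mahalanobis-style) ground metric into optimal
# transport between two equal-size point clouds. The matrix L is a random
# placeholder standing in for the learned metric; with uniform weights and
# equal sizes, OT reduces to a linear assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(30, 5)), rng.normal(size=(30, 5))   # two "distributions"
L = rng.normal(size=(5, 5))                                 # learned linear map
cost = cdist(X @ L.T, Y @ L.T, metric="sqeuclidean")        # d(x,y) = ||L(x-y)||^2
rows, cols = linear_sum_assignment(cost)
ot_distance = cost[rows, cols].mean()
print("OT distance under learned metric:", ot_distance)
```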
https://paperswithcode.com/paper/bmfm-rna-an-open-framework-for-building-and
|
2506.14861
| null | null |
BMFM-RNA: An Open Framework for Building and Evaluating Transcriptomic Foundation Models
|
Transcriptomic foundation models (TFMs) have recently emerged as powerful tools for analyzing gene expression in cells and tissues, supporting key tasks such as cell-type annotation, batch correction, and perturbation prediction. However, the diversity of model implementations and training strategies across recent TFMs, though promising, makes it challenging to isolate the contribution of individual design choices or evaluate their potential synergies. This hinders the field's ability to converge on best practices and limits the reproducibility of insights across studies. We present BMFM-RNA, an open-source, modular software package that unifies diverse TFM pretraining and fine-tuning objectives within a single framework. Leveraging this capability, we introduce a novel training objective, whole cell expression decoder (WCED), which captures global expression patterns using an autoencoder-like CLS bottleneck representation. In this paper, we describe the framework, supported input representations, and training objectives. We evaluated four model checkpoints pretrained on CELLxGENE using combinations of masked language modeling (MLM), WCED and multitask learning. Using the benchmarking capabilities of BMFM-RNA, we show that WCED-based models achieve performance that matches or exceeds state-of-the-art approaches like scGPT across more than a dozen datasets in both zero-shot and fine-tuning tasks. BMFM-RNA, available as part of the biomed-multi-omics project ( https://github.com/BiomedSciAI/biomed-multi-omic ), offers a reproducible foundation for systematic benchmarking and community-driven exploration of optimal TFM training strategies, enabling the development of more effective tools to leverage the latest advances in AI for understanding cell biology.
|
Transcriptomic foundation models (TFMs) have recently emerged as powerful tools for analyzing gene expression in cells and tissues, supporting key tasks such as cell-type annotation, batch correction, and perturbation prediction.
|
https://arxiv.org/abs/2506.14861v1
|
https://arxiv.org/pdf/2506.14861v1.pdf
| null |
[
"Bharath Dandala",
"Michael M. Danziger",
"Ella Barkan",
"Tanwi Biswas",
"Viatcheslav Gurev",
"Jianying Hu",
"Matthew Madgwick",
"Akira Koseki",
"Tal Kozlovski",
"Michal Rosen-Zvi",
"Yishai Shimoni",
"Ching-Huei Tsou"
] |
[
"Benchmarking",
"Language Modeling",
"Language Modelling",
"Masked Language Modeling"
] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/phenokg-knowledge-graph-driven-gene-discovery
|
2506.13119
| null | null |
PhenoKG: Knowledge Graph-Driven Gene Discovery and Patient Insights from Phenotypes Alone
|
Identifying causative genes from patient phenotypes remains a significant challenge in precision medicine, with important implications for the diagnosis and treatment of genetic disorders. We propose a novel graph-based approach for predicting causative genes from patient phenotypes, with or without an available list of candidate genes, by integrating a rare disease knowledge graph (KG). Our model, combining graph neural networks and transformers, achieves substantial improvements over the current state-of-the-art. On the real-world MyGene2 dataset, it attains a mean reciprocal rank (MRR) of 24.64\% and nDCG@100 of 33.64\%, surpassing the best baseline (SHEPHERD) at 19.02\% MRR and 30.54\% nDCG@100. We perform extensive ablation studies to validate the contribution of each model component. Notably, the approach generalizes to cases where only phenotypic data are available, addressing key challenges in clinical decision support when genomic information is incomplete.
| null |
https://arxiv.org/abs/2506.13119v1
|
https://arxiv.org/pdf/2506.13119v1.pdf
| null |
[
"Kamilia Zaripova",
"Ege Özsoy",
"Nassir Navab",
"Azade Farshad"
] |
[] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/semanticst-spatially-informed-semantic-graph
|
2506.11491
| null | null |
SemanticST: Spatially Informed Semantic Graph Learning for Clustering, Integration, and Scalable Analysis of Spatial Transcriptomics
|
Spatial transcriptomics (ST) technologies enable gene expression profiling with spatial resolution, offering unprecedented insights into tissue organization and disease heterogeneity. However, current analysis methods often struggle with noisy data, limited scalability, and inadequate modelling of complex cellular relationships. We present SemanticST, a biologically informed, graph-based deep learning framework that models diverse cellular contexts through multi-semantic graph construction. SemanticST builds multiple context-specific graphs capturing spatial proximity, gene expression similarity, and tissue domain structure, and learns disentangled embeddings for each. These are fused using an attention-inspired strategy to yield a unified, biologically meaningful representation. A community-aware min-cut loss improves robustness over contrastive learning, particularly in sparse ST data. SemanticST supports mini-batch training, making it the first graph neural network scalable to large-scale datasets such as Xenium (500,000 cells). Benchmarking across four platforms (Visium, Slide-seq, Stereo-seq, Xenium) and multiple human and mouse tissues shows consistent gains of 20 percentage points in ARI, NMI, and trajectory fidelity over DeepST, GraphST, and IRIS. In re-analysis of breast cancer Xenium data, SemanticST revealed rare and clinically significant niches, including triple receptor-positive clusters, spatially distinct DCIS-to-IDC transition zones, and FOXC2 tumour-associated myoepithelial cells, suggesting non-canonical EMT programs with stem-like features. SemanticST thus provides a scalable, interpretable, and biologically grounded framework for spatial transcriptomics analysis, enabling robust discovery across tissue types and diseases, and paving the way for spatially resolved tissue atlases and next-generation precision medicine.
| null |
https://arxiv.org/abs/2506.11491v2
|
https://arxiv.org/pdf/2506.11491v2.pdf
| null |
[
"Roxana Zahedi",
"Ahmadreza Argha",
"Nona Farbehi",
"Ivan Bakhshayeshi",
"Youqiong Ye",
"Nigel H. Lovell",
"Hamid Alinejad-Rokny"
] |
[
"Benchmarking",
"Contrastive Learning",
"graph construction",
"Graph Learning",
"Graph Neural Network"
] | 2025-06-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Graph Neural Network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Graph Neural Network",
"source_title": "Graph Neural Networks: A Review of Methods and Applications",
"source_url": "https://arxiv.org/abs/1812.08434v6"
}
] |
https://paperswithcode.com/paper/improving-spliced-alignment-by-modeling
|
2506.12986
| null | null |
Improving spliced alignment by modeling splice sites with deep learning
|
Motivation: Spliced alignment refers to the alignment of messenger RNA (mRNA) or protein sequences to eukaryotic genomes. It plays a critical role in gene annotation and the study of gene functions. Accurate spliced alignment demands sophisticated modeling of splice sites, but current aligners use simple models, which may affect their accuracy given dissimilar sequences. Results: We implemented minisplice to learn splice signals with a one-dimensional convolutional neural network (1D-CNN) and trained a model with 7,026 parameters for vertebrate and insect genomes. It captures conserved splice signals across phyla and reveals GC-rich introns specific to mammals and birds. We used this model to estimate the empirical splicing probability for every GT and AG in genomes, and modified minimap2 and miniprot to leverage pre-computed splicing probability during alignment. Evaluation on human long-read RNA-seq data and cross-species protein datasets showed our method greatly improves the junction accuracy especially for noisy long RNA-seq reads and proteins of distant homology. Availability and implementation: https://github.com/lh3/minisplice
|
Motivation: Spliced alignment refers to the alignment of messenger RNA (mRNA) or protein sequences to eukaryotic genomes.
|
https://arxiv.org/abs/2506.12986v1
|
https://arxiv.org/pdf/2506.12986v1.pdf
| null |
[
"Siying Yang",
"Neng Huang",
"Heng Li"
] |
[] | 2025-06-15T00:00:00 | null | null | null | null |
[] |
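A hedged sketch of a small 1D-CNN splice-site scorer in the spirit of minisplice, which reports a 7,026-parameter model; the layer sizes below are illustrative assumptions, not the actual architecture:

```python
# Sketch: a tiny 1D-CNN that scores one-hot-encoded DNA windows around
# candidate GT/AG dinucleotides as splice sites, in the spirit of minisplice's
# small model. Layer sizes are illustrative, not the published architecture.
import torch
import torch.nn as nn

class SpliceCNN(nn.Module):
    def __init__(self, window=101):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 16, kernel_size=9, padding=4),   # 4 channels: A,C,G,T
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, 1),                             # splice-site logit
        )

    def forward(self, x):          # x: (batch, 4, window)
        return self.net(x).squeeze(-1)

model = SpliceCNN()
print(sum(p.numel() for p in model.parameters()), "parameters")
logits = model(torch.randn(8, 4, 101))    # random stand-in for one-hot windows
probs = torch.sigmoid(logits)             # empirical splicing probability
```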
https://paperswithcode.com/paper/viral-dark-matter-illuminating-protein
|
2506.11942
| null | null |
Viral Dark Matter: Illuminating Protein Function, Ecology, and Biotechnological Promises
|
Viruses are the most abundant biological entities on Earth and play central roles in shaping microbiomes and influencing ecosystem functions. Yet, most viral genes remain uncharacterized, comprising what is commonly referred to as "viral dark matter." Metagenomic studies across diverse environments consistently show that 40-90% of viral genes lack known homologs or annotated functions. This persistent knowledge gap limits our ability to interpret viral sequence data, understand virus-host interactions, and assess the ecological or applied significance of viral genes. Among the most intriguing components of viral dark matter are auxiliary viral genes (AVGs), including auxiliary metabolic genes (AMGs), regulatory genes (AReGs), and host physiology-modifying genes (APGs), which may alter host function during infection and contribute to microbial metabolism, stress tolerance, or resistance. In this review, we explore recent advances in the discovery and functional characterization of viral dark matter. We highlight representative examples of novel viral proteins across diverse ecosystems including human microbiomes, soil, oceans, and extreme environments, and discuss what is known, and still unknown, about their roles. We then examine the bioinformatic and experimental challenges that hinder functional characterization, and present emerging strategies to overcome these barriers. Finally, we highlight both the fundamental and applied benefits that multidisciplinary efforts to characterize viral proteins can bring. By integrating computational predictions with experimental validation, and fostering collaboration across disciplines, we emphasize that illuminating viral dark matter is both feasible and essential for advancing microbial ecology and unlocking new tools for biotechnology.
| null |
https://arxiv.org/abs/2506.11942v1
|
https://arxiv.org/pdf/2506.11942v1.pdf
| null |
[
"James C. Kosmopoulos",
"Karthik Anantharaman"
] |
[] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/globdb-a-comprehensive-species-dereplicated
|
2506.11896
| null | null |
GlobDB: A comprehensive species-dereplicated microbial genome resource
|
Over the past years, substantial numbers of microbial species' genomes have been deposited outside of conventional INSDC databases. The GlobDB aggregates 14 independent genomic catalogues to provide a comprehensive database of species-dereplicated microbial genomes, with consistent taxonomy, annotations, and additional analysis resources. The GlobDB is available at https://globdb.org/.
| null |
https://arxiv.org/abs/2506.11896v1
|
https://arxiv.org/pdf/2506.11896v1.pdf
| null |
[
"Daan R. Speth",
"Nick Pullen",
"Samuel T. N. Aroney",
"Benjamin L. Coltman",
"Jay T. Osvatic",
"Ben J. Woodcroft",
"Thomas Rattei",
"Michael Wagner"
] |
[] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multimodal-modeling-of-crispr-cas12-activity
|
2506.11182
| null | null |
Multimodal Modeling of CRISPR-Cas12 Activity Using Foundation Models and Chromatin Accessibility Data
|
Predicting guide RNA (gRNA) activity is critical for effective CRISPR-Cas12 genome editing but remains challenging due to limited data, variation across protospacer adjacent motifs (PAMs, short sequence requirements for Cas binding), and reliance on large-scale training. We investigate whether a pre-trained biological foundation model originally trained on transcriptomic data can improve gRNA activity estimation even without domain-specific pre-training. Using embeddings from an existing RNA foundation model as input to a lightweight regressor, we show substantial gains over traditional baselines. We also integrate chromatin accessibility data to capture regulatory context, improving performance further. Our results highlight the effectiveness of pre-trained foundation models and chromatin accessibility data for gRNA activity prediction.
| null |
https://arxiv.org/abs/2506.11182v1
|
https://arxiv.org/pdf/2506.11182v1.pdf
| null |
[
"Azim Dehghani Amirabad",
"Yanfei Zhang",
"Artem Moskalev",
"Sowmya Rajesh",
"Tommaso Mansi",
"Shuwei Li",
"Mangal Prakash",
"Rui Liao"
] |
[
"Activity Prediction"
] | 2025-06-12T00:00:00 | null | null | null | null |
[] |
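A minimal sketch of the recipe the abstract describes: frozen foundation-model embeddings of gRNAs concatenated with a chromatin-accessibility feature and fed to a lightweight regressor. All arrays are placeholders:

```python
# Sketch of the recipe in the abstract: frozen RNA foundation-model embeddings
# of gRNAs, concatenated with a chromatin-accessibility feature, fed to a
# lightweight regressor. Embeddings and labels are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 640))   # per-gRNA embeddings (placeholder)
atac = rng.random(size=(500, 1))    # accessibility at the target locus
X = np.hstack([emb, atac])
y = rng.random(500)                 # measured gRNA activity (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("R^2 on held-out gRNAs:", reg.score(X_te, y_te))
```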
https://paperswithcode.com/paper/brain-wide-interpolation-and-conditioning-of
|
2506.11158
| null | null |
Brain-wide interpolation and conditioning of gene expression in the human brain using Implicit Neural Representations
|
In this paper, we study the efficacy and utility of recent advances in non-local, non-linear image interpolation and extrapolation algorithms, specifically, ideas based on Implicit Neural Representations (INR), as a tool for analysis of spatial transcriptomics data. We seek to utilize the microarray gene expression data sparsely sampled in the healthy human brain, and produce fully resolved spatial maps of any given gene across the whole brain at a voxel-level resolution. To do so, we first obtained the 100 top AD risk genes, whose baseline spatial transcriptional profiles were obtained from the Allen Human Brain Atlas (AHBA). We adapted Implicit Neural Representation models so that the pipeline can produce robust voxel-resolution quantitative maps of all genes. We present a variety of experiments using interpolations obtained from Abagen as a baseline/reference.
| null |
https://arxiv.org/abs/2506.11158v1
|
https://arxiv.org/pdf/2506.11158v1.pdf
| null |
[
"Xizheng Yu",
"Justin Torok",
"Sneha Pandya",
"Sourav Pal",
"Vikas Singh",
"Ashish Raj"
] |
[] | 2025-06-11T00:00:00 | null | null | null | null |
[] |
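A generic sketch of an implicit neural representation for this task: a small MLP with Fourier feature encoding that maps 3-D voxel coordinates to an expression value. The architecture is assumed for illustration and is not the authors' exact model:

```python
# Sketch: an implicit neural representation mapping 3-D brain coordinates to a
# gene-expression value, with Fourier feature encoding of the input. A generic
# INR assumed for illustration, not the authors' exact architecture.
import torch
import torch.nn as nn

class FourierINR(nn.Module):
    def __init__(self, n_freq=16, hidden=128):
        super().__init__()
        self.register_buffer("B", torch.randn(3, n_freq) * 10.0)  # random freqs
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),              # expression at this voxel
        )

    def forward(self, xyz):                    # xyz: (batch, 3) in [0, 1]^3
        proj = 2 * torch.pi * xyz @ self.B
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return self.mlp(feats).squeeze(-1)

inr = FourierINR()
pred = inr(torch.rand(1024, 3))   # interpolate expression at unsampled voxels
```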
https://paperswithcode.com/paper/unlasting-unpaired-single-cell-multi
|
2506.21107
| null | null |
Unlasting: Unpaired Single-Cell Multi-Perturbation Estimation by Dual Conditional Diffusion Implicit Bridges
|
Estimating single-cell responses across various perturbations facilitates the identification of key genes and enhances drug screening, significantly boosting experimental efficiency. However, single-cell sequencing is a destructive process, making it impossible to capture the same cell's phenotype before and after perturbation. Consequently, data collected under perturbed and unperturbed conditions are inherently unpaired. Existing methods either attempt to forcibly pair unpaired data using random sampling, or neglect the inherent relationship between unperturbed and perturbed cells during the modeling. In this work, we propose a framework based on Dual Diffusion Implicit Bridges (DDIB) to learn the mapping between different data distributions, effectively addressing the challenge of unpaired data. We further interpret this framework as a form of data augmentation. We integrate gene regulatory network (GRN) information to propagate perturbation signals in a biologically meaningful way, and further incorporate a masking mechanism to predict silent genes, improving the quality of generated profiles. Moreover, gene expression under the same perturbation often varies significantly across cells, frequently exhibiting a bimodal distribution that reflects intrinsic heterogeneity. To capture this, we introduce a more suitable evaluation metric. We propose Unlasting, dual conditional diffusion models that overcome the problem of unpaired single-cell perturbation data and strengthen the model's insight into perturbations under the guidance of the GRN, with a dedicated mask model designed to improve generation quality by predicting silent genes. In addition, we introduce a biologically grounded evaluation metric that better reflects the inherent heterogeneity in single-cell responses.
| null |
https://arxiv.org/abs/2506.21107v1
|
https://arxiv.org/pdf/2506.21107v1.pdf
| null |
[
"Changxi Chi",
"Jun Xia",
"Yufei Huang",
"Jingbo Zhou",
"Siyuan Li",
"Yunfan Liu",
"Chang Yu",
"Stan Z. Li"
] |
[
"Data Augmentation"
] | 2025-06-26T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/enhancing-biosecurity-in-tamper-resistant
|
2506.19086
| null | null |
Enhancing Biosecurity in Tamper-Resistant Large Language Models With Quantum Gradient Descent
|
This paper introduces a tamper-resistant framework for large language models (LLMs) in medical applications, utilizing quantum gradient descent (QGD) to detect malicious parameter modifications in real time. Integrated into a LLaMA-based model, QGD monitors weight amplitude distributions, identifying adversarial fine-tuning anomalies. Tests on the MIMIC and eICU datasets show minimal performance impact (accuracy: 89.1 to 88.3 on MIMIC) while robustly detecting tampering. PubMedQA evaluations confirm preserved biomedical question-answering capabilities. Compared to baselines like selective unlearning and cryptographic fingerprinting, QGD offers superior sensitivity to subtle weight changes. This quantum-inspired approach ensures secure, reliable medical AI, extensible to other high-stakes domains.
| null |
https://arxiv.org/abs/2506.19086v1
|
https://arxiv.org/pdf/2506.19086v1.pdf
| null |
[
"Fahmida Hai",
"Saif Nirzhor",
"Rubayat Khan",
"Don Roosan"
] |
[
"Question Answering",
"Sensitivity"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/central-dogma-cycle-and-network-a-model-for
|
2506.16374
| null | null |
Central Dogma Cycle and Network: A Model for Cell Memory
|
This paper proposes an extension of the traditional Central Dogma of molecular biology to a more dynamic model termed the Central Dogma Cycle (CDC) and a broader network called the Central Dogma Cyclic Network (CDCN). While the Central Dogma is necessary for genetic information flow, it is not sufficient to fully explain cellular memory and information management. The CDC incorporates additional well-established steps, including protein folding and protein networking, highlighting the cyclical nature of information flow in cells. This cyclic architecture is proposed as a key mechanism for cellular memory, drawing analogies to memory functions in computers, such as input, read, write, execute, and erase. The interconnected cycles within the CDCN, including metabolic cycles and signaling pathways, are suggested to function akin to latches in computer memory, contributing to the storage and processing of cellular information beyond nucleic acid sequences. Understanding cellular memory through this cyclic network model offers a new perspective on heredity, cell processes, and the potential disruptions in disease pathology.
| null |
https://arxiv.org/abs/2506.16374v1
|
https://arxiv.org/pdf/2506.16374v1.pdf
| null |
[
"Martin R. Schiller"
] |
[
"Protein Folding"
] | 2025-06-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/attractor-stability-of-boolean-networks-under
|
2506.15581
| null | null |
Attractor Stability of Boolean networks under noise
|
We study the impact of noise on attractor dynamics in Boolean networks, focusing on their stability and transition behaviors. By constructing attractor matrices based on single-node perturbations, we propose a framework to quantify attractor stability and identify dominant attractors. We find that attractors are more stable than predicted by basin sizes, showing the importance of dynamical structure in noisy environments. In addition, under global perturbations, basin sizes dictate long-term behavior; under local noise, however, attractor dominance is determined by noise-induced transition patterns rather than basin sizes. Our results show that transition dynamics induced by stochastic perturbations provide an efficient and quantitative description of attractor stability and dynamics in Boolean networks under noise.
| null |
https://arxiv.org/abs/2506.15581v1
|
https://arxiv.org/pdf/2506.15581v1.pdf
| null |
[
"Byungjoon Min",
"Jeehye Choi",
"Reinhard Laubenbacher"
] |
[] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
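A toy sketch of the single-node perturbation protocol the abstract describes: run a small Boolean network to its attractor, flip one node, and record which attractor the perturbed state relaxes to. The three-node update rules are hypothetical:

```python
# Sketch of the single-node perturbation protocol in the abstract: run a small
# Boolean network to an attractor, flip one node's state, and record which
# attractor the dynamics fall into. The 3-node rules below are toy examples.
from itertools import product

def step(state):
    a, b, c = state
    return (b and not c, a, a or b)   # toy synchronous update rules

def find_attractor(state):
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return tuple(seen[seen.index(state):])   # the cycle the trajectory enters

for start in product([False, True], repeat=3):
    attractor = find_attractor(start)
    # perturb each node of the attractor's first state and see where we land
    for i in range(3):
        flipped = list(attractor[0])
        flipped[i] = not flipped[i]
        print(start, "->", attractor, f"| flip node {i} ->",
              find_attractor(tuple(flipped)))
```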
https://paperswithcode.com/paper/complex-forming-behaviour-of-a-b-eta-and-g
|
2506.13115
| null | null |
Complex forming behaviour of α, β and γ-cyclodextrins with varying size probe particles in silico
|
Cyclodextrins (CDs) are cyclic oligosaccharides composed of glucopyranose units bonded together to form a truncated cone that can make inclusion complexes with guest molecules. The α, β, and γ-CDs, which respectively comprise six, seven or eight glucopyranose units, are used extensively in pharmaceutical formulations as functional excipients. The cavity sizes of all three natural CDs have been approximated using static structures but a growing consensus is that the CDs are flexible; moreover, the size range of molecules that CDs can accommodate has not been systematically studied. Here the results of molecular dynamics simulations performed using spherical continuum probe particles of different sizes to observe the complex-forming behaviour of CDs are presented. Results revealed that CDs can make dynamic complexes with guest molecules that are larger than their reported cavity sizes.
| null |
https://arxiv.org/abs/2506.13115v1
|
https://arxiv.org/pdf/2506.13115v1.pdf
| null |
[
"N. R. M. Nelumdeniya",
"R. J. K. U. Ranatunga"
] |
[] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/network-pharmacology-reveals-hspa1a-bst2-as
|
2506.12107
| null | null |
Network Pharmacology Reveals HSPA1A/BST2 as Potential Targets of Ci Bai Capsule's Active Compounds Intervening in Leukopenia
|
Background: Radiation-induced leukopenia caused by low-dose exposure is frequently associated with Traditional Chinese Medicine (TCM) syndromes like "blood deficiency" and "fatigue syndrome". Ci Bai Capsule (CB) has been reported to enhance white blood cell levels; however, its mechanisms and bioactive compounds remain unclear. Aim: This study aimed to identify the bioactive compound group of CB and elucidate its potential mechanisms in radiation-induced leukopenia. Methods: Syndrome-related data were gathered from the SYMMAP and CTD databases. CB's target profile was predicted by DrugCIPHER. Network pharmacology approaches were employed to identify active compounds and related pathways. Experimental validation was conducted through flow cytometry and RNA-sequencing in both ex vivo and in vivo models. Results: A total of 22 pathways related to cellular processes, immune responses, and signal transduction were identified. Five key bioactive compounds (kaempferol-3-glucorhamnoside, syringin, schisandrin, 3-hydroxytyrosol 3-O-glucoside and salidroside) were found to significantly modulate syndrome-related pathways. Optimal dosing of this compound combination enhanced leukocyte counts and splenic immune cell proliferation in irradiated mice. Transcriptomic analysis revealed that the compounds exert regulatory effects on PP1A, RB, CDK4/6, CDK2, and CDK1, thereby modulating downstream immune and hematopoietic markers such as MNDA, BST2, and HSPA1A. Conclusion: Our findings suggest that CB mitigates radiation-induced leukopenia by enhancing immune and hematopoietic recovery, offering a promising therapeutic approach for managing radiation-related hematological disorders.
| null |
https://arxiv.org/abs/2506.12107v1
|
https://arxiv.org/pdf/2506.12107v1.pdf
| null |
[
"DingFan Zhang",
"Congshu Huang",
"Lei Zhou",
"Boyang Wang",
"Wei Zhou",
"Tiantian Xia",
"Pan Shen",
"Shao Li",
"Yue Gao"
] |
[] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/input-to-state-stability-based-chemical
|
2506.12056
| null | null |
Input-to-state stability-based chemical reaction networks composition for molecular computations
|
Molecular computation based on chemical reaction networks (CRNs) has emerged as a promising paradigm for designing programmable biochemical systems. However, the implementation of complex computations still requires excessively large and intricate network structures, largely due to the limited understanding of composability, that is, how multiple subsystems can be coupled while preserving computational functionality. Existing composability frameworks primarily focus on rate-independent CRNs, whose computational capabilities are severely restricted. This article aims to establish a systematic framework for composable CRNs governed by mass-action kinetics, a common type of rate-dependent CRNs. Drawing upon the concepts of composable rate-independent CRNs, we introduce the notions of mass-action chemical reaction computers (msCRCs), dynamic computation and dynamic composability to establish a rigorous mathematical framework for composing two or more msCRCs to achieve layer-by-layer computation of composite functions. Further, we derive several sufficient conditions based on the notions of input-to-state stability (ISS) to characterize msCRCs that can be composed to implement desired molecular computations, thereby providing theoretical support for this framework. Some examples are presented to illustrate the efficiency of our method. Finally, comparative results demonstrate that the proposed method exhibits notable advantages in both computational ability and accuracy over the state-of-the-art methods.
| null |
https://arxiv.org/abs/2506.12056v1
|
https://arxiv.org/pdf/2506.12056v1.pdf
| null |
[
"Renlei Jiang",
"Yuzhen Fan",
"Di Fan",
"Chuanhou Gao",
"Denis Dochain"
] |
[] | 2025-05-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
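For reference, the mass-action kinetics the abstract assumes give CRN dynamics of the standard form:

```latex
\dot{x}(t) = \Gamma\, v\big(x(t)\big), \qquad
v_j(x) = k_j \prod_{i=1}^{n} x_i^{\,y_{ij}},
```

where \(\Gamma\) is the stoichiometric matrix, \(k_j\) the rate constants, and \(y_{ij}\) the reactant stoichiometric coefficients.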
https://paperswithcode.com/paper/the-assumptions-that-restrain-us-from
|
2506.21485
| null | null |
The assumptions that restrain us from understanding consciousness
|
The science of consciousness has been successful over the last decades. Yet, it seems that some of the key questions remain unanswered. Perhaps, as a science of consciousness, we cannot move forward using the same theoretical commitments that brought us here. It might be necessary to revise some assumptions we have made along the way. In this piece, I offer no answers, but I will question some of these fundamental assumptions. We will try to take a fresh look at the classical question about the neural and explanatory correlates of consciousness. A key assumption is that neural correlates are to be found at the level of spiking responses. However, perhaps we should not simply take it for granted that this assumption holds true. Another common assumption is that we are close to understanding the computations underlying consciousness. I will try to show that computations related to consciousness might be far more complex than our current theories envision. There is little reason to think that consciousness is an abstract computation, as traditionally believed. Furthermore, I will try to demonstrate that consciousness research could benefit from investigating internal changes of consciousness, such as aha-moments. Finally, I will ask which theories the science of consciousness really needs.
| null |
https://arxiv.org/abs/2506.21485v1
|
https://arxiv.org/pdf/2506.21485v1.pdf
| null |
[
"Jaan Aru"
] |
[] | 2025-06-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/wirelessly-transmitted-subthalamic-nucleus
|
2506.21439
| null | null |
Wirelessly transmitted subthalamic nucleus signals predict endogenous pain levels in Parkinson's disease patients
|
Parkinson's disease (PD) patients experience pain fluctuations that significantly reduce their quality of life. Despite extensive knowledge of the role of the subthalamic nucleus (STN) in PD, the STN biomarkers of pain fluctuations and the relationship between bilateral STN activity and pain occurrence remain poorly understood. This observational study used data-driven methods: annotated pain reports, each followed by a series of corresponding binary pain ratings, were collected together with wirelessly transmitted STN signals, and an explainable machine learning algorithm was then leveraged to predict binary pain levels and rank feature influence. Binary pain levels could be predicted from annotated pain reports corresponding to PD-related pain characteristics. STN activity from both sides contributed to pain prediction, with gamma and beta bands in the contralateral STN and delta and theta bands in the ipsilateral STN playing a prominent role. This study emphasizes the role of bilateral STN biomarkers in endogenous pain fluctuations.
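A hedged sketch of this analysis style follows: predict binary pain from band-power features of bilateral STN recordings, then rank feature influence. The synthetic data, the feature names, and the choice of gradient boosting with permutation importance are stand-ins, since the abstract does not specify the exact model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
bands = ["delta", "theta", "alpha", "beta", "gamma"]
features = [f"{side}_{b}" for side in ("contra", "ipsi") for b in bands]
X = rng.normal(size=(200, len(features)))        # placeholder band powers
y = (X[:, features.index("contra_gamma")]        # toy ground truth mixing
     + X[:, features.index("ipsi_theta")] > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
imp = permutation_importance(clf, Xte, yte, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:4]:
    print(f"{features[i]:>12s}  importance = {imp.importances_mean[i]:.3f}")
```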
| null |
https://arxiv.org/abs/2506.21439v1
|
https://arxiv.org/pdf/2506.21439v1.pdf
| null |
[
"Abdi Reza",
"Takufumi Yanagisawa",
"Naoki Tani",
"Ryohei Fukuma",
"Takuto Emura",
"Satoru Oshino",
"Ben Seymour",
"Haruhiko Kishima"
] |
[] | 2025-06-26T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Spatial Transformer Networks** contain a differentiable spatial transformer module that allows a network to learn to spatially manipulate feature maps: a localisation network predicts transformation parameters, a grid generator builds a sampling grid, and a sampler resamples the input, giving the model greater invariance to translation, scale and rotation.",
"full_name": "Spatial Transformer Network",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Mechanisms** are components that let a network dynamically weight parts of its input or internal representations, modelling dependencies without regard to their distance in the input.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Spatial Transformer Network",
"source_title": "Spatial Transformer Networks",
"source_url": "http://arxiv.org/abs/1506.02025v3"
}
] |
https://paperswithcode.com/paper/amortizing-personalization-in-virtual-brain
|
2506.21155
| null | null |
Amortizing personalization in virtual brain twins
|
Virtual brain twins are personalized digital models of individual human subjects' or patients' brains, allowing for mechanistic interpretation of neuroimaging data features. Training and inference with these models, however, present a pair of challenges: large shared infrastructure does not allow for the use of personal data, and inference in clinical applications should not require significant resources. We introduce "anonymized personalization" to address both by expanding model priors to include personalization, which under amortized inference allows training to be performed anonymously, while inference is both personalized and lightweight. We illustrate the basic approach, demonstrate reliability in an example, and discuss the impact on both experimental and computational neuroscience. Code is available at https://github.com/ins-amu/apvbt.
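A generic amortized-inference sketch of this idea follows (see the authors' repository for the real implementation): train once on simulations drawn from a broad, "personalization-expanded" prior, after which inference for a new subject is a cheap forward pass. The simulator, prior ranges, and regressor below are toy assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(theta):
    # toy "brain model": damped oscillation with decay theta[0], freq theta[1]
    t = np.linspace(0, 1, 50)
    return np.exp(-theta[0] * t) * np.cos(2 * np.pi * theta[1] * t)

theta = rng.uniform([0.5, 1.0], [3.0, 5.0], size=(2000, 2))  # expanded prior
X = np.stack([simulate(th) + 0.05 * rng.normal(size=50) for th in theta])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                   random_state=0).fit(X, theta)   # amortized: trained once

x_subject = simulate(np.array([1.7, 2.3]))         # one "subject's" data
print("inferred theta:", net.predict(x_subject[None])[0])
```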
|
Virtual brain twins are personalized digital models of individual human subjects' or patients' brains, allowing for mechanistic interpretation of neuroimaging data features.
|
https://arxiv.org/abs/2506.21155v1
|
https://arxiv.org/pdf/2506.21155v1.pdf
| null |
[
"Nina Baldy",
"Marmaduke M Woodman",
"Viktor K Jirsa"
] |
[] | 2025-06-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/modulating-task-outcome-value-to-mitigate
|
2506.21000
| null | null |
Modulating task outcome value to mitigate real-world procrastination via noninvasive brain stimulation
|
Procrastination represents one of the most prevalent behavioral problems affecting individual health and societal productivity. Although it is often conceptualized as a form of self-control failure, its underlying neurocognitive mechanisms are poorly understood. A leading model posits that procrastination arises from imbalanced competing motivations: the avoidance of negative task aversiveness and the pursuit of positive task outcomes; yet this theoretical framework has not been fully validated in real-world settings, nor has it been applied effectively to guide interventions. Here, we addressed this gap with a preregistered, double-blind, randomized controlled trial. We applied seven sessions of high-definition transcranial direct current stimulation (HD-tDCS) to the left dorsolateral prefrontal cortex (DLPFC), a key region for self-control, in chronic procrastinators. Using the intensive experience sampling method (iESM), we assessed the effect of anodal HD-tDCS on real-world procrastination behavior both as an offline after-effect (2-day interval) and at long-term retention (6-month follow-up). We found that this neuromodulation produced a lasting reduction in real-world procrastination, with effects sustained at a 6-month follow-up. While the intervention both decreased task aversiveness and increased perceived task outcome value, causal mediation analysis revealed a striking mechanism: the increase in task outcome value uniquely and sufficiently mediated the entire behavioral improvement. In conclusion, these findings provide causal evidence that enhancing DLPFC function mitigates procrastination by selectively amplifying the valuation of future rewards, not by simply reducing negative feelings about the task. This establishes a precise, value-driven neurocognitive pathway for self-control and offers a validated, theory-driven strategy for intervention.
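A hedged sketch of simple mediation (stimulation -> outcome value -> procrastination) via the product-of-coefficients method with a bootstrap confidence interval follows; the variable names and simulated data are illustrative only, not the trial's data or its exact mediation estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
tdcs = rng.integers(0, 2, n)                      # 1 = active HD-tDCS
value = 0.8 * tdcs + rng.normal(size=n)           # perceived outcome value
procrast = -0.7 * value + rng.normal(size=n)      # daily procrastination

def indirect(idx):
    # path a: treatment -> mediator
    a = sm.OLS(value[idx], sm.add_constant(tdcs[idx])).fit().params[1]
    # path b: mediator -> outcome, controlling for treatment
    Xb = sm.add_constant(np.column_stack([tdcs[idx], value[idx]]))
    b = sm.OLS(procrast[idx], Xb).fit().params[2]
    return a * b                                  # mediated (indirect) effect

boot = [indirect(rng.integers(0, n, n)) for _ in range(1000)]
print("indirect effect:", round(indirect(np.arange(n)), 3))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```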
| null |
https://arxiv.org/abs/2506.21000v1
|
https://arxiv.org/pdf/2506.21000v1.pdf
| null |
[
"Zhiyi Chen",
"Zhilin Ren",
"Wei Li",
"ZhenZhen Huo",
"ZhuangZheng Wang",
"Ye Liu",
"Bowen Hu",
"Wanting Chen",
"Ting Xu",
"Artemiy Leonov",
"Chenyan Zhang",
"Bernhard Hommel",
"Tingyong Feng"
] |
[] | 2025-06-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/predicting-readiness-to-engage-in
|
2506.20805
| null | null |
Predicting Readiness to Engage in Psychotherapy of People with Chronic Pain Based on their Pain-Related Narratives
|
Background. Chronic pain afflicts 20% of the global population. A strictly biomedical mind-set leaves many sufferers chasing somatic cures and has fuelled the opioid crisis. The biopsychosocial model recognises pain's subjective, multifactorial nature, yet uptake of psychosocial care remains low. We hypothesised that patients' own pain narratives would predict their readiness to engage in psychotherapy. Methods. In a cross-sectional pilot, 24 chronic-pain patients recorded narrated pain stories on Painstory.science. Open questions probed perceived pain source, interference and influencing factors. Narratives were cleaned, embedded with a pretrained large-language model and entered into machine-learning classifiers that output ready/not-ready probabilities. Results. The perception-domain model achieved 95.7% accuracy (specificity = 0.80, sensitivity = 1.00, AUC = 0.90). The factors-influencing-pain model yielded 83.3% accuracy (specificity = 0.60, sensitivity = 0.90, AUC = 0.75). Sentence count correlated with readiness for perception narratives (r = 0.54, p < .01) and factor narratives (r = 0.24, p < .05). Conclusion. Brief spoken pain narratives carry reliable signals of willingness to start psychosocial treatment. NLP-based screening could help clinicians match chronic-pain patients to appropriate interventions sooner, supporting a patient-centred biopsychosocial pathway.
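A hedged sketch of the described pipeline follows: embed narratives with a pretrained language model, then classify readiness. The embedding model (`all-MiniLM-L6-v2` via sentence-transformers), the toy narratives, and the logistic-regression classifier are assumptions, not the study's materials.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

narratives = [
    "The pain comes from my lower back and rules my whole day.",
    "I think stress at work makes the burning in my shoulder worse.",
    # ... one narrative per patient
] * 12                                  # padded so cross-validation can run
ready = [0, 1] * 12                     # placeholder readiness labels

model = SentenceTransformer("all-MiniLM-L6-v2")
X = model.encode(narratives)            # one embedding vector per narrative

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, ready, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```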
| null |
https://arxiv.org/abs/2506.20805v1
|
https://arxiv.org/pdf/2506.20805v1.pdf
| null |
[
"Saar Draznin Shiran",
"Boris Boltyansky",
"Alexandra Zhuravleva",
"Dmitry Scherbakov",
"Pavel Goldstein"
] |
[
"Large Language Model",
"Sensitivity",
"Sentence",
"Specificity"
] | 2025-06-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/brains-and-language-models-converge-on-a
|
2506.20489
| null | null |
Brains and language models converge on a shared conceptual space across different languages
|
Human languages differ widely in their forms, each having distinct sounds, scripts, and syntax. Yet, they can all convey similar meaning. Do different languages converge on a shared neural substrate for conceptual meaning? We used language models (LMs) and naturalistic fMRI to identify neural representations of the shared conceptual meaning of the same story as heard by native speakers of three languages: English, Chinese, and French. We found that LMs trained on entirely different languages converge onto a similar embedding space, especially in the middle layers. We then aimed to find if a similar shared space exists in the brains of different native speakers of the three languages. We trained voxelwise encoding models that align the LM embeddings with neural responses from one group of subjects speaking a single language. We then used the encoding models trained on one language to predict the neural activity in listeners of other languages. We found that models trained to predict neural activity for one language generalize to different subjects listening to the same content in a different language, across high-level language and default-mode regions. Our results suggest that the neural representations of meaning underlying different languages are shared across speakers of various languages, and that LMs trained on different languages converge on this shared meaning. These findings suggest that, despite the diversity of languages, shared meaning emerges from our interactions with one another and our shared world.
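A hedged sketch of the cross-language encoding-model transfer follows: fit voxelwise regression from LM embeddings to one group's responses, then test on listeners of another language hearing the same story. Ridge regression is a common choice for voxelwise encoding but an assumption here, and the arrays are random placeholders for aligned (time x features) and (time x voxels) data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, F, V = 300, 50, 1000
W = rng.normal(size=(F, V))                      # shared "meaning" mapping
emb_en = rng.normal(size=(T, F))                 # English LM embeddings
emb_fr = rng.normal(size=(T, F))                 # French LM embeddings
bold_en = emb_en @ W + rng.normal(size=(T, V))   # English listeners' fMRI
bold_fr = emb_fr @ W + rng.normal(size=(T, V))   # French listeners' fMRI

enc = Ridge(alpha=10.0).fit(emb_en, bold_en)     # train on one language
pred = enc.predict(emb_fr)                       # transfer to the other
r = [np.corrcoef(pred[:, v], bold_fr[:, v])[0, 1] for v in range(V)]
print("mean voxelwise transfer correlation:", round(float(np.mean(r)), 3))
```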
|
We found that models trained to predict neural activity for one language generalize to different subjects listening to the same content in a different language, across high-level language and default-mode regions.
|
https://arxiv.org/abs/2506.20489v1
|
https://arxiv.org/pdf/2506.20489v1.pdf
| null |
[
"Zaid Zada",
"Samuel A Nastase",
"Jixing Li",
"Uri Hasson"
] |
[] | 2025-06-25T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
}
] |
https://paperswithcode.com/paper/identifying-multi-compartment-hodgkin-huxley
|
2506.20233
| null | null |
Identifying multi-compartment Hodgkin-Huxley models with high-density extracellular voltage recordings
|
Multi-compartment Hodgkin-Huxley models are biophysical models of how electrical signals propagate throughout a neuron, and they form the basis of our knowledge of neural computation at the cellular level. However, these models have many free parameters that must be estimated for each cell, and existing fitting methods rely on intracellular voltage measurements that are highly challenging to obtain in vivo. Recent advances in neural recording technology with high-density probes and arrays enable dense sampling of extracellular voltage from many sites surrounding a neuron, allowing indirect measurement of many compartments of a cell simultaneously. Here, we propose a method for inferring the underlying membrane voltage, biophysical parameters, and the neuron's position relative to the probe, using extracellular measurements alone. We use an Extended Kalman Filter to infer membrane voltage and channel states using efficient, differentiable simulators. Then, we learn the model parameters by maximizing the marginal likelihood using gradient-based methods. We demonstrate the performance of this approach using simulated data and real neuron morphologies.
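A minimal scalar Kalman-filter skeleton of the inference step described follows: predict membrane voltage forward with a model, then correct with a noisy extracellular-like observation. The real method filters a nonlinear multi-compartment HH state with differentiable simulators; this toy leaky membrane only shows the predict/update structure, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, g, E, C, a = 1e-3, 0.1, -65.0, 1.0, 0.05   # toy biophysical constants
Q, R = 1e-4, 1e-2                              # process / observation noise

V_true, V_hat, P = -60.0, -70.0, 1.0
for step in range(2000):
    I = 1.0 if 500 < step < 1500 else 0.0      # injected current
    V_true += dt * (-g * (V_true - E) + I) / C + np.sqrt(Q) * rng.normal()
    y = a * V_true + np.sqrt(R) * rng.normal() # extracellular measurement
    # predict (Jacobian of the linearized dynamics: F = 1 - dt*g/C)
    F = 1.0 - dt * g / C
    V_hat = V_hat + dt * (-g * (V_hat - E) + I) / C
    P = F * P * F + Q
    # update with the Kalman gain
    K = P * a / (a * P * a + R)
    V_hat += K * (y - a * V_hat)
    P = (1 - K * a) * P
print("true V:", round(V_true, 2), " estimated V:", round(V_hat, 2))
```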
| null |
https://arxiv.org/abs/2506.20233v1
|
https://arxiv.org/pdf/2506.20233v1.pdf
| null |
[
"Ian Christopher Tanoh",
"Michael Deistler",
"Jakob H. Macke",
"Scott W. Linderman"
] |
[] | 2025-06-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-time-course-of-visuo-semantic
|
2506.19497
| null | null |
The time course of visuo-semantic representations in the human brain is captured by combining vision and language models
|
The human visual system provides us with a rich and meaningful percept of the world, transforming retinal signals into visuo-semantic representations. For a model of these representations, here we leveraged a combination of two currently dominating approaches: vision deep neural networks (DNNs) and large language models (LLMs). Using large-scale human electroencephalography (EEG) data recorded during object image viewing, we built encoding models to predict EEG responses using representations from a vision DNN, an LLM, and their fusion. We show that the fusion encoding model outperforms encoding models based on either the vision DNN or the LLM alone, as well as previous modelling approaches, in predicting neural responses to visual stimulation. The vision DNN and the LLM complemented each other in explaining stimulus-related signal in the EEG responses. The vision DNN uniquely captured earlier and broadband EEG signals, whereas the LLM uniquely captured later and low frequency signals, as well as detailed visuo-semantic stimulus information. Together, this provides a more accurate model of the time course of visuo-semantic processing in the human brain.
| null |
https://arxiv.org/abs/2506.19497v1
|
https://arxiv.org/pdf/2506.19497v1.pdf
| null |
[
"Boyan Rong",
"Alessandro Thomas Gifford",
"Emrah Düzel",
"Radoslaw Martin Cichy"
] |
[
"EEG"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/increasing-efficiency-of-the-chain-of
|
2506.19314
| null | null |
Increasing Efficiency of the Chain of Contagion Task
|
The chain of contagion task (CCT) is a psychological test that measures the strength of contagious beliefs in individuals. Contagious beliefs refer to the perception that certain objects, people, or substances can transmit contamination through mere contact or proximity (Rozin et al., 1986). In the CCT, a neutral object (usually a pen) is rubbed against an inherently disgusting object (e.g., toilet paper with feces) and participants are asked how contaminated this pen is on a scale from 0 (not at all) to 100 (very contaminated). Afterwards, this pen is rubbed against another pen, and again, the experienced degree of contamination is assessed. This is repeated 12 times. The CCT was first experimentally investigated by Tolin et al. (2004) in an in vivo procedure with real disgusting objects. The authors could show that contagious beliefs measured with the CCT show a strong bias for people with contamination-based obsessive-compulsive disorder (C-OCD) compared to anxious individuals and non-anxious controls. Fink-Lamotte et al. (2024) replicated these findings with an online version of the CCT using audio-imagery-based and video-based stimuli and instructions. Both studies used 12 pens to assess the degree of contagious beliefs. In this brief report, we show that after 8 pens hardly any additional between-participant variance is explained, and after the tenth pen no new information is gained. Thus, we recommend using only 8 pens instead of 12 when using the CCT to assess contagious beliefs.
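A hedged illustration of the report's efficiency argument follows: if later pens add little between-participant variance, scores from the first k pens converge on the full 12-pen score well before k = 12. The ratings below are simulated with a decaying contagion gradient, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pens = 120, 12
bias = rng.normal(60, 20, size=(n, 1))           # per-person contagion belief
decay = np.exp(-0.6 * np.arange(pens))           # contamination fades per pen
ratings = np.clip(bias * decay + rng.normal(0, 5, (n, pens)), 0, 100)

full = ratings.mean(axis=1)                      # full 12-pen score
for k in range(1, pens + 1):
    partial = ratings[:, :k].mean(axis=1)        # score from first k pens
    r = np.corrcoef(partial, full)[0, 1]
    print(f"pens 1..{k:2d}: r with full score = {r:.3f}")
```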
| null |
https://arxiv.org/abs/2506.19314v1
|
https://arxiv.org/pdf/2506.19314v1.pdf
| null |
[
"Lars OM Rothkegel",
"Jakob Fink-Lamotte"
] |
[] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/longitudinal-analysis-of-heart-rate
|
2506.19128
| null | null |
Longitudinal analysis of heart rate variability as it pertains to anxiety and readiness
|
The aim of this study is to explore the relationship between lifestyle choices, subjective experiences and objective biometric data in a single individual. The participant, at the time a male in his twenties, used the EliteHRV app to perform heart rate variability readings across twenty-six months, accompanied by logs about the previous day's activity as well as current emotional and physical state. The study will use a mixed-methods approach to analyze the data, including quantitative analysis of the biometric data and correlation analysis between the biometric data and subjective experience tags. Qualitative analysis of the daily logs will also be conducted to gain a deeper understanding of the participant's experiences and to identify keywords, people, or ideas that affect biometric output. The results of this study will provide insights into the relationship between subjective and objective measures, and the potential benefits or drawbacks of certain lifestyle choices and ways of thinking. The findings could have implications for the development of wearable-based personalized interventions for improving mental health and well-being.
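A hedged sketch of one standard HRV summary such app-based readings yield follows: RMSSD, the root mean square of successive RR-interval differences. The RR intervals below are synthetic stand-ins for exported readings.

```python
import numpy as np

rr_ms = np.array([812, 798, 825, 840, 803, 818, 831, 809], dtype=float)
diff = np.diff(rr_ms)                     # successive RR differences (ms)
rmssd = np.sqrt(np.mean(diff ** 2))       # higher values ~ more vagal tone
print(f"RMSSD = {rmssd:.1f} ms")
```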
| null |
https://arxiv.org/abs/2506.19128v1
|
https://arxiv.org/pdf/2506.19128v1.pdf
| null |
[
"Tucker Paron"
] |
[
"Heart Rate Variability"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/how-brains-build-higher-order-representations
|
2506.19057
| null | null |
How brains build higher order representations of uncertainty
|
Higher-order representations (HORs) are neural or computational states that are "about" first-order representations (FORs), encoding information not about the external world per se but about the agent's own representational processes -- such as the reliability, source, or structure of a FOR. These HORs appear critical to metacognition, learning, and even consciousness by some accounts, yet their dimensionality, construction, and neural substrates remain poorly understood. Here, we propose that metacognitive estimates of uncertainty or noise reflect a read-out of "posterior-like" HORs from a Bayesian perspective. We then discuss how these posterior-like HORs reflect a combination of "likelihood-like" estimates of current FOR uncertainty and "prior-like" learned distributions over expected FOR uncertainty, and how various emerging engineering and theory-based analytical approaches may be employed to examine the estimation processes and neural correlates associated with these highly under-explored components of our experienced uncertainty.
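A worked toy example of the "posterior-like" HOR described follows: a metacognitive noise estimate formed by combining a likelihood-like estimate of current FOR uncertainty with a prior-like learned expectation, via standard Gaussian precision weighting. The numbers are purely illustrative.

```python
like_var = 4.0      # likelihood-like: estimated noise in the current FOR
prior_var = 1.0     # prior-like: learned expectation of FOR noise
like_mean, prior_mean = 2.5, 1.0

post_prec = 1 / like_var + 1 / prior_var          # precisions add
post_var = 1 / post_prec
post_mean = post_var * (like_mean / like_var + prior_mean / prior_var)
print(f"posterior-like uncertainty estimate: {post_mean:.2f} "
      f"(variance {post_var:.2f})")               # pulled toward the prior
```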
| null |
https://arxiv.org/abs/2506.19057v1
|
https://arxiv.org/pdf/2506.19057v1.pdf
| null |
[
"Megan A. K. Peters",
"Hojjat Azimi Asrari"
] |
[] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/brainsymphony-a-transformer-driven-fusion-of
|
2506.18314
| null | null |
BrainSymphony: A Transformer-Driven Fusion of fMRI Time Series and Structural Connectivity
|
Existing foundation models for neuroimaging are often prohibitively large and data-intensive. We introduce BrainSymphony, a lightweight, parameter-efficient foundation model that achieves state-of-the-art performance while being pre-trained on significantly smaller public datasets. BrainSymphony's strong multimodal architecture processes functional MRI data through parallel spatial and temporal transformer streams, which are then efficiently distilled into a unified representation by a Perceiver module. Concurrently, it models structural connectivity from diffusion MRI using a novel signed graph transformer to encode the brain's anatomical structure. These powerful, modality-specific representations are then integrated via an adaptive fusion gate. Despite its compact design, our model consistently outperforms larger models on a diverse range of downstream benchmarks, including classification, prediction, and unsupervised network identification tasks. Furthermore, our model revealed novel insights into brain dynamics using attention maps on a unique external psilocybin neuroimaging dataset (pre- and post-administration). BrainSymphony establishes that architecturally-aware, multimodal models can surpass their larger counterparts, paving the way for more accessible and powerful research in computational neuroscience.
| null |
https://arxiv.org/abs/2506.18314v1
|
https://arxiv.org/pdf/2506.18314v1.pdf
| null |
[
"Moein Khajehnejad",
"Forough Habibollahi",
"Adeel Razi"
] |
[
"Diffusion MRI",
"Network Identification",
"Time Series"
] | 2025-06-23T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/perceptual-multistability-a-window-for-a
|
2506.18176
| null | null |
Perceptual multistability: a window for a multi-facet understanding of psychiatric disorders
|
Perceptual multistability, observed across species and sensory modalities, offers valuable insights into numerous cognitive functions and dysfunctions. For instance, differences in temporal dynamics and information integration during percept formation often distinguish clinical from non-clinical populations. Computational psychiatry can elucidate these variations, through two primary approaches: (i) Bayesian modeling, which treats perception as an unconscious inference, and (ii) an active, information-seeking perspective (e.g., reinforcement learning) framing perceptual switches as internal actions. Our synthesis aims to leverage multistability to bridge these computational psychiatry subfields, linking human and animal studies as well as connecting behavior to underlying neural mechanisms. Perceptual multistability emerges as a promising non-invasive tool for clinical applications, facilitating translational research and enhancing our mechanistic understanding of cognitive processes and their impairments.
| null |
https://arxiv.org/abs/2506.18176v1
|
https://arxiv.org/pdf/2506.18176v1.pdf
| null |
[
"Shervin Safavi",
"Danaé Rolland",
"Philipp Sterzer",
"Renaud Jardri",
"Pantelis Leptourgos"
] |
[] | 2025-06-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-relationship-between-cognition-and
|
2506.17970
| null | null |
The Relationship between Cognition and Computation: "Global-first" Cognition versus Local-first Computation
|
What fundamental research questions are essential for advancing toward brain-like AI or AGI (Artificial General Intelligence) capable of performing any intellectual task a human can? Should it be something like the Turing machine (1936), which answers the question "What is computation?" and lays the foundation for the entire field of computer science? Or should it be something like Shannon's mathematical theory of communication (1948), which answers the question "What is information?" and forms the basis for modern communication technology? We believe the key question today is the relationship between cognition and computation (RCC). For example, the widely discussed question "Will artificial intelligence replace the human mind?" is, in essence and in scientific terms, an issue concerning RCC. We have chosen to classify RCC into four categories: 1. The relationship between the primitives of cognition and the primitives of computation. 2. The relationship between the anatomical structure of neural representation of cognition and the computational architecture of artificial intelligence. 3. The relationship between emergents in cognition and emergents in computation. 4. The relationship between the mathematical foundations of cognition and computation.
| null |
https://arxiv.org/abs/2506.17970v1
|
https://arxiv.org/pdf/2506.17970v1.pdf
| null |
[
"Lin Chen"
] |
[] | 2025-06-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/which-consciousness-can-be-artificialized
|
2506.18935
| null | null |
Which Consciousness Can Be Artificialized? Local Percept-Perceiver Phenomenon for the Existence of Machine Consciousness
|
This paper presents a novel paradigm of the local percept-perceiver phenomenon to formalize certain observations in neuroscientific theories of consciousness. Using this model, a set-theoretic formalism is developed for artificial systems, and the existence of machine consciousness is proved by invoking Zermelo-Fraenkel set theory. The article argues for the possibility of a reductionist form of epistemic consciousness within machines.
| null |
https://arxiv.org/abs/2506.18935v1
|
https://arxiv.org/pdf/2506.18935v1.pdf
| null |
[
"Shri Lal Raghudev Ram Singh"
] |
[] | 2025-06-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/sequence-to-sequence-models-with-attention
|
2506.17424
| null | null |
Sequence-to-Sequence Models with Attention Mechanistically Map to the Architecture of Human Memory Search
|
Past work has long recognized the important role of context in guiding how humans search their memory. While context-based memory models can explain many memory phenomena, it remains unclear why humans develop such architectures over possible alternatives in the first place. In this work, we demonstrate that foundational architectures in neural machine translation -- specifically, recurrent neural network (RNN)-based sequence-to-sequence models with attention -- exhibit mechanisms that directly correspond to those specified in the Context Maintenance and Retrieval (CMR) model of human memory. Since neural machine translation models have evolved to optimize task performance, their convergence with human memory models provides a deeper understanding of the functional role of context in human memory, as well as presenting new ways to model human memory. Leveraging this convergence, we implement a neural machine translation model as a cognitive model of human memory search that is both interpretable and capable of capturing complex dynamics of learning. We show that our model accounts for both averaged and optimal human behavioral patterns as effectively as context-based memory models. Further, we demonstrate additional strengths of the proposed model by evaluating how memory search performance emerges from the interaction of different model components.
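A hedged sketch of the CMR-style context update the paper maps onto seq2seq attention follows: context drifts as items are encoded, c_t = rho_t * c_{t-1} + beta * c_in, with rho_t chosen to keep ||c_t|| = 1 (the standard CMR form). Items are random unit vectors, and retrieval is cued with the end-of-list context, producing a recency gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items, beta = 64, 10, 0.4
items = rng.normal(size=(n_items, d))
items /= np.linalg.norm(items, axis=1, keepdims=True)

c = rng.normal(size=d)
c /= np.linalg.norm(c)                             # start from a unit context
contexts = []
for f in items:
    dot = c @ f
    rho = np.sqrt(1 + beta**2 * (dot**2 - 1)) - beta * dot  # keeps ||c|| = 1
    c = rho * c + beta * f                         # context drifts toward item
    contexts.append(c.copy())

cue = contexts[-1]                                 # end-of-list context cue
support = np.round([cue @ ctx for ctx in contexts], 2)
print("retrieval support by serial position:", support)  # recency gradient
```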
| null |
https://arxiv.org/abs/2506.17424v1
|
https://arxiv.org/pdf/2506.17424v1.pdf
| null |
[
"Nikolaus Salvatore",
"Qiong Zhang"
] |
[
"Machine Translation",
"Translation"
] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/challenges-in-grounding-language-in-the-real
|
2506.17375
| null | null |
Challenges in Grounding Language in the Real World
|
A long-term goal of Artificial Intelligence is to build a language understanding system that allows a human to collaborate with a physical robot using language that is natural to the human. In this paper we highlight some of the challenges in doing this, and propose a solution that integrates the abilities of a cognitive agent capable of interactive task learning in a physical robot with the linguistic abilities of a large language model. We also point the way to an initial implementation of this approach.
| null |
https://arxiv.org/abs/2506.17375v1
|
https://arxiv.org/pdf/2506.17375v1.pdf
| null |
[
"Peter Lindes",
"Kaoutar Skiker"
] |
[
"Language Modeling",
"Language Modelling",
"Large Language Model"
] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/brain-inspired-interpretable-reservoir
|
2506.17083
| null | null |
Brain-inspired interpretable reservoir computing with resonant recurrent neural networks
|
Traditional artificial neural networks consist of nodes with non-oscillatory dynamics. Biological neural networks, on the other hand, consist of oscillatory components embedded in an oscillatory environment. Motivated by this feature of biological neurons, we describe a reservoir computing framework with explicit damped, oscillatory node dynamics. We express the oscillatory dynamics using two history-dependent terms to connect these dynamics with existing artificial neural network approaches and apply physical and stationary constraints to reduce the number of free parameters. We then optimize and illustrate reservoir performance by classifying different brain rhythms associated with epilepsy and show that reservoir elements support classification by resonating with features of the input signals. Applying the same reservoir network to visual and auditory signal types, we show the reservoir generalizes for accurate classification with few trainable parameters. Compared to existing artificial neural network approaches, the proposed resonant reservoir network (RRN) utilizes oscillatory dynamics expressed as a straightforward extension of traditional artificial neural networks, produces interpretable features for classification, avoids computationally expensive training (e.g., backpropagation), and performs well with few parameters in different classification scenarios. We propose that RRNs may serve as efficient, biologically implemented building blocks to achieve complex goals in biological and artificial neural networks.
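A hedged sketch of the damped, oscillatory node dynamics described follows: each reservoir unit is a discretized resonator with two history-dependent terms (an AR(2) form), so it rings at its own frequency and responds most strongly to inputs near that frequency. The node frequencies, damping, and input rhythm are toy assumptions.

```python
import numpy as np

fs, T = 200, 400
t = np.arange(T) / fs
u = np.sin(2 * np.pi * 12 * t)                # input: a 12 Hz "beta" rhythm

freqs = np.linspace(2, 40, 30)                # one resonant frequency per node
gamma = 0.05                                  # damping of each oscillator
a1 = 2 * np.exp(-gamma) * np.cos(2 * np.pi * freqs / fs)
a2 = -np.exp(-2 * gamma)

X = np.zeros((T, freqs.size))
for n in range(2, T):
    X[n] = a1 * X[n - 1] + a2 * X[n - 2] + u[n]   # two history-dependent terms

energy = (X ** 2).mean(axis=0)                # which node resonates most?
print("most responsive node is tuned to "
      f"{freqs[np.argmax(energy)]:.1f} Hz")   # ~ the 12 Hz input frequency
```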
| null |
https://arxiv.org/abs/2506.17083v1
|
https://arxiv.org/pdf/2506.17083v1.pdf
| null |
[
"Mark A. Kramer"
] |
[
"Classification"
] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/efficient-and-faithful-reconstruction-of
|
2506.17079
| null | null |
Efficient and faithful reconstruction of dynamical attractors using homogeneous differentiators
|
Reconstructing the attractors of complex nonlinear dynamical systems from available measurements is key to analyse and predict their time evolution. Existing attractor reconstruction methods typically rely on topological embedding and may produce poor reconstructions, which differ significantly from the actual attractor, because measurements are corrupted by noise and often available only for some of the state variables and/or their combinations, and the time series are often relatively short. Here, we propose the use of Homogeneous Differentiators (HD) to effectively de-noise measurements and more faithfully reconstruct attractors of nonlinear systems. Homogeneous Differentiators are supported by rigorous theoretical guarantees about their de-noising capabilities, and their results can be fruitfully combined with time-delay embedding, differential embedding and functional observability. We apply our proposed HD-based methodology to simulated dynamical models of increasing complexity, from the Lorenz system to the Hindmarsh-Rose model and the Epileptor model for neural dynamics, as well as to empirical data of EEG recordings. In the presence of corrupting noise of various types, we obtain drastically improved quality and resolution of the reconstructed attractors, as well as significantly reduced computational time, which can be orders of magnitude lower than that of alternative methods. Our tests show the flexibility and effectiveness of Homogeneous Differentiators and suggest that they can become the tool of choice for preprocessing noisy signals and reconstructing attractors of highly nonlinear dynamical systems from both theoretical models and real data.
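A hedged sketch of a first-order homogeneous (super-twisting style) differentiator, the family of de-noising differentiators this line of work builds on, follows; the gains use the usual Levant-style tuning and, together with the test signal, are assumptions rather than the authors' exact construction.

```python
import numpy as np

dt, T = 1e-3, 4.0
t = np.arange(0, T, dt)
f = np.sin(2 * np.pi * t) + 0.02 * np.random.default_rng(0).normal(size=t.size)

L = 45.0                          # assumed bound on |f''| (here ~ (2*pi)^2)
l0, l1 = 1.5 * np.sqrt(L), 1.1 * L
z0, z1 = 0.0, 0.0                 # estimates of f and f'
z1_hist = np.empty_like(t)
for i, y in enumerate(f):
    e = z0 - y
    z0 += dt * (-l0 * np.sqrt(abs(e)) * np.sign(e) + z1)  # homogeneous term
    z1 += dt * (-l1 * np.sign(e))                         # discontinuous term
    z1_hist[i] = z1

true_df = 2 * np.pi * np.cos(2 * np.pi * t)
err = np.abs(z1_hist[t.size // 2:] - true_df[t.size // 2:]).mean()
print("mean |f' error| after convergence:", round(float(err), 3))
```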
| null |
https://arxiv.org/abs/2506.17079v1
|
https://arxiv.org/pdf/2506.17079v1.pdf
| null |
[
"Uros Sutulovic",
"Daniele Proverbio",
"Rami Katz",
"Giulia Giordano"
] |
[
"EEG"
] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/characterizing-neural-manifolds-properties
|
2506.12187
| null | null |
Characterizing Neural Manifolds' Properties and Curvatures using Normalizing Flows
|
Neuronal activity is found to lie on low-dimensional manifolds embedded within the high-dimensional neuron space. Variants of principal component analysis are frequently employed to assess these manifolds. These methods are, however, limited by assuming a Gaussian data distribution and a flat manifold. In this study, we introduce a method designed to satisfy three core objectives: (1) extract coordinated activity across neurons, described either statistically as correlations or geometrically as manifolds; (2) identify a small number of latent variables capturing these structures; and (3) offer an analytical and interpretable framework characterizing statistical properties by a characteristic function and describing manifold geometry through a collection of charts. To this end, we employ Normalizing Flows (NFs), which learn an underlying probability distribution of data by an invertible mapping between data and latent space. Their simplicity and ability to compute exact likelihoods distinguish them from other generative networks. We adjust the NF's training objective to distinguish between relevant (in manifold) and noise dimensions (out of manifold). Additionally, we find that different behavioral states align with the components of the latent Gaussian mixture model, enabling their treatment as distinct curved manifolds. Subsequently, we approximate the network for each mixture component with a quadratic mapping, allowing us to characterize both neural manifold curvature and non-Gaussian correlations among recording channels. Applying the method to recordings in macaque visual cortex, we demonstrate that state-dependent manifolds are curved and exhibit complex statistical dependencies. Our approach thus enables an expressive description of neural population activity, uncovering non-linear interactions among groups of neurons.
| null |
https://arxiv.org/abs/2506.12187v2
|
https://arxiv.org/pdf/2506.12187v2.pdf
| null |
[
"Peter Bouss",
"Sandra Nestler",
"Kirsten Fischer",
"Claudia Merger",
"Alexandre René",
"Moritz Helias"
] |
[] | 2025-06-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9",
"description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.",
"full_name": "Normalizing Flows",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.",
"name": "Distribution Approximation",
"parent": null
},
"name": "Normalizing Flows",
"source_title": "Variational Inference with Normalizing Flows",
"source_url": "http://arxiv.org/abs/1505.05770v6"
},
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
}
] |
https://paperswithcode.com/paper/pacellm-brain-inspired-large-language-models
|
2506.17310
| null | null |
PaceLLM: Brain-Inspired Large Language Models for Long-Context Understanding
|
While Large Language Models (LLMs) demonstrate strong performance across domains, their long-context capabilities are limited by transient neural activations causing information decay and unstructured feed-forward network (FFN) weights leading to semantic fragmentation. Inspired by the brain's working memory and cortical modularity, we propose PaceLLM, featuring two innovations: (1) a Persistent Activity (PA) Mechanism that mimics prefrontal cortex (PFC) neurons' persistent firing by introducing an activation-level memory bank to dynamically retrieve, reuse, and update critical FFN states, addressing contextual decay; and (2) Cortical Expert (CE) Clustering that emulates task-adaptive neural specialization to reorganize FFN weights into semantic modules, establishing cross-token dependencies and mitigating fragmentation. Extensive evaluations show that PaceLLM achieves 6% improvement on LongBench's Multi-document QA and 12.5-17.5% performance gains on Infinite-Bench tasks, while extending measurable context length to 200K tokens in Needle-In-A-Haystack (NIAH) tests. This work pioneers brain-inspired LLM optimization and is complementary to other works. Besides, it can be generalized to any model and enhance their long-context performance and interpretability without structural overhauls.
| null |
https://arxiv.org/abs/2506.17310v1
|
https://arxiv.org/pdf/2506.17310v1.pdf
| null |
[
"Kangcong Li",
"Peng Ye",
"Chongjun Tu",
"Lin Zhang",
"Chunfeng Song",
"Jiamin Wu",
"Tao Yang",
"Qihao Zheng",
"Tao Chen"
] |
[
"Long-Context Understanding"
] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-task-agnostic-skill-bases-to-uncover
|
2506.15190
| null | null |
Learning Task-Agnostic Skill Bases to Uncover Motor Primitives in Animal Behaviors
|
Animals flexibly recombine a finite set of core motor primitives to meet diverse task demands, but existing behavior-segmentation methods oversimplify this process by imposing discrete syllables under restrictive generative assumptions. To reflect the animal behavior generation procedure, we introduce skill-based imitation learning (SKIL) for behavior understanding, a reinforcement learning-based imitation framework that (1) infers interpretable skill sets, i.e., latent basis functions of behavior, by leveraging representation learning on transition probabilities, and (2) parameterizes policies as dynamic mixtures of these skills. We validate our approach on a simple grid world, a discrete labyrinth, and unconstrained videos of freely moving animals. Across tasks, it identifies reusable skill components, learns continuously evolving compositional policies, and generates realistic trajectories beyond the capabilities of traditional discrete models. By exploiting generative behavior modeling with compositional representations, our method offers a concise, principled account of how complex animal behaviors emerge from dynamic combinations of fundamental motor primitives.
| null |
https://arxiv.org/abs/2506.15190v1
|
https://arxiv.org/pdf/2506.15190v1.pdf
| null |
[
"Jiyi Wang",
"Jingyang Ke",
"Bo Dai",
"Anqi Wu"
] |
[
"Imitation Learning",
"Representation Learning"
] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/poco-scalable-neural-forecasting-through
|
2506.14957
| null | null |
POCO: Scalable Neural Forecasting through Population Conditioning
|
Predicting future neural activity is a core challenge in modeling brain dynamics, with applications ranging from scientific investigation to closed-loop neurotechnology. While recent models of population activity emphasize interpretability and behavioral decoding, neural forecasting, particularly across multi-session, spontaneous recordings, remains underexplored. We introduce POCO, a unified forecasting model that combines a lightweight univariate forecaster with a population-level encoder to capture both neuron-specific and brain-wide dynamics. Trained across five calcium imaging datasets spanning zebrafish, mice, and C. elegans, POCO achieves state-of-the-art accuracy at cellular resolution in spontaneous behaviors. After pre-training, POCO rapidly adapts to new recordings with minimal fine-tuning. Notably, POCO's learned unit embeddings recover biologically meaningful structure, such as brain region clustering, without any anatomical labels. Our comprehensive analysis reveals several key factors influencing performance, including context length, session diversity, and preprocessing. Together, these results position POCO as a scalable and adaptable approach for cross-session neural forecasting and offer actionable insights for future model design. By enabling accurate, generalizable forecasting models of neural dynamics across individuals and species, POCO lays the groundwork for adaptive neurotechnologies and large-scale efforts for neural foundation models.
| null |
https://arxiv.org/abs/2506.14957v1
|
https://arxiv.org/pdf/2506.14957v1.pdf
| null |
[
"Yu Duan",
"Hamza Tahir Chaudhry",
"Misha B. Ahrens",
"Christopher D Harvey",
"Matthew G Perich",
"Karl Deisseroth",
"Kanaka Rajan"
] |
[] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-from-the-past-with-cascading
|
2506.14598
| null | null |
Learning From the Past with Cascading Eligibility Traces
|
Animals often receive information about errors and rewards after a significant delay. For example, there is typically a delay of tens to hundreds of milliseconds between motor actions and visual feedback. The standard approach to handling delays in models of synaptic plasticity is to use eligibility traces. However, standard eligibility traces that decay exponentially mix together any events that happen during the delay, presenting a problem for any credit assignment signal that occurs with a significant delay. Here, we show that eligibility traces formed by a state-space model, inspired by a cascade of biochemical reactions, can provide a temporally precise memory for handling credit assignment at arbitrary delays. We demonstrate that these cascading eligibility traces (CETs) work for credit assignment at behavioral time-scales, ranging from seconds to minutes. In addition, we can use CETs to handle extremely slow retrograde signals, such as those found in retrograde axonal signaling. These results demonstrate that CETs can provide an excellent basis for modeling synaptic plasticity.
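A minimal sketch of the cascading-trace idea follows: a chain of leaky stages (a linear state-space cascade) turns a single synaptic event into a gamma-shaped trace that peaks at a controllable delay, rather than an exponential that mixes all events during the delay. The stage count and time constant are assumptions, not the paper's parameters.

```python
import numpy as np

dt, T, tau, K = 0.01, 30.0, 2.0, 8          # seconds; K cascade stages
steps = int(T / dt)
e = np.zeros(K)
peak_t, peak_v = 0.0, 0.0
for i in range(steps):
    t = i * dt
    inp = 1.0 / dt if i == 0 else 0.0       # one synaptic event at t = 0
    drive = np.concatenate(([inp], e[:-1])) # stage k is driven by stage k-1
    e += dt * (-e / tau + drive / tau)      # leaky integration per stage
    if e[-1] > peak_v:
        peak_v, peak_t = e[-1], t
print(f"final-stage trace peaks at t = {peak_t:.1f} s "
      f"(theory: (K-1)*tau = {(K - 1) * tau:.1f} s)")
```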
| null |
https://arxiv.org/abs/2506.14598v1
|
https://arxiv.org/pdf/2506.14598v1.pdf
| null |
[
"Tokiniaina Raharison Ralambomihanta",
"Ivan Anokhin",
"Roman Pogodin",
"Samira Ebrahimi Kahou",
"Jonathan Cornford",
"Blake Aaron Richards"
] |
[] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/exploring-eeg-indicators-to-evaluate
|
2506.14339
| null | null |
Exploring EEG Indicators to Evaluate Listening Difficulties in Noisy Environments
|
Auditory processing difficulties involve challenges in understanding speech in noisy environments despite normal hearing. However, the neural mechanisms remain unclear, and standardized diagnostic criteria are lacking. This study examined neural indicators using EEG under realistic noisy conditions. Ten Japanese-speaking university students participated in auditory tasks, including a resting state, a lecture attention task with background noise, and a task requiring attention to background noise. The study analyzed the peak frequency and power of alpha waves, the long-range temporal correlation of alpha oscillations, and the absolute power of delta waves. Results showed a significant reduction in the power of alpha waves during the background noise attention task, suggesting increased cognitive load. In contrast, the peak frequency of alpha waves remained stable, indicating limited sensitivity to cognitive demand changes. Long-range temporal correlation increased under tasks requiring auditory attention, reflecting sustained attention-related neural dynamics, while the absolute power of delta waves showed no significant variation across tasks. Regression analysis revealed a significant negative correlation between the power of alpha waves in noisy conditions and screening scores for auditory processing difficulties, suggesting its potential as a neural indicator.
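A hedged sketch of the two alpha-band indicators used follows: peak alpha frequency and alpha power from a Welch power spectral density. The signal below is synthetic; real use would take one EEG channel per task block, and the Welch parameters are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.2 * t) + 0.8 * rng.normal(size=t.size)

f, psd = welch(eeg, fs=fs, nperseg=4 * fs)       # 4 s windows
alpha = (f >= 8) & (f <= 13)
peak_freq = f[alpha][np.argmax(psd[alpha])]      # peak alpha frequency
alpha_power = psd[alpha].sum() * (f[1] - f[0])   # integrated band power
print(f"alpha peak: {peak_freq:.2f} Hz, alpha power: {alpha_power:.3f}")
```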
| null |
https://arxiv.org/abs/2506.14339v1
|
https://arxiv.org/pdf/2506.14339v1.pdf
| null |
[
"Azuki Onaya",
"Hiroki Tanaka"
] |
[
"Diagnostic",
"EEG"
] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-sex-dependent-effects-of-psychedelics-on
|
2506.17293
| null | null |
The Sex-Dependent Effects of Psychedelics on Myelination in APOE4 Mice
|
Several studies have linked myelin abnormalities with neuropsychiatric disorders; others have implicated psychedelics as a potential therapeutic for such conditions. One risk factor for these demyelinating disorders is a mutation in the Apolipoprotein E gene known as APOE4. This variant impedes the cholesterol regulation of the oligodendrocytes responsible for the myelination, or insulation, of neurons when compared to the wild-type phenotype. In this work, I advance knowledge of cellular pathways involved in the progression of APOE4-related diseases and elucidate the effects of psychedelics on the brain. Myelin sheaths are vital for maintaining neural pathways, and healthy oligodendrocytes serve as a prerequisite for axonal integrity. Further, the Kaufer Lab has observed significant behavioral differences between male and female APOE4 mice following psychedelic treatment with 2,5-Dimethoxy-4-iodoamphetamine, or DOI, a serotonin receptor ligand. The sex-dependent mechanisms influencing symptom differences and treatment outcomes in Alzheimer's disease (AD) are unclear, and could be key to developing successful therapeutics for myelin-related issues. I hypothesize that administration of DOI will increase the myelination activity of oligodendrocytes in female APOE4 mice compared with their male counterparts or controls. Preliminary results show a significant increase in myelin basic protein (MBP) in the CA1, or short-term, and CA2, or long-term, areas only in female APOE4 mice post-introduction of DOI to the system. This aligns with behavioral data indicating fewer anxiety-related behaviors in female APOE4 mice after DOI administration. These findings reveal distinct biological mechanisms in male and female brain degeneration and suggest potential for sex-specific therapeutics.
| null |
https://arxiv.org/abs/2506.17293v1
|
https://arxiv.org/pdf/2506.17293v1.pdf
| null |
[
"Sanjana Shankar"
] |
[] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/metric-framework-of-coherent-activity
|
2506.12291
| null | null |
Metric Framework of Coherent Activity Patterns Identification in Spiking Neuronal Networks
|
Partial synchronization plays a crucial role in the functioning of neuronal networks: selective, coordinated activation of neurons enables information processing that flexibly adapts to a changing computational context. Since the structure of coherent activity patterns reflects the network's current state, developing automated tools to identify them is a key challenge in neurodynamics. Existing methods for analyzing neuronal dynamics tend to focus on global characteristics of the network, such as its aggregated synchrony level. While this approach can distinguish between the network's main dynamical states, it cannot reveal the localization or properties of distinct coherent patterns. In this work, we propose a new perspective on neural dynamics analysis that enables the study of network coherence at the single-neuron scale. We interpret the network as a metric space of neurons and represent its instantaneous state as an activity function on that space. We identify specific coherent activity clusters as regions where the activity function exhibits spatial continuity. Each cluster's activity is further characterized using the analytical properties of the activity function within that region. This approach yields a concise yet detailed algorithmic profile of the network's activity patterns.
| null |
https://arxiv.org/abs/2506.12291v1
|
https://arxiv.org/pdf/2506.12291v1.pdf
| null |
[
"Daniil Radushev",
"Olesia Dogonasheva",
"Boris Gutkin",
"Denis Zakharov"
] |
[] | 2025-06-14T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/mapping-neural-theories-of-consciousness-onto
|
2506.12224
| null | null |
Mapping Neural Theories of Consciousness onto the Common Model of Cognition
|
A beginning is made at mapping four neural theories of consciousness onto the Common Model of Cognition. This highlights how the four jointly depend on recurrent local modules plus a cognitive cycle operating on a global working memory with complex states, and reveals how an existing integrative view of consciousness from a neural perspective aligns with the Common Model.
| null |
https://arxiv.org/abs/2506.12224v1
|
https://arxiv.org/pdf/2506.12224v1.pdf
| null |
[
"Paul S. Rosenbloom",
"John E. Laird",
"Christian Lebiere",
"Andrea Stocco"
] |
[] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/scale-invariance-drives-convergence-in-ai-and
|
2506.12117
| null | null |
Scale-Invariance Drives Convergence in AI and Brain Representations
|
Despite variations in architecture and pretraining strategies, recent studies indicate that large-scale AI models often converge toward similar internal representations that also align with neural activity. We propose that scale-invariance, a fundamental structural principle in natural systems, is a key driver of this convergence. In this work, we propose a multi-scale analytical framework to quantify two core aspects of scale-invariance in AI representations: dimensional stability and structural similarity across scales. We further investigate whether these properties can predict alignment performance with functional Magnetic Resonance Imaging (fMRI) responses in the visual cortex. Our analysis reveals that embeddings with more consistent dimension and higher structural similarity across scales align better with fMRI data. Furthermore, we find that the manifold structure of fMRI data is more concentrated, with most features dissipating at smaller scales. Embeddings with similar scale patterns align more closely with fMRI data. We also show that larger pretraining datasets and the inclusion of language modalities enhance the scale-invariance properties of embeddings, further improving neural alignment. Our findings indicate that scale-invariance is a fundamental structural principle that bridges artificial and biological representations, providing a new framework for evaluating the structural quality of human-like AI systems.
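A hedged sketch of one multi-scale measure in this vein follows: effective dimensionality via the participation ratio, PR = (sum_i l_i)^2 / sum_i l_i^2 over covariance eigenvalues l_i, computed on nested subsamples to probe how stable the dimension is across scales. The exact metrics in the paper are not specified here, and the embedding matrix is a random placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(2048, 256)) @ rng.normal(size=(256, 256)) * 0.1

def participation_ratio(X):
    lam = np.linalg.eigvalsh(np.cov(X.T))        # covariance eigenvalues
    lam = np.clip(lam, 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

for n in (128, 256, 512, 1024, 2048):            # increasing sample scales
    pr = participation_ratio(emb[rng.choice(len(emb), n, replace=False)])
    print(f"n = {n:5d}: participation ratio = {pr:6.1f}")
```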
| null |
https://arxiv.org/abs/2506.12117v1
|
https://arxiv.org/pdf/2506.12117v1.pdf
| null |
[
"Junjie Yu",
"Wenxiao Ma",
"Jianyu Zhang",
"Haotian Deng",
"Zihan Deng",
"Yi Guo",
"Quanying Liu"
] |
[] | 2025-06-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
}
] |
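A hedged sketch of the two diagnostics named in the Yu et al. abstract above: dimensional stability and structural similarity across scales. The concrete estimators used here (participation ratio on nested subsamples; Spearman-correlated probe geometries from PCA spaces fit at two scales) are plausible instantiations assumed for illustration, not the paper's exact definitions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 64))  # stand-in embeddings: samples x features

def participation_ratio(Z):
    """Effective dimensionality from the covariance eigenspectrum."""
    lam = np.linalg.eigvalsh(np.cov(Z.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

# Dimensional stability: does the effective dimension drift as we enlarge
# the sample of stimuli used to probe the representation?
for k in (100, 300, 1000):
    print(f"scale {k:4d}: effective dim = {participation_ratio(X[:k]):.1f}")

def geometry_at_scale(X, k, probe, rank=10):
    """Pairwise probe distances in the top-`rank` PCA space fit on X[:k]."""
    mu = X[:k].mean(0)
    _, _, Vt = np.linalg.svd(X[:k] - mu, full_matrices=False)
    return pdist((probe - mu) @ Vt[:rank].T)

# Structural similarity across scales: rank-correlate the probe geometry
# recovered at a small and at a large analysis scale.
probe = X[:80]
rho, _ = spearmanr(geometry_at_scale(X, 100, probe),
                   geometry_at_scale(X, 1000, probe))
print(f"cross-scale structural similarity (Spearman rho) = {rho:.2f}")
```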
https://paperswithcode.com/paper/differences-in-neurovascular-coupling-in
|
2506.11634
| null | null |
Differences in Neurovascular Coupling in Patients with Major Depressive Disorder: Evidence from Simultaneous Resting-State EEG-fNIRS
|
Neurovascular coupling (NVC) refers to the process by which local neural activity, through energy consumption, induces changes in regional cerebral blood flow to meet the metabolic demands of neurons. Event-related studies have shown that the hemodynamic response typically lags behind neural activation by 4-6 seconds. However, little is known about how NVC is altered in patients with major depressive disorder (MDD) and throughout the recovery process. In this study, we employed simultaneous resting-state electroencephalography (rsEEG) and functional near-infrared spectroscopy (fNIRS) to monitor neural and hemodynamic signals. Twelve patients with MDD during the acute phase, ten patients in the maintenance or consolidation phase, and six healthy controls were involved. We calculated the differences in coherence and temporal delay between spontaneous peak electrophysiological activity and hemodynamic responses across groups during the resting state in the prefrontal cortex (PFC). We found that the neural activity and its subsequent correlation with hemodynamic responses were significantly higher in patients during the maintenance phase. The rise time from the lowest to the highest point of correlation was shorter in healthy individuals than in patients in the acute phase, and gradually recovered during remission. By leveraging wearable neuroimaging techniques, this study reveals alterations in neurovascular coupling in depression and offers novel multimodal insights into potential biomarkers for MDD and its recovery process.
| null |
https://arxiv.org/abs/2506.11634v1
|
https://arxiv.org/pdf/2506.11634v1.pdf
| null |
[
"Feng Yan",
"Xiaobin Wang",
"Yao Zhao",
"Shuyi Yang",
"Zhiren Wang"
] |
[
"EEG"
] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
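The 4-6 s neurovascular lag discussed in the Yan et al. abstract above can, in the simplest setting, be estimated as the lag that maximizes the cross-correlation between an EEG band-power envelope and a hemodynamic signal. The synthetic signals, sampling rate, and lag window below are assumptions made for a runnable illustration, not the study's pipeline.

```python
import numpy as np

fs = 10.0                      # common sampling rate after downsampling, Hz
t = np.arange(0, 300, 1 / fs)  # 5 minutes of resting state
rng = np.random.default_rng(2)

# Smooth stand-in for an EEG band-power envelope.
neural = np.abs(np.convolve(rng.standard_normal(t.size),
                            np.ones(20) / 20, mode="same"))
true_delay_s = 5.0             # typical 4-6 s neurovascular lag
shift = int(true_delay_s * fs)
hemo = np.roll(neural, shift) + 0.2 * rng.standard_normal(t.size)

def xcorr_delay(x, y, fs, max_lag_s=10.0):
    """Lag (s) at which y best trails x, via normalized cross-correlation."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.dot(x[max(0, -l):x.size - max(0, l)],
                         y[max(0, l):y.size - max(0, -l)]) for l in lags])
    return lags[r.argmax()] / fs, r.max() / x.size

delay, peak_r = xcorr_delay(neural, hemo, fs)
print(f"estimated NVC delay: {delay:.1f} s (peak correlation {peak_r:.2f})")
```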
https://paperswithcode.com/paper/voxel-level-brain-states-prediction-using
|
2506.11455
| null | null |
Voxel-Level Brain States Prediction Using Swin Transformer
|
Understanding brain dynamics is important for neuroscience and mental health. Functional magnetic resonance imaging (fMRI) enables the measurement of neural activities through blood-oxygen-level-dependent (BOLD) signals, which represent brain states. In this study, we aim to predict future human resting brain states with fMRI. Due to the 3D voxel-wise spatial organization and temporal dependencies of the fMRI data, we propose a novel architecture which employs a 4D Shifted Window (Swin) Transformer as encoder to efficiently learn spatio-temporal information and a convolutional decoder to enable brain state prediction at the same spatial and temporal resolution as the input fMRI data. We used 100 unrelated subjects from the Human Connectome Project (HCP) for model training and testing. Our novel model has shown high accuracy when predicting 7.2s of resting-state brain activity based on the prior 23.04s of the fMRI time series. The predicted brain states highly resemble BOLD contrast and dynamics. This work shows promising evidence that the spatiotemporal organization of the human brain can be learned by a Swin Transformer model at high resolution, offering potential for reducing fMRI scan time and for the development of brain-computer interfaces in the future.
| null |
https://arxiv.org/abs/2506.11455v1
|
https://arxiv.org/pdf/2506.11455v1.pdf
| null |
[
"Yifei Sun",
"Daniel Chahine",
"Qinghao Wen",
"Tianming Liu",
"Xiang Li",
"Yixuan Yuan",
"Fernando Calamante",
"Jinglei Lv"
] |
[
"Prediction"
] | 2025-06-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
}
] |
https://paperswithcode.com/paper/convolutional-method-for-data-assimilation-an
|
2506.11365
| null | null |
Convolutional method for data assimilation An improved method on neuronal electrophysiological data
|
We present a convolution-based data assimilation method tailored to neuronal electrophysiology, addressing the limitations of traditional value-based synchronization approaches. While conventional methods rely on nudging terms and pointwise deviation metrics, they often fail to account for spike timing precision, a key feature in neural signals. Our approach applies a Gaussian convolution to both measured data and model estimates, enabling a cost function that evaluates both amplitude and timing alignment via spike overlap. This formulation remains compatible with gradient-based optimization. Through twin experiments and real hippocampal neuron recordings, we demonstrate improved parameter estimation and prediction quality, particularly in capturing sharp, time-sensitive dynamics.
| null |
https://arxiv.org/abs/2506.11365v1
|
https://arxiv.org/pdf/2506.11365v1.pdf
| null |
[
"Dawei Li",
"Henry D. I. Abarbanel"
] |
[
"parameter estimation"
] | 2025-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
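The core idea of the Li & Abarbanel abstract above translates into a few lines: convolve both the measured and the modeled spike trains with a Gaussian kernel before computing the cost, so that small timing offsets incur a graded penalty instead of a total mismatch. The spike times, sampling step, and kernel width here are assumptions for demonstration, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dt = 0.1                          # ms per sample (assumed)
t = np.arange(0, 100, dt)

def spike_train(spike_times_ms):
    s = np.zeros_like(t)
    s[(np.asarray(spike_times_ms) / dt).astype(int)] = 1.0
    return s

measured = spike_train([20.0, 50.0, 80.0])
model    = spike_train([21.0, 49.0, 82.0])   # spikes slightly mistimed

def conv_cost(x, y, sigma_ms=2.0):
    """Squared error between Gaussian-smoothed trains: rewards overlap."""
    k = sigma_ms / dt                         # kernel width in samples
    return np.mean((gaussian_filter1d(x, k) - gaussian_filter1d(y, k)) ** 2)

# The pointwise cost treats near-misses as total misses; the convolved
# cost decreases smoothly as spike timing improves, so it remains
# informative for gradient-based optimization.
print("pointwise cost :", np.mean((measured - model) ** 2))
print("convolved cost :", conv_cost(measured, model))
```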
https://paperswithcode.com/paper/wanting-to-be-understood-explains-the-meta
|
2506.12086
| null | null |
Wanting to Be Understood Explains the Meta-Problem of Consciousness
|
Because we are highly motivated to be understood, we created public external representations -- mime, language, art -- to externalise our inner states. We argue that such external representations are a pre-condition for access consciousness, the global availability of information for reasoning. Yet the bandwidth of access consciousness is tiny compared with the richness of `raw experience', so no external representation can reproduce that richness in full. Ordinarily an explanation of experience need only let an audience `grasp' the relevant pattern, not relive the phenomenon. But our drive to be understood is so strong, and our low-level sensorimotor capacities for `grasping' so rich, that no demand for an explanation of the feel of experience can ever be fully ``satisfactory''. That inflated epistemic demand (the preeminence of our expectation that we could be perfectly understood by another or ourselves) -- rather than an irreducible metaphysical gulf -- keeps the hard problem of consciousness alive. But on the plus side, it seems we will simply never give up creating new ways to communicate and think about our experiences. In this view, to be consciously aware is to strive to have one's agency understood by oneself and others.
| null |
https://arxiv.org/abs/2506.12086v1
|
https://arxiv.org/pdf/2506.12086v1.pdf
| null |
[
"Chrisantha Fernando",
"Dylan Banarse",
"Simon Osindero"
] |
[] | 2025-06-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "We propose to theoretically and empirically examine the effect of incorporating weighting schemes into walk-aggregating GNNs. To this end, we propose a simple, interpretable, and end-to-end supervised GNN model, called AWARE (Attentive Walk-Aggregating GRaph Neural NEtwork), for graph-level prediction. AWARE aggregates the walk information by means of weighting schemes at distinct levels (vertex-, walk-, and graph-level) in a principled manner. By virtue of the incorporated weighting schemes at these different levels, AWARE can emphasize the information important for prediction while diminishing the irrelevant ones—leading to representations that can improve learning performance.",
"full_name": "Attentive Walk-Aggregating Graph Neural Network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "AWARE",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/sparse-autoencoders-bridge-the-deep-learning
|
2506.11123
| null | null |
Sparse Autoencoders Bridge The Deep Learning Model and The Brain
|
We present SAE-BrainMap, a novel framework that directly aligns deep learning visual model representations with voxel-level fMRI responses using sparse autoencoders (SAEs). First, we train layer-wise SAEs on model activations and compute the correlations between SAE unit activations and cortical fMRI signals elicited by the same natural image stimuli with cosine similarity, revealing strong activation correspondence (maximum similarity up to 0.76). Based on this alignment, we construct a voxel dictionary by optimally assigning the most similar SAE feature to each voxel, demonstrating that SAE units preserve the functional structure of predefined regions of interest (ROIs) and exhibit ROI-consistent selectivity. Finally, we establish a fine-grained hierarchical mapping between model layers and the human ventral visual pathway and, by projecting voxel dictionary activations onto individual cortical surfaces, visualize the dynamic transformation of visual information in deep learning models. It is found that ViT-B/16$_{CLIP}$ tends to utilize low-level information to generate high-level semantic information in the early layers and to reconstruct low-dimensional information in later layers. Our results establish a direct, downstream-task-free bridge between deep neural networks and human visual cortex, offering new insights into model interpretability.
| null |
https://arxiv.org/abs/2506.11123v1
|
https://arxiv.org/pdf/2506.11123v1.pdf
| null |
[
"Ziming Mao",
"Jia Xu",
"Zeqi Zheng",
"Haofang Zheng",
"Dabing Sheng",
"Yaochu Jin",
"Guoyuan Yang"
] |
[
"Deep Learning"
] | 2025-06-10T00:00:00 | null | null | null | null |
[] |
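A sketch of the voxel-dictionary construction step from the Mao et al. abstract above: correlate SAE unit activations with voxel responses across shared stimuli using cosine similarity, then assign each voxel its best-matching unit. Shapes and data are synthetic placeholders; the real pipeline first trains layer-wise SAEs on model activations, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_stim, n_units, n_vox = 500, 256, 1000
sae_acts = rng.random((n_stim, n_units))     # SAE unit activations per image
fmri = rng.standard_normal((n_stim, n_vox))  # voxel responses per image

def cosine_matrix(A, B):
    """Cosine similarity between columns of A and columns of B."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    B = B / np.linalg.norm(B, axis=0, keepdims=True)
    return A.T @ B                            # units x voxels

sim = cosine_matrix(sae_acts, fmri)
voxel_dictionary = sim.argmax(axis=0)         # best-matching SAE unit per voxel
best_sim = sim.max(axis=0)
print("strongest unit-voxel similarity:", best_sim.max())
# The abstract reports maxima up to ~0.76 on real activations and fMRI.
```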
https://paperswithcode.com/paper/automatic-depression-assessment-using-machine
|
2506.18915
| null | null |
Automatic Depression Assessment using Machine Learning: A Comprehensive Survey
|
Depression is a common mental illness across current human society. Traditional depression assessment, which relies on inventories and interviews with psychologists, frequently suffers from subjective diagnosis results, a slow and expensive diagnosis process, and a lack of human resources. Since there is solid evidence that depression is reflected by various human internal brain activities and external expressive behaviours, both early traditional machine learning (ML) and advanced deep learning (DL) models have been widely explored for human behaviour-based automatic depression assessment (ADA) since 2012. However, recent ADA surveys typically only focus on a limited number of human behaviour modalities. Despite being used as a theoretical basis for developing ADA approaches, existing ADA surveys lack a comprehensive review and summary of multi-modal depression-related human behaviours. To bridge this gap, this paper specifically summarises depression-related human behaviours across a range of modalities (e.g. the human brain, verbal language and non-verbal audio/facial/body behaviours). We focus on conducting an up-to-date and comprehensive survey of ML-based ADA approaches for learning depression cues from these behaviours, as well as discussing and comparing their distinctive features and limitations. In addition, we also review existing ADA competitions and datasets, and identify and discuss the main challenges and opportunities to provide further research directions for future ADA researchers.
| null |
https://arxiv.org/abs/2506.18915v1
|
https://arxiv.org/pdf/2506.18915v1.pdf
| null |
[
"Siyang Song",
"Yupeng Huo",
"Shiqing Tang",
"Jiaee Cheong",
"Rui Gao",
"Michel Valstar",
"Hatice Gunes"
] |
[
"Survey"
] | 2025-06-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Adaptive Discriminator Augmentation",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Data Augmentation** refers to a class of methods that augment an image dataset to increase the effective size of the training set, or as a form of regularization to help the network learn more effective representations.",
"name": "Image Data Augmentation",
"parent": null
},
"name": "ADA",
"source_title": "Training Generative Adversarial Networks with Limited Data",
"source_url": "https://arxiv.org/abs/2006.06676v2"
}
] |
https://paperswithcode.com/paper/towards-unified-neural-decoding-with-brain
|
2506.12055
| null | null |
Towards Unified Neural Decoding with Brain Functional Network Modeling
|
Recent achievements in implantable brain-computer interfaces (iBCIs) have demonstrated the potential to decode cognitive and motor behaviors with intracranial brain recordings; however, individual physiological and electrode implantation heterogeneities have constrained current approaches to neural decoding within single individuals, rendering interindividual neural decoding elusive. Here, we present Multi-individual Brain Region-Aggregated Network (MIBRAIN), a neural decoding framework that constructs a whole functional brain network model by integrating intracranial neurophysiological recordings across multiple individuals. MIBRAIN leverages self-supervised learning to derive generalized neural prototypes and supports group-level analysis of brain-region interactions and inter-subject neural synchrony. To validate our framework, we recorded stereoelectroencephalography (sEEG) signals from a cohort of individuals performing Mandarin syllable articulation. Both real-time online and offline decoding experiments demonstrated significant improvements in both audible and silent articulation decoding, enhanced decoding accuracy with increased multi-subject data integration, and effective generalization to unseen subjects. Furthermore, neural predictions for regions without direct electrode coverage were validated against authentic neural data. Overall, this framework paves the way for robust neural decoding across individuals and offers insights for practical clinical applications.
| null |
https://arxiv.org/abs/2506.12055v1
|
https://arxiv.org/pdf/2506.12055v1.pdf
| null |
[
"Di wu",
"Linghao Bu",
"Yifei Jia",
"Lu Cao",
"Siyuan Li",
"Siyu Chen",
"Yueqian Zhou",
"Sheng Fan",
"Wenjie Ren",
"Dengchang Wu",
"Kang Wang",
"Yue Zhang",
"Yuehui Ma",
"Jie Yang",
"Mohamad Sawan"
] |
[
"Data Integration",
"Self-Supervised Learning"
] | 2025-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/decoding-cortical-microcircuits-a-generative
|
2506.11062
| null | null |
Decoding Cortical Microcircuits: A Generative Model for Latent Space Exploration and Controlled Synthesis
|
A central idea in understanding brains and building artificial intelligence is that structure determines function. Yet, how the brain's complex structure arises from a limited set of genetic instructions remains a key question. The ultra-high-dimensional detail of neural connections vastly exceeds the information storage capacity of genes, suggesting a compact, low-dimensional blueprint must guide brain development. Our motivation is to uncover this blueprint. We introduce a generative model to learn this underlying representation from detailed connectivity maps of mouse cortical microcircuits. Our model successfully captures the essential structural information of these circuits in a compressed latent space. We found that specific, interpretable directions within this space directly relate to understandable network properties. Building on this, we demonstrate a novel method to controllably generate new, synthetic microcircuits with desired structural features by navigating this latent space. This work offers a new way to investigate the design principles of neural circuits and explore how structure gives rise to function, potentially informing the development of more advanced artificial neural networks.
| null |
https://arxiv.org/abs/2506.11062v1
|
https://arxiv.org/pdf/2506.11062v1.pdf
| null |
[
"Xingyu Liu",
"Yubin Li",
"Guozhang Chen"
] |
[] | 2025-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
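The Liu et al. abstract above does not specify the generative model, so as a minimal stand-in the sketch below uses PCA over flattened connectivity matrices to illustrate the two claims: an interpretable latent direction tracks a network property (here density, an assumed example), and moving along that direction yields controlled synthesis.

```python
import numpy as np

rng = np.random.default_rng(4)
n_circ, n_neur = 300, 30

# Synthetic "microcircuits": random graphs whose density varies per circuit.
densities = rng.uniform(0.05, 0.4, n_circ)
circuits = rng.random((n_circ, n_neur, n_neur)) < densities[:, None, None]
X = circuits.reshape(n_circ, -1).astype(float)

# Latent space via truncated SVD (PCA stand-in for the generative model).
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = U[:, :5] * S[:5]                      # 5-d latent codes

# Which latent direction tracks density, an interpretable property?
corrs = [np.corrcoef(Z[:, k], densities)[0, 1] for k in range(5)]
k = int(np.argmax(np.abs(corrs)))
print(f"latent dim {k} correlates with density: r = {corrs[k]:.2f}")

# Controlled synthesis: move along that direction and decode a new circuit.
z = Z.mean(0).copy()
z[k] += 2 * np.sign(corrs[k]) * Z[:, k].std()   # push toward denser circuits
decoded = (z @ Vt[:5] + X.mean(0)).reshape(n_neur, n_neur)
print("synthesised density:", (decoded > 0.5).mean())
```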
https://paperswithcode.com/paper/natural-intelligence-the-information
|
2506.16478
| null | null |
Natural Intelligence: the information processing power of life
|
Merely by existing, all physical systems contain information, and physical dynamics transforms and processes that information. This note investigates the information processing power of living systems. Living systems harvest free energy from the sun, from geothermal sources, and from each other. They then use that free energy to drive the complex set of chemical interactions that underlie life. All molecules -- be they simple molecules such as water, or complex molecules such as DNA -- register information via their chemical composition. When these molecules undergo chemical reactions, that information is transformed and processed. These chemical transformations can be thought of as elementary logical operations: such bio-ops include the absorption of a photon in a chromophore during photosynthesis, the formation or breaking of covalent, hydrogen, and van der Waals bonds in the process of metabolism and reproduction, or the release of a neurotransmitter molecule when a synapse fires in the brain. This paper estimates the total number of bio-ops that have been, and are being performed, by life on earth. We find that the current number of bio-ops performed by all life on earth is around $10^{33}-10^{35}$ bio-ops per second. The cells in an individual human being perform around $10^{20}-10^{22}$ bio-ops per second, comparable to the information processing power of all the computers, cell phones, and server farms on earth. Depending on how one defines a neural operation, at most a few percent of human bio-ops take place in the firing of neurons and synapses in the brain. Over the course of life on earth, about $10^{50}-10^{52}$ bio-ops have taken place.
| null |
https://arxiv.org/abs/2506.16478v1
|
https://arxiv.org/pdf/2506.16478v1.pdf
| null |
[
"Seth Lloyd",
"Michele Reilly"
] |
[] | 2025-06-19T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
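The Lloyd & Reilly estimates above can be sanity-checked with back-of-envelope arithmetic. Every rate and count below is a rough assumption chosen to reproduce the quoted orders of magnitude, not a figure taken from the paper.

```python
SECONDS_PER_YEAR = 3.15e7

# One human: ~3e13 cells, each performing an assumed 1e7-1e9 elementary
# chemical operations per second, recovers the quoted 1e20-1e22 bio-ops/s.
human_low, human_high = 3e13 * 1e7, 3e13 * 1e9
print(f"one human: {human_low:.0e} - {human_high:.0e} bio-ops/s")

# All life: ~1e30 cells, mostly slow or dormant microbes, so a much lower
# assumed average rate (~1e3-1e5 ops per cell per second) recovers 1e33-1e35.
earth_low, earth_high = 1e30 * 1e3, 1e30 * 1e5
print(f"all life : {earth_low:.0e} - {earth_high:.0e} bio-ops/s")

# Integrated over ~3.5 billion years of life on Earth: ~1e50-1e52 bio-ops.
age_s = 3.5e9 * SECONDS_PER_YEAR
print(f"lifetime total: {earth_low * age_s:.0e} - {earth_high * age_s:.0e}")
```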
https://paperswithcode.com/paper/geo-somatic-resonance-theory-a-vibrational
|
2506.14760
| null | null |
Geo-Somatic Resonance Theory: A Vibrational Framework for Sleep as Planetary Entrainment
|
Sleep is commonly studied through neurochemical, evolutionary, and behavioral frameworks, typically emphasizing circadian rhythms and energy conservation. However, these approaches do not fully explain a deeper biophysical question: why does sleep universally involve physical stillness, a lying posture, and disconnection from conscious control? This paper introduces a new hypothesis that sleep is not merely a biological function, but a state of vibrational synchronization between the human body and natural frequencies generated by the Earth. In this state, the body reduces its autonomous activity and aligns with external environmental rhythms, allowing for energy restoration, internal recalibration, and systemic reorganization. This perspective reframes life as a continuous process of internally driven vibration influenced by external physical fields. The proposed model offers new avenues for understanding aging, health, death, and consciousness.
| null |
https://arxiv.org/abs/2506.14760v1
|
https://arxiv.org/pdf/2506.14760v1.pdf
| null |
[
"Brathikan Vijayamohan Mankayarkarasi"
] |
[] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-elixir-scoping-review-on-domain-specific
|
2506.14508
| null | null |
An ELIXIR scoping review on domain-specific evaluation metrics for synthetic data in life sciences
|
Synthetic data has emerged as a powerful resource in life sciences, offering solutions for data scarcity, privacy protection and accessibility constraints. By creating artificial datasets that mirror the characteristics of real data, it allows researchers to develop and validate computational methods in controlled environments. Despite its promise, the adoption of synthetic data in Life Sciences hinges on rigorous evaluation metrics designed to assess its fidelity and reliability. To explore the current landscape of synthetic data evaluation metrics in several Life Sciences domains, the ELIXIR Machine Learning Focus Group performed a systematic review of the scientific literature following the PRISMA guidelines. Six critical domains were examined to identify current practices for assessing synthetic data. Findings reveal that, while generation methods are rapidly evolving, systematic evaluation is often overlooked, limiting researchers' ability to compare, validate, and trust synthetic datasets across different domains. This systematic review underscores the urgent need for robust, standardized evaluation approaches that not only bolster confidence in synthetic data but also guide its effective and responsible implementation. By laying the groundwork for establishing domain-specific yet interoperable standards, this scoping review paves the way for future initiatives aimed at enhancing the role of synthetic data in scientific discovery, clinical practice and beyond.
| null |
https://arxiv.org/abs/2506.14508v1
|
https://arxiv.org/pdf/2506.14508v1.pdf
| null |
[
"Styliani-Christina Fragkouli",
"Somya Iqbal",
"Lisa Crossman",
"Barbara Gravel",
"Nagat Masued",
"Mark Onders",
"Devesh Haseja",
"Alex Stikkelman",
"Alfonso Valencia",
"Tom Lenaerts",
"Fotis Psomopoulos",
"Pilib Ó Broin",
"Núria Queralt-Rosinach",
"Davide Cirillo"
] |
[
"scientific discovery",
"Synthetic Data Evaluation"
] | 2025-06-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/scaling-the-human-niche
|
2506.20902
| null | null |
Scaling the human niche
|
The human niche represents the intersection of biological, ecological, cultural, and technological processes that have co-evolved to shape human adaptation and societal complexity. This paper explores the human niche through the lens of macroecological scaling theory, seeking to define and quantify the dimensions along which human ecological strategies have diversified. By leveraging concepts from classic niche theory, niche construction, and complex adaptive systems, I develop a framework for understanding human ecology as both predictable within mammalian scaling relationships and uniquely divergent due to social, cognitive, and technological factors. Key dimensions of the human niche-metabolism, cognition, sociality, and computation-are examined through scaling laws that structure human interactions with the environment and each other. The paper demonstrates how human niche expansion over evolutionary time has been characterized by increasing metabolic consumption, information processing capacity, and the formation of larger, more interconnected social networks. This cumulative trajectory has led to the unprecedented scale of contemporary human societies, with implications for sustainability, economic development, and future niche expansion, including into space. The study underscores the need for an integrative, quantitative approach to human ecology that situates human adaptability within broader ecological and evolutionary constraints.
| null |
https://arxiv.org/abs/2506.20902v1
|
https://arxiv.org/pdf/2506.20902v1.pdf
| null |
[
"Marcus J. Hamilton"
] |
[] | 2025-06-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/balancing-the-cellular-budget-lessons-in
|
2506.20776
| null | null |
Balancing the cellular budget: lessons in metabolism from microbes to cancer
|
Cancer cells are often seen to prefer glycolytic metabolism over oxidative phosphorylation even in the presence of oxygen -- a phenomenon termed the Warburg effect. Despite significant strides in the decades since its discovery, a clear basis is yet to be established for the Warburg effect and why cancer cells show such a preference for aerobic glycolysis. In this review, we draw on what is known about similar metabolic shifts both in normal mammalian physiology and overflow metabolism in microbes to shed new light on whether aerobic glycolysis in cancer represents some form of optimisation of cellular metabolism. From microbes to cancer, we find that metabolic shifts favouring glycolysis are sometimes driven by the need for faster growth, but the growth rate is by no means a universal goal of optimal metabolism. Instead, optimisation goals at the cellular level are often multi-faceted and any given metabolic state must be considered in the context of both its energetic costs and benefits over a range of environmental contexts. For this purpose, we identify the conceptual framework of resource allocation as a potential testbed for the investigation of the cost-benefit balance of cellular metabolic strategies. Such a framework is also readily integrated with dynamical systems modelling, making it a promising avenue for new answers to the age-old question of why cells, from cancers to microbes, choose the metabolic strategies they do.
| null |
https://arxiv.org/abs/2506.20776v1
|
https://arxiv.org/pdf/2506.20776v1.pdf
| null |
[
"B. Vibishan",
"Mohit Kumar Jolly",
"Akshit Goyal"
] |
[] | 2025-06-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/training-flexible-models-of-genetic-variant
|
2506.19598
| null | null |
Training Flexible Models of Genetic Variant Effects from Functional Annotations using Accelerated Linear Algebra
|
To understand how genetic variants in human genomes manifest in phenotypes -- traits like height or diseases like asthma -- geneticists have sequenced and measured hundreds of thousands of individuals. Geneticists use this data to build models that predict how a genetic variant impacts phenotype given genomic features of the variant, like DNA accessibility or the presence of nearby DNA-bound proteins. As more data and features become available, one might expect predictive models to improve. Unfortunately, training these models is bottlenecked by the need to solve expensive linear algebra problems because variants in the genome are correlated with nearby variants, requiring inversion of large matrices. Previous methods have therefore been restricted to fitting small models, and fitting simplified summary statistics, rather than the full likelihood of the statistical model. In this paper, we leverage modern fast linear algebra techniques to develop DeepWAS (Deep genome Wide Association Studies), a method to train large and flexible neural network predictive models to optimize likelihood. Notably, we find that larger models only improve performance when using our full likelihood approach; when trained by fitting traditional summary statistics, larger models perform no better than small ones. We find larger models trained on more features make better predictions, potentially improving disease predictions and therapeutic target identification.
|
Notably, we find that larger models only improve performance when using our full likelihood approach; when trained by fitting traditional summary statistics, larger models perform no better than small ones.
|
https://arxiv.org/abs/2506.19598v1
|
https://arxiv.org/pdf/2506.19598v1.pdf
| null |
[
"Alan N. Amin",
"Andres Potapczynski",
"Andrew Gordon Wilson"
] |
[] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
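The linear-algebra bottleneck described in the Amin et al. abstract above is the solve against a large LD (linkage disequilibrium) matrix inside a Gaussian likelihood. The sketch below makes that solve explicit on a toy exponentially-decaying LD structure; the variance model and LD form are assumptions, and DeepWAS itself replaces the scalar variance with a neural network over functional annotations plus fast structured solvers.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(5)
m = 500                                   # number of variants

# Toy LD matrix: correlation decaying with genomic distance.
idx = np.arange(m)
R = 0.9 ** np.abs(idx[:, None] - idx[None, :])

beta_hat = rng.multivariate_normal(np.zeros(m), R)  # observed effect sizes

def gaussian_loglik(beta_hat, R, sigma2):
    """log N(beta_hat | 0, sigma2 * R); the expensive solve made explicit."""
    c, low = cho_factor(sigma2 * R)
    quad = beta_hat @ cho_solve((c, low), beta_hat)
    logdet = 2 * np.sum(np.log(np.diag(c)))
    return -0.5 * (quad + logdet + m * np.log(2 * np.pi))

# In the full method, a network f_theta(annotations) would supply
# per-variant variances in place of this single scalar sigma2.
print("log-likelihood at sigma2=1.0:", gaussian_loglik(beta_hat, R, 1.0))
```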
https://paperswithcode.com/paper/rethinking-ecological-measures-of-functional
|
2506.17839
| null | null |
Rethinking Ecological Measures Of Functional Diversity
|
Understanding functional diversity, the range and variability of species' roles and actions within their communities, is key to predicting and preserving the functions that sustain both nature and human well-being. In this paper, we provide a comprehensive review of the literature on functional diversity measurement. We begin by consolidating essential criteria that effective measures of functional diversity should meet. We then evaluate fifteen widely used functional diversity metrics against these criteria and assess their performance across six synthetic ecosystem scenarios where optimal behavior is known. Surprisingly, our analysis reveals that none of the widely used metrics fully satisfy all the established requirements, and all fail in at least one ecosystem scenario. In particular, we find that almost all metrics flagrantly violate set monotonicity and distance monotonicity, requirements that adding a novel species should increase diversity, and that the magnitude of that increase should grow with trait dissimilarity. We also find that metrics fail to decline when rare, functionally extreme species are lost, and even increase when a perfectly redundant species is added. These critical flaws leave them blind to the very biodiversity loss that functional diversity measures are intended to detect. Our findings underscore the urgent need to develop a new generation of functional diversity metrics that more accurately reflect ecological realities.
| null |
https://arxiv.org/abs/2506.17839v1
|
https://arxiv.org/pdf/2506.17839v1.pdf
| null |
[
"Ines Meraoumia",
"Adji Bousso Dieng"
] |
[
"Diversity"
] | 2025-06-21T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
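The monotonicity violations reported in the Meraoumia & Dieng abstract above are easy to reproduce on toy data: below, adding a perfectly redundant species increases FAD (the sum of pairwise trait distances) and shifts Rao's quadratic entropy, even though no new function was added. Trait values are made up for the example.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def fad(traits):
    """Functional attribute diversity: sum of pairwise trait distances."""
    return pdist(traits).sum()

def rao_q(traits):
    """Rao's quadratic entropy with equal species abundances."""
    S = len(traits)
    return squareform(pdist(traits)).sum() / S ** 2

community = np.array([[0.0], [1.0], [2.0]])   # three distinct species
redundant = np.vstack([community, [[2.0]]])   # add an exact duplicate

print(f"FAD  : {fad(community):.2f} -> {fad(redundant):.2f}")
print(f"RaoQ : {rao_q(community):.3f} -> {rao_q(redundant):.3f}")
# FAD rises for a species that adds no new function, and Rao's Q shifts,
# illustrating the redundancy/monotonicity failures the review documents.
```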
https://paperswithcode.com/paper/modeling-and-inferring-metacommunity-dynamics
|
2506.17495
| null | null |
Modeling and Inferring Metacommunity Dynamics with Maximum Caliber
|
A major challenge for community ecology is to use distribution patterns to infer basic parameters of dynamical models without conducting laborious experimental manipulations. We present a novel framework drawn from statistical physics -- Maximum Caliber -- for characterizing the temporal dynamics of complex ecological systems in spatially extended landscapes and inferring parameters from temporal data. As an extension of Maximum Entropy modeling, Maximum Caliber models the probability of possible trajectories of a stochastic system, rather than focusing on system states. We demonstrate the ability of the Maximum Caliber framework to capture ecological processes ranging from near- to far- from-equilibrium, using an array of species interaction motifs including random interactions, apparent competition, intraguild competition, and non-transitive competition, along with dispersal among multiple patches. For spatio-temporal data of species occurrence in a metacommunity, the parameters of a Maximum Caliber model can be estimated through a simple logistic regression to reveal migration rates between patches, magnitudes of interactions between species, and effects of intrinsic local environmental suitabilities. We test the accuracy of the method over a range of system sizes and time periods, and find that these parameters can be estimated without bias. We introduce entropy production as a system-level measure of disequilibrium, and use ``pseudo-$R^2$'' to characterize the predictability of the system. We show that our model can predict the dynamics of metacommunities much better than steady state models, when the system is far from equilibrium. The capacity to estimate basic parameters of dynamical metacommunity models from spatio-temporal data represents an important breakthrough for the study of metacommunities with application to practical problems in conservation and restoration ecology.
| null |
https://arxiv.org/abs/2506.17495v1
|
https://arxiv.org/pdf/2506.17495v1.pdf
| null |
[
"Zachary Jackson",
"Mathew A. Leibold",
"Robert D. Holt",
"BingKan Xue"
] |
[] | 2025-06-20T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.\r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression)\r\n\r\nImage: [Michaelg2015](https://commons.wikimedia.org/wiki/User:Michaelg2015)",
"full_name": "Logistic Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Logistic Regression",
"source_title": null,
"source_url": null
}
] |
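The Jackson et al. abstract above notes that Maximum Caliber parameters can be estimated "through a simple logistic regression" on spatio-temporal occupancy data. The toy two-patch, two-species simulation below illustrates that recovery; the ground-truth coefficients and the externally-driven covariates are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
T = 5000
# state[t] = (species A here, species A in neighbour patch, species B here)
state = np.zeros((T, 3), dtype=int)
for t in range(T - 1):
    a, a_nb, b = state[t]
    # assumed ground truth: self-persistence +2, migration +1, competition -1
    logit = -0.5 + 2.0 * a + 1.0 * a_nb - 1.0 * b
    state[t + 1, 0] = rng.random() < 1 / (1 + np.exp(-logit))
    state[t + 1, 1] = rng.random() < 0.4   # neighbour patch, external here
    state[t + 1, 2] = rng.random() < 0.5   # species B, external here

# Regress next-step presence on the current state: the fitted coefficients
# are the (stand-in) Maximum Caliber interaction and migration parameters.
X, y = state[:-1], state[1:, 0]
model = LogisticRegression().fit(X, y)
print("recovered [self, migration, interaction]:", model.coef_.round(2))
print("true      [self, migration, interaction]: [ 2.   1.  -1. ]")
```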
https://paperswithcode.com/paper/trilobite-tridents-hydrodynamic-lift-and
|
2506.15922
| null | null |
Trilobite tridents: hydrodynamic lift and stability mechanisms for queue formation
|
The bizarre trident-like cephalic projections of Walliserops trifurcatus have previously been interpreted as sexually selected weapons for intraspecific combat. We propose an alternative hypothesis grounded in biomechanics and collective behavior: that tridents evolved as adaptations for hydrodynamic lift and queue stability, conferring energetic advantages during group locomotion. Under this hypothesis, lift could offset gravitational forces, enabling greater locomotor efficiency, while mechanically linked formations, where tridents rested on the pygidia of leading individuals, enhanced pitch and roll stability and minimized costly accelerations and collisions. These formations also facilitated hydrodynamic drafting, allowing weaker individuals to conserve energy and remain integrated within the group. The trident's structure, though inefficient for solitary lift or combat, functioned effectively in cooperative formations, suggesting that its original selective advantage lay not in individual performance but in enhancing group cohesion and efficiency. Rather than emerging from competitive pressures alone, the trident may have arisen through selection for coordinated, cooperative movement -- potentially representing a precursor stage to traits later exapted for sexual selection.
| null |
https://arxiv.org/abs/2506.15922v1
|
https://arxiv.org/pdf/2506.15922v1.pdf
| null |
[
"Hugh A. Trenchard",
"Carlton E. Brett",
"Matjaz Perc"
] |
[] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/eukaryotic-ancestry-in-a-finite-world
|
2506.15764
| null | null |
Eukaryotic ancestry in a finite world
|
Following genetic ancestry in eukaryote populations poses several open problems due to sexual reproduction and recombination. The history of extant genetic material is usually modeled backwards in time, but tracking chromosomes at a large scale is not trivial, as successive recombination events break them into several segments. For this reason, the behavior of the distribution of genetic segments across the ancestral population is not fully understood. Moreover, as individuals transmit only half of their genetic content to their offspring, after a few generations it is possible that ghosts arise, that is, genealogical ancestors that transmit no genetic material to any individual. While several theoretical predictions exist to estimate properties of ancestral segments or ghosts, most of them rely on simplifying assumptions such as an infinite population size or an infinite chromosome length. It is not clear how well these results hold in a finite universe, and current simulators either make other approximations or cannot handle the scale required to answer these questions. In this work, we use an exact back-in-time simulator of large diploid populations experiencing recombination that tracks genealogical and genetic ancestry, without approximations. We focus on the distinction between genealogical and genetic ancestry and, additionally, we explore the effects of genome structure on ancestral segment distribution and the proportion of genetic ancestors. Our study reveals that some of the theoretical predictions hold well in practice, but in several cases it exposes discrepancies between predictions that assume infinite parameters and empirical results in finite populations, emphasizing the need for cautious application of mathematical models in biological contexts.
| null |
https://arxiv.org/abs/2506.15764v1
|
https://arxiv.org/pdf/2506.15764v1.pdf
| null |
[
"Juliette Luiselli",
"Manuel Lafond"
] |
[] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/modeling-transmission-dynamics-of
|
2506.19869
| null | null |
Modeling Transmission Dynamics of Tuberculosis: Parameter Estimation and Sensitivity Analysis Using Real-World Data
|
Tuberculosis (TB) continues to pose a major public health challenge, particularly in high-burden regions such as Ethiopia, necessitating a more profound understanding of its transmission dynamics. In this study, we developed an SVEITRS compartmental model to investigate the transmission dynamics of TB, utilizing real data from Ethiopia covering 2011-2021. Model parameters were estimated via two methods: nonlinear least squares and maximum likelihood, with maximum likelihood providing more accurate and reliable results, as confirmed by a test case. The model's stability analysis indicated that there is a disease-free equilibrium in areas where the basic reproduction number ($\mathscr{R}_0$) is less than one. The results suggest that optimal conditions could lead to the elimination of TB. On the other hand, there is an endemic equilibrium in areas where $\mathscr{R}_0$ is greater than one, which means that the disease is still present. Sensitivity analysis revealed important factors affecting TB levels: higher natural death rates, vaccination rates, treatment rates, and disease-related death rates lower TB cases, whereas higher recruitment rates, contact rates, infection rates, and loss of vaccine protection increase its spread. These findings highlight the necessity of enhancing vaccination, treatment, and recovery strategies while addressing drivers of transmission to achieve TB control in Ethiopia. This study provides useful guidance for TB control efforts and public health interventions in Ethiopia and similar regions.
| null |
https://arxiv.org/abs/2506.19869v1
|
https://arxiv.org/pdf/2506.19869v1.pdf
| null |
[
"Moksina Seyid",
"Abdu Mohammed Seid",
"Yassin Tesfaw Abebe"
] |
[
"parameter estimation",
"Sensitivity"
] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
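A minimal sketch of the calibration workflow in the Seyid et al. abstract above, using a simplified SEIR stand-in (an assumption, not the paper's seven-compartment SVEITRS model): integrate the ODEs, fit the rates to noisy case counts by nonlinear least squares, and read off $\mathscr{R}_0$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def seir(t, y, beta, sigma, gamma):
    """SEIR right-hand side with frequency-dependent transmission."""
    S, E, I, R = y
    N = y.sum()
    return [-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

def infectious_curve(params, t_eval, y0):
    beta, sigma, gamma = params
    sol = solve_ivp(seir, (t_eval[0], t_eval[-1]), y0,
                    t_eval=t_eval, args=(beta, sigma, gamma))
    return sol.y[2]                       # infectious compartment over time

t = np.arange(0, 11)                      # yearly observations, e.g. 2011-2021
y0 = [9.9e5, 5e3, 5e3, 0.0]
true = infectious_curve([0.6, 0.3, 0.2], t, y0)
data = true * (1 + 0.05 * np.random.default_rng(7).standard_normal(t.size))

fit = least_squares(lambda p: infectious_curve(p, t, y0) - data,
                    x0=[0.4, 0.2, 0.1], bounds=(0, 2))
beta, sigma, gamma = fit.x
print("estimated (beta, sigma, gamma):", fit.x.round(3))
print("basic reproduction number R0 = beta/gamma =", round(beta / gamma, 2))
```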
https://paperswithcode.com/paper/understanding-the-rift-between-update-rules
|
2506.15528
| null | null |
Understanding the rift between update rules in Evolutionary Graph Theory: The intrinsic death rate drives star graphs from amplifying to suppressing natural selection
|
Evolutionary graph theory is the study of evolutionary dynamics in structured populations. A well-known problem in evolutionary graph theory is that the spread of a mutation (measured by fixation probability) is impacted by the graph type chosen and the update rule. For example, the star graph is an amplifier of natural selection under the birth-death with fitness on birth (Bd) update rule but a suppressor of natural selection under the death-birth with fitness on birth (dB) update rule. A continuous-time EGT model has been found to replicate Bd and dB results as special cases. Using this model, we show that changing the natural (intrinsic) death rate can cause a shift from Bd to dB dynamics. Assuming the mutant is advantageous, we show that if the natural death rate is greater than $\frac{1}{\sqrt{N}}$ the star is a suppressor, where $N$ is the number of nodes. As $N \longrightarrow \infty$, the natural death rate required to drive the star to a suppressor tends towards zero, so as the size of the graph increases, the star graph is likely to be a suppressor for any non-zero natural death rate.
| null |
https://arxiv.org/abs/2506.15528v1
|
https://arxiv.org/pdf/2506.15528v1.pdf
| null |
[
"Max Dew",
"Christopher E. Overton"
] |
[] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Transformer neural networks have achieved state-of-the-art results for unstructured data such as text and images but their adoption for graph-structured data has been limited. This is partly due to the difficulty of incorporating complex structural information in the basic transformer framework. We propose a simple yet powerful extension to the transformer - residual edge channels. The resultant framework, which we call Edge-augmented Graph Transformer (EGT), can directly accept, process and output structural information as well as node information. It allows us to use global self-attention, the key element of transformers, directly for graphs and comes with the benefit of long-range interaction among nodes. Moreover, the edge channels allow the structural information to evolve from layer to layer, and prediction tasks on edges/links can be performed directly from the output embeddings of these channels. In addition, we introduce a generalized positional encoding scheme for graphs based on Singular Value Decomposition which can improve the performance of EGT. Our framework, which relies on global node feature aggregation, achieves better performance compared to Convolutional/Message-Passing Graph Neural Networks, which rely on local feature aggregation within a neighborhood. We verify the performance of EGT in a supervised learning setting on a wide range of experiments on benchmark datasets. Our findings indicate that convolutional aggregation is not an essential inductive bias for graphs and global self-attention can serve as a flexible and adaptive alternative.",
"full_name": "Edge-augmented Graph Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "EGT",
"source_title": "Global Self-Attention as a Replacement for Graph Convolution",
"source_url": "https://arxiv.org/abs/2108.03348v3"
}
] |
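The record above states a concrete threshold: an advantageous mutant on a star graph is suppressed once the intrinsic death rate exceeds $1/\sqrt{N}$. The minimal sketch below simply tabulates that bound to show it shrinking toward zero as the graph grows.

```python
# Tabulate the stated suppression threshold 1/sqrt(N) for the star graph:
# as the number of nodes N grows, any fixed non-zero intrinsic death rate
# eventually exceeds the bound, so the star becomes a suppressor.
import math

for N in (10, 100, 1_000, 10_000, 1_000_000):
    print(f"N = {N:>9}: suppressor once death rate > {1 / math.sqrt(N):.4f}")
```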
https://paperswithcode.com/paper/assessing-the-impact-of-vaccination-on
|
2506.14536
| null | null |
Assessing the Impact of Vaccination on Rotavirus Transmission Dynamics Using Bayesian Inference
|
The introduction of the rotavirus vaccine in the United Kingdom (UK) in 2013 led to a noticeable decline in laboratory reports in subsequent years. To assess the impact of vaccination on rotavirus transmissibility we calibrated a stochastic compartmental epidemiological model using Sequential Monte Carlo (SMC) methods. Our analysis focuses on estimating the time-varying transmissibility parameter and documenting its evolution before and after vaccine rollout. We observe distinct periods of increasing and decreasing transmissibility, reflecting the dynamic response of rotavirus spread to immunization efforts. These findings improve our understanding of vaccination-driven shifts in disease transmission and provide a quantitative framework for evaluating long-term epidemiological trends.
| null |
https://arxiv.org/abs/2506.14536v1
|
https://arxiv.org/pdf/2506.14536v1.pdf
| null |
[
"Conor Rosato",
"Joshua Murphy",
"Simon Maskell",
"John Harris"
] |
[
"Bayesian Inference"
] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
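The rotavirus study above calibrates a stochastic model with Sequential Monte Carlo. As a minimal sketch of that machinery, the following bootstrap particle filter tracks a random-walk transmissibility parameter against synthetic case counts; the state and observation models here are illustrative assumptions, not the paper's epidemiological model.

```python
# Minimal bootstrap particle filter for a random-walk transmissibility parameter
# observed through noisy case counts (illustrative models, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
n_particles, T = 500, 52
cases = 50 + 10 * np.sin(np.linspace(0, 3, T)) + rng.normal(0, 3, T)  # fake data

beta = rng.normal(1.0, 0.2, n_particles)             # transmissibility particles
trajectory = []
for t in range(T):
    beta = beta + rng.normal(0, 0.05, n_particles)   # random-walk proposal
    expected = 50 * beta                             # toy observation mapping
    logw = -0.5 * ((cases[t] - expected) / 5.0) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    trajectory.append(float(np.sum(w * beta)))       # filtered mean estimate
    beta = beta[rng.choice(n_particles, n_particles, p=w)]  # resample
print(f"final transmissibility estimate: {trajectory[-1]:.3f}")
```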
https://paperswithcode.com/paper/improving-wastewater-based-epidemiology
|
2506.14331
| null | null |
Improving wastewater-based epidemiology through strategic placement of samplers
|
Wastewater-based epidemiology (WBE) is a fast emerging method for passively monitoring diseases in a population. By measuring the concentrations of pathogenic materials in wastewater, WBE negates demographic biases in clinical testing and healthcare demand, and may act as a leading indicator of disease incidence. For a WBE system to be effective, it should detect the presence of a new pathogen of concern early enough and with enough precision that it can still be localised and contained. In this study, then, we show how multiple wastewater sensors can be strategically placed across a wastewater system, to detect the presence of disease faster than if sampling was done at the wastewater treatment plant only. Our approach generalises to any tree-like network and takes into account the structure of the network and how the population is distributed over it. We show how placing sensors further upstream from the treatment plant improves detection sensitivity and can inform how an outbreak is evolving in different geographical regions. However, this improvement diminishes once individual-level shedding is modelled as highly dispersed. With overdispersed shedding, we show using real COVID-19 cases in Scotland that broad trends in disease incidence (i.e., whether the epidemic is in growth or decline) can still be reasonably estimated from the wastewater signal once incidence exceeds about 5 infections per day.
| null |
https://arxiv.org/abs/2506.14331v1
|
https://arxiv.org/pdf/2506.14331v1.pdf
| null |
[
"Anthony J Wood",
"Jessica Enright",
"Aeron R Sanchez",
"Ewan Colman",
"Rowland R Kao"
] |
[
"Epidemiology"
] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
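A toy calculation in the spirit of the wastewater record above: sampling upstream of the treatment plant sees a less diluted signal, so a small outbreak clears the detection limit sooner. All catchment sizes, shedding loads, and detection limits below are hypothetical.

```python
# Hypothetical dilution arithmetic: an outbreak of 5 shedders in catchment A is
# detectable at an upstream sampler but diluted below the limit at the plant.
populations = {"A": 2_000, "B": 5_000, "C": 3_000}   # people per leaf catchment
shedders = {"A": 5, "B": 0, "C": 0}                  # infections start in A
FLOW_PER_CAPITA = 150.0                              # litres/person/day (assumed)
SHED_LOAD = 1e9                                      # genome copies/shedder/day
DETECTION_LIMIT = 1e4                                # copies per litre (assumed)

def concentration(catchments):
    load = sum(shedders[c] * SHED_LOAD for c in catchments)
    flow = sum(populations[c] * FLOW_PER_CAPITA for c in catchments)
    return load / flow

for label, nodes in [("sampler at A", ["A"]), ("treatment plant", ["A", "B", "C"])]:
    conc = concentration(nodes)
    print(f"{label:>15}: {conc:8.0f} copies/L, detected={conc > DETECTION_LIMIT}")
```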
https://paperswithcode.com/paper/evaluating-pde-discovery-methods-for
|
2506.20694
| null | null |
Evaluating PDE discovery methods for multiscale modeling of biological signals
|
Biological systems are non-linear, include unobserved variables and the physical principles that govern their dynamics are partly unknown. This makes the characterization of their behavior very challenging. Notably, their activity occurs on multiple interdependent spatial and temporal scales that require linking mechanisms across scales. To address the challenge of bridging gaps between scales, we leverage partial differential equations (PDE) discovery. PDE discovery suggests meso-scale dynamics characteristics from micro-scale data. In this article, we present our framework combining particle-based simulations and PDE discovery and conduct preliminary experiments to assess equation discovery in controlled settings. We evaluate five state-of-the-art PDE discovery methods on particle-based simulations of calcium diffusion in astrocytes. The performances of the methods are evaluated on both the form of the discovered equation and the forecasted temporal variations of calcium concentration. Our results show that several methods accurately recover the diffusion term, highlighting the potential of PDE discovery for capturing macroscopic dynamics in biological systems from microscopic data.
| null |
https://arxiv.org/abs/2506.20694v1
|
https://arxiv.org/pdf/2506.20694v1.pdf
| null |
[
"Andréa Ducos",
"Audrey Denizot",
"Thomas Guyet",
"Hugues Berry"
] |
[
"Equation Discovery"
] | 2025-06-25T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
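The PDE-discovery record above evaluates whether methods recover a diffusion term from particle-derived data. As a minimal stand-in for that workflow, the sketch below simulates 1-D diffusion, builds a small candidate-term library, and recovers $u_t \approx D\,u_{xx}$ by thresholded least squares; none of the five benchmarked methods is reproduced here.

```python
# Simulate 1-D diffusion u_t = D u_xx, then recover the diffusion coefficient
# by thresholded least squares over a tiny candidate-term library (SINDy-style).
import numpy as np

D_true, nx, nt, dx, dt = 0.5, 100, 400, 0.1, 0.001
u = np.exp(-((np.arange(nx) * dx - 5.0) ** 2))        # Gaussian initial profile
snaps = [u.copy()]
for _ in range(nt):                                   # explicit finite differences
    u[1:-1] += dt * D_true * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    snaps.append(u.copy())
U = np.array(snaps)

u_t = (U[2:, 1:-1] - U[:-2, 1:-1]) / (2 * dt)         # centred derivatives
u_x = (U[1:-1, 2:] - U[1:-1, :-2]) / (2 * dx)
u_xx = (U[1:-1, 2:] - 2 * U[1:-1, 1:-1] + U[1:-1, :-2]) / dx**2
library = np.column_stack([c.ravel() for c in (U[1:-1, 1:-1], u_x, u_xx)])
coeffs, *_ = np.linalg.lstsq(library, u_t.ravel(), rcond=None)
coeffs[np.abs(coeffs) < 1e-3] = 0.0                   # sparsity via hard threshold
print(dict(zip(["u", "u_x", "u_xx"], np.round(coeffs, 4))))   # expect u_xx ~ 0.5
```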
https://paperswithcode.com/paper/integrating-pharmacokinetics-and
|
2506.20157
| null | null |
Integrating Pharmacokinetics and Pharmacodynamics Modeling with Quantum Regression for Predicting Herbal Compound Toxicity
|
Herbal compounds present complex toxicity profiles that are often influenced by both intrinsic chemical properties and pharmacokinetics (PK) governing absorption and clearance. In this study, we develop a quantum regression model to predict acute toxicity severity for herbal-derived compounds by integrating toxicity data from NICEATM with pharmacological features from TCMSP.
| null |
https://arxiv.org/abs/2506.20157v1
|
https://arxiv.org/pdf/2506.20157v1.pdf
| null |
[
"Don Roosan",
"Saif Nirzhor",
"Rubayat Khan"
] |
[
"regression"
] | 2025-06-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/tube-into-pearls-a-membrane-driven-pearling
|
2506.19966
| null | null |
Tube into pearls: A membrane-driven pearling instability shapes platelet biogenesis
|
At the end of the 19th century, Rayleigh and Plateau explained the physical principle behind the fragmentation of a liquid jet into regular droplets commonly observed in everyday life from a faucet. The classical Rayleigh-Plateau instability concerns liquid jets governed by inertia and surface tension, whereas biological tubes are membrane-bounded and inertia-free. We therefore refer to the process observed here as a pearling instability, formally analogous to Rayleigh-Plateau but dominated by membrane mechanics. Although pearling-type instabilities have long been recognised in lipid tubes and some biological systems, a clear physiological example remained elusive. Here, we present results showing that pearling instability occurs during the physiological process of platelet formation. Platelets are formed from megakaryocytes in the bone marrow by the extension of long protrusions, called proplatelets, that traverse the blood vessels. As they extend in the bloodstream, proplatelets become pearled and detach. Long and pearled proplatelets then circulate in the peripheral blood before their fragmentation into calibrated platelets. We propose that this pearling, by creating regular constrictions along the proplatelet, is key to the process of proplatelet fragmentation into individual platelets of calibrated size. Pearling instability thus acts as a mechanobiological regulator allowing local delivery of the right size platelets to the right place at the right time. Our observations quantitatively match parameter-free theoretical predictions for membrane pearling, supporting a unified physical picture.
| null |
https://arxiv.org/abs/2506.19966v1
|
https://arxiv.org/pdf/2506.19966v1.pdf
| null |
[
"C. Léon",
"N. Brassard-Jollive",
"D. Gonzalez-Rodriguez",
"D. Riveline"
] |
[] | 2025-06-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Given a pattern $P,$ that is more complicated than the patterns, we fragment $P$ into simpler patterns such that their exact count is known. In the subgraph GNN proposed earlier, look into the subgraph of the host graph. We have seen that this technique is scalable on large graphs. Also, we have seen that subgraph GNN is more expressive and efficient than traditional GNN. So, we tried to explore the expressibility when the pattern is fragmented into smaller subpatterns.",
"full_name": "Fragmentation",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Localization Models",
"parent": null
},
"name": "Fragmentation",
"source_title": "Improving Expressivity of Graph Neural Networks using Localization",
"source_url": "https://arxiv.org/abs/2305.19659v3"
}
] |
https://paperswithcode.com/paper/riemannian-generative-decoder
|
2506.19133
| null | null |
Riemannian generative decoder
|
Riemannian representation learning typically relies on approximating densities on chosen manifolds. This involves optimizing difficult objectives, potentially harming models. To completely circumvent this issue, we introduce the Riemannian generative decoder which finds manifold-valued maximum likelihood latents with a Riemannian optimizer while training a decoder network. By discarding the encoder, we vastly simplify the manifold constraint compared to current approaches which often only handle few specific manifolds. We validate our approach on three case studies -- a synthetic branching diffusion process, human migrations inferred from mitochondrial DNA, and cells undergoing a cell division cycle -- each showing that learned representations respect the prescribed geometry and capture intrinsic non-Euclidean structure. Our method requires only a decoder, is compatible with existing architectures, and yields interpretable latent spaces aligned with data geometry.
|
Riemannian representation learning typically relies on approximating densities on chosen manifolds.
|
https://arxiv.org/abs/2506.19133v1
|
https://arxiv.org/pdf/2506.19133v1.pdf
| null |
[
"Andreas Bjerregaard",
"Søren Hauberg",
"Anders Krogh"
] |
[
"Decoder",
"Representation Learning"
] | 2025-06-23T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/harnessing-diet-and-gene-expression-insights
|
2506.19093
| null | null |
Harnessing Diet and Gene Expression Insights through a Centralized Nutrigenomics Database to Improve Public Health
|
Nutrigenomics is an emerging field that explores the intricate interaction between genes and diet. This study aimed to develop a comprehensive database to help clinicians and patients understand the connections between genetic disorders, associated genes, and tailored nutritional recommendations.
| null |
https://arxiv.org/abs/2506.19093v1
|
https://arxiv.org/pdf/2506.19093v1.pdf
| null |
[
"Fahmida Hai",
"Shriya Samudrala",
"Ijeoma Ezengwa",
"Rubayat Khan",
"Saif Nirzhor",
"Don Roosan"
] |
[] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-analytical-neighborhood-enrichment-score
|
2506.18692
| null | null |
An Analytical Neighborhood Enrichment Score for Spatial Omics
|
The neighborhood enrichment test is used to quantify spatial enrichment and depletion between spatial points with categorical labels, which is a common data type in spatial omics. Traditionally, this test relies on a permutation-based Monte Carlo approach, which tends to be computationally expensive for large datasets. In this study, we present a modified version of the test that can be computed analytically. This analytical version showed a minimum Pearson correlation of 0.95 with the conventional Monte Carlo-based method across eight spatial omics datasets, but with substantial speed-ups. Additional experiments on a large Xenium dataset demonstrated the method's ability to efficiently analyze large-scale data, making it a valuable tool for analyzing spatial omics data.
| null |
https://arxiv.org/abs/2506.18692v1
|
https://arxiv.org/pdf/2506.18692v1.pdf
| null |
[
"Axel Andersson",
"Hanna Nyström"
] |
[] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
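For context on the record above, this is a minimal sketch of the conventional permutation-based neighborhood enrichment z-score that the paper replaces with an analytical expression; the analytical derivation itself is not shown, and the point pattern is synthetic.

```python
# Conventional Monte Carlo neighborhood enrichment: count cross-label adjacencies
# and compare against a permutation null (synthetic points; two labels).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.uniform(0, 10, (300, 2))
labels = rng.integers(0, 2, 300)

pairs = cKDTree(points).query_pairs(r=0.8, output_type="ndarray")

def cross_count(lab):
    return int(np.sum(lab[pairs[:, 0]] != lab[pairs[:, 1]]))

observed = cross_count(labels)
null = np.array([cross_count(rng.permutation(labels)) for _ in range(1000)])
z = (observed - null.mean()) / null.std()
print(f"observed cross-label pairs: {observed}, permutation z-score: {z:.2f}")
```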
https://paperswithcode.com/paper/ehcube4p-learning-epistatic-patterns-through
|
2506.16921
| null | null |
EHCube4P: Learning Epistatic Patterns Through Hypercube Graph Convolution Neural Network for Protein Fitness Function Estimation
|
Understanding the relationship between protein sequences and their functions is fundamental to protein engineering, but this task is hindered by the combinatorially vast sequence space and the experimental noise inherent in fitness measurements. In this study, we present a novel framework that models the sequence landscape as a hypercube $H(k,2)$ and integrates wavelet-based signal denoising with a graph convolutional neural network (GCN) to predict protein fitness across rugged fitness landscapes. Using a dataset of 419 experimentally measured mutant sequences of the Tobacco 5-Epi-Aristolochene Synthase (TEAS) enzyme, we preprocess the fitness signals using a 1-D discrete wavelet transform with a Daubechies-3 basis to suppress experimental noise while preserving local epistatic patterns. Our model comprises two GCN layers, allowing for beyond pairwise aggregation, followed by a multi-layer perceptron (MLP). We show that our approach, EHCube4P, generalizes well across different enzyme activity datasets and effectively captures higher-order mutational interactions. Performance varies with the ruggedness of the fitness landscape, with smoother signals yielding higher test set $r^2$ scores. These results demonstrate that combining wavelet preprocessing with graph-based deep learning enhances the robustness and generalization of fitness prediction, particularly for sparse and noisy biological datasets. The approach provides a scalable and interpretable framework for protein fitness estimation applicable to a broad range of combinatorial biological systems.
| null |
https://arxiv.org/abs/2506.16921v1
|
https://arxiv.org/pdf/2506.16921v1.pdf
| null |
[
"Muhammad Daud",
"Philippe Charton",
"Cedric Damour",
"Jingbo Wang",
"Frederic Cadet"
] |
[
"Denoising"
] | 2025-06-20T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
},
{
"code_snippet_url": null,
"description": "A **Graph Convolutional Network**, or **GCN**, is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of [convolutional neural networks](https://paperswithcode.com/methods/category/convolutional-neural-networks) which operate directly on graphs. The choice of convolutional architecture is motivated via a localized first-order approximation of spectral graph convolutions. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes.",
"full_name": "Graph Convolutional Network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "GCN",
"source_title": "Semi-Supervised Classification with Graph Convolutional Networks",
"source_url": "http://arxiv.org/abs/1609.02907v4"
}
] |
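The EHCube4P record above preprocesses fitness signals with a 1-D Daubechies-3 discrete wavelet transform. The sketch below applies that denoising step with PyWavelets on a synthetic signal; the soft universal threshold is a common heuristic assumed here, not necessarily the paper's exact choice.

```python
# Daubechies-3 wavelet denoising of a noisy 1-D signal with PyWavelets,
# using the soft universal threshold (an assumed, common heuristic).
import numpy as np
import pywt

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + rng.normal(0, 0.3, 256)

coeffs = pywt.wavedec(noisy, "db3", level=4)          # 1-D DWT, Daubechies-3
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # MAD noise estimate
thresh = sigma * np.sqrt(2 * np.log(noisy.size))      # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db3")[: noisy.size]
print(f"noise std before: {np.std(noisy - clean):.3f}, after: {np.std(denoised - clean):.3f}")
```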
https://paperswithcode.com/paper/integrating-dynamical-systems-learning-with
|
2506.14782
| null | null |
Integrating Dynamical Systems Learning with Foundational Models: A Meta-Evolutionary AI Framework for Clinical Trials
|
Artificial intelligence (AI) has evolved into an ecosystem of specialized "species," each with unique strengths. We analyze two: DeepSeek-V3, a 671-billion-parameter Mixture of Experts large language model (LLM) exemplifying scale-driven generality, and NetraAI, a dynamical system-based framework engineered for stability and interpretability on small clinical trial datasets. We formalize NetraAI's foundations, combining contraction mappings, information geometry, and evolutionary algorithms to identify predictive patient cohorts. Features are embedded in a metric space and iteratively contracted toward stable attractors that define latent subgroups. A pseudo-temporal embedding and long-range memory enable exploration of higher-order feature interactions, while an internal evolutionary loop selects compact, explainable 2-4-variable bundles ("Personas"). To guide discovery, we introduce an LLM Strategist as a meta-evolutionary layer that observes Persona outputs, prioritizes promising variables, injects domain knowledge, and assesses robustness. This two-tier architecture mirrors the human scientific process: NetraAI as experimentalist, the LLM as theorist, forming a self-improving loop. In case studies (schizophrenia, depression, pancreatic cancer), NetraAI uncovered small, high-effect-size subpopulations that transformed weak baseline models (AUC ~0.50-0.68) into near-perfect classifiers using only a few features. We position NetraAI at the intersection of dynamical systems, information geometry, and evolutionary learning, aligned with emerging concept-level reasoning paradigms such as LeCun's Joint Embedding Predictive Architecture (JEPA). By prioritizing reliable, explainable knowledge, NetraAI offers a new generation of adaptive, self-reflective AI to accelerate clinical discovery.
| null |
https://arxiv.org/abs/2506.14782v2
|
https://arxiv.org/pdf/2506.14782v2.pdf
| null |
[
"Joseph Geraci",
"Bessi Qorri",
"Christian Cumbaa",
"Mike Tsay",
"Paul Leonczyk",
"Luca Pani"
] |
[
"Evolutionary Algorithms",
"Large Language Model",
"Mixture-of-Experts"
] | 2025-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-causally-predictable-outcomes-from
|
2506.16629
| null | null |
Learning Causally Predictable Outcomes from Psychiatric Longitudinal Data
|
Causal inference in longitudinal biomedical data remains a central challenge, especially in psychiatry, where symptom heterogeneity and latent confounding frequently undermine classical estimators. Most existing methods for treatment effect estimation presuppose a fixed outcome variable and address confounding through observed covariate adjustment. However, the assumption of unconfoundedness may not hold for a fixed outcome in practice. To address this foundational limitation, we directly optimize the outcome definition to maximize causal identifiability. Our DEBIAS (Durable Effects with Backdoor-Invariant Aggregated Symptoms) algorithm learns non-negative, clinically interpretable weights for outcome aggregation, maximizing durable treatment effects and empirically minimizing both observed and latent confounding by leveraging the time-limited direct effects of prior treatments in psychiatric longitudinal data. The algorithm also furnishes an empirically verifiable test for outcome unconfoundedness. DEBIAS consistently outperforms state-of-the-art methods in recovering causal effects for clinically interpretable composite outcomes across comprehensive experiments in depression and schizophrenia.
|
Causal inference in longitudinal biomedical data remains a central challenge, especially in psychiatry, where symptom heterogeneity and latent confounding frequently undermine classical estimators.
|
https://arxiv.org/abs/2506.16629v1
|
https://arxiv.org/pdf/2506.16629v1.pdf
| null |
[
"Eric V. Strobl"
] |
[
"Causal Inference"
] | 2025-06-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/shrec-and-pheona-using-large-language-models
|
2506.16359
| null | null |
SHREC and PHEONA: Using Large Language Models to Advance Next-Generation Computational Phenotyping
|
Objective: Computational phenotyping is a central informatics activity with resulting cohorts supporting a wide variety of applications. However, it is time-intensive because of manual data review, limited automation, and difficulties in adapting algorithms across sources. Since LLMs have demonstrated promising capabilities for text classification, comprehension, and generation, we posit they will perform well at repetitive manual review tasks traditionally performed by human experts. To support next-generation computational phenotyping methods, we developed SHREC, a framework for comprehensive integration of LLMs into end-to-end phenotyping pipelines. Materials and Methods: We applied and tested the ability of three lightweight LLMs (Gemma2 27 billion, Mistral Small 24 billion, and Phi-4 14 billion) to classify concepts and phenotype patients using previously developed phenotypes for ARF respiratory support therapies. Results: All models performed well on concept classification, with the best model (Mistral) achieving an AUROC of 0.896 across all relevant concepts. For phenotyping, models demonstrated near-perfect specificity for all phenotypes, and the top-performing model (Mistral) reached an average AUROC of 0.853 for single-therapy phenotypes, despite lower performance on multi-therapy phenotypes. Discussion: There are several advantages of LLMs that support their application to computational phenotyping, such as their ability to adapt to new tasks with prompt engineering alone and their ability to incorporate raw EHR data. Future steps to advance next-generation phenotyping methods include determining optimal strategies for integrating biomedical data, exploring how LLMs reason, and advancing generative model methods. Conclusion: Current lightweight LLMs can feasibly assist researchers with resource-intensive phenotyping tasks such as manual data review.
| null |
https://arxiv.org/abs/2506.16359v1
|
https://arxiv.org/pdf/2506.16359v1.pdf
| null |
[
"Sarah Pungitore",
"Shashank Yadav",
"Molly Douglas",
"Jarrod Mosier",
"Vignesh Subbian"
] |
[
"Computational Phenotyping",
"Prompt Engineering",
"Specificity",
"text-classification",
"Text Classification"
] | 2025-06-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/geometric-deep-learning-assists-protein
|
2506.16091
| null | null |
Geometric deep learning assists protein engineering. Opportunities and Challenges
|
Protein engineering is experiencing a paradigmatic shift through the integration of geometric deep learning (GDL) into computational design workflows. While traditional strategies, such as rational design and directed evolution, have enabled relevant advances, they remain limited by the complexity of sequence space and the cost of experimental validation. Geometric deep learning addresses these limitations by operating on non-Euclidean domains, capturing spatial, topological, and physicochemical features essential to protein function. This perspective outlines the current applications of GDL across stability prediction, functional annotation, molecular interaction modeling, and de novo protein design. We highlight recent methodological advances in model generalization, interpretability, and robustness, particularly under data-scarce conditions. A unified framework is proposed that integrates GDL with explainable AI and structure-based validation to support transparent, autonomous design. As GDL converges with generative modeling and high-throughput experimentation, it is emerging as a central technology in next-generation protein engineering and synthetic biology.
| null |
https://arxiv.org/abs/2506.16091v1
|
https://arxiv.org/pdf/2506.16091v1.pdf
| null |
[
"Julián García-Vinuesa",
"Jorge Rojas",
"Nicole Soto-García",
"Nicolás Martínez",
"Diego Alvarez-Saravia",
"Roberto Uribe-Paredes",
"Mehdi D. Davari",
"Carlos Conca",
"Juan A. Asenjo",
"David Medina-Ortiz"
] |
[
"Deep Learning",
"Protein Design"
] | 2025-06-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/bayesian-non-negative-matrix-factorization
|
2506.15855
| null | null |
Bayesian Non-Negative Matrix Factorization with Correlated Mutation Type Probabilities for Mutational Signatures
|
Somatic mutations, or alterations in DNA of a somatic cell, are key markers of cancer. In recent years, mutational signature analysis has become a prominent field of study within cancer research, commonly with Nonnegative Matrix Factorization (NMF) and Bayesian NMF. However, current methods assume independence across mutation types in the signatures matrix. This paper expands upon current Bayesian NMF methodologies by proposing novel methods that account for the dependencies between the mutation types. First, we implement the Bayesian NMF specification with a Multivariate Truncated Normal prior on the signatures matrix in order to model the covariance structure using external information, in our case estimated from the COSMIC signatures database. This model converges in fewer iterations, using MCMC, when compared to a model with independent Truncated Normal priors on elements of the signatures matrix and results in improvements in accuracy, especially on small sample sizes. In addition, we develop a hierarchical model that allows the covariance structure of the signatures matrix to be discovered rather than specified upfront, giving the algorithm more flexibility. This flexibility for the algorithm to learn the dependence structure of the signatures allows a better understanding of biological interactions and how these change across different types of cancer. The code for this project is contributed to an open-source R software package. Our work lays the groundwork for future research to incorporate dependency structure across mutation types in the signatures matrix and is also applicable to any use of NMF beyond just single-base substitution (SBS) mutational signatures.
| null |
https://arxiv.org/abs/2506.15855v1
|
https://arxiv.org/pdf/2506.15855v1.pdf
| null |
[
"Iris Lang",
"Jenna Landy",
"Giovanni Parmigiani"
] |
[] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
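As background for the Bayesian NMF record above, here is a minimal non-Bayesian baseline: Lee-Seung multiplicative updates factorizing a synthetic 96-mutation-type counts matrix into signatures and exposures. The paper's MCMC with a Multivariate Truncated Normal prior is not reproduced.

```python
# Plain NMF baseline with Lee-Seung multiplicative updates on a synthetic
# 96-mutation-type x 40-sample counts matrix (no Bayesian priors shown).
import numpy as np

rng = np.random.default_rng(3)
V = rng.poisson(5, (96, 40)).astype(float)   # mutation counts
k = 4                                        # number of signatures
W = rng.uniform(0.1, 1.0, (96, k))           # signatures matrix
H = rng.uniform(0.1, 1.0, (k, 40))           # exposures matrix

for _ in range(200):                         # Frobenius multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

scale = W.sum(axis=0, keepdims=True)         # normalize signatures to sum to 1
W, H = W / scale, H * scale.T
print(f"relative reconstruction error: {np.linalg.norm(V - W @ H) / np.linalg.norm(V):.3f}")
```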
https://paperswithcode.com/paper/universal-laboratory-model-prognosis-of
|
2506.15330
| null | null |
Universal Laboratory Model: prognosis of abnormal clinical outcomes based on routine tests
|
Clinical laboratory results are ubiquitous in diagnosis making. Predicting abnormal values of tests that were not prescribed, based on the results of performed tests, is an intriguing prospect, as it would make early diagnosis available to everyone. A special place is held by the Complete Blood Count (CBC) test, as it is the most widely used clinical procedure. Combining routine biochemical panels with CBC presents a set of test-value pairs that varies from patient to patient, or, in common settings, a table with missing values. Here we formulate the tabular modeling problem as a set translation problem where the source set comprises pairs of a GPT-like label column embedding and its corresponding value, while the target set consists of embeddings of the same type only. The proposed approach can effectively deal with missing values without implicitly estimating them and bridges the world of LLMs with the tabular domain. Applying this method to clinical laboratory data, we achieve an improvement of up to 8% AUC for joint predictions of high uric acid, glucose, and cholesterol and low ferritin levels.
| null |
https://arxiv.org/abs/2506.15330v1
|
https://arxiv.org/pdf/2506.15330v1.pdf
| null |
[
"Pavel Karpov",
"Ilya Petrenkov",
"Ruslan Raiman"
] |
[
"CBC TEST",
"Missing Values",
"Prognosis"
] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/disprotedit-exploring-disentangled
|
2506.14853
| null | null |
DisProtEdit: Exploring Disentangled Representations for Multi-Attribute Protein Editing
|
We introduce DisProtEdit, a controllable protein editing framework that leverages dual-channel natural language supervision to learn disentangled representations of structural and functional properties. Unlike prior approaches that rely on joint holistic embeddings, DisProtEdit explicitly separates semantic factors, enabling modular and interpretable control. To support this, we construct SwissProtDis, a large-scale multimodal dataset where each protein sequence is paired with two textual descriptions, one for structure and one for function, automatically decomposed using a large language model. DisProtEdit aligns protein and text embeddings using alignment and uniformity objectives, while a disentanglement loss promotes independence between structural and functional semantics. At inference time, protein editing is performed by modifying one or both text inputs and decoding from the updated latent representation. Experiments on protein editing and representation learning benchmarks demonstrate that DisProtEdit performs competitively with existing methods while providing improved interpretability and controllability. On a newly constructed multi-attribute editing benchmark, the model achieves a both-hit success rate of up to 61.7%, highlighting its effectiveness in coordinating simultaneous structural and functional edits.
| null |
https://arxiv.org/abs/2506.14853v1
|
https://arxiv.org/pdf/2506.14853v1.pdf
| null |
[
"Max Ku",
"Sun Sun",
"Hongyu Guo",
"Wenhu Chen"
] |
[
"Attribute",
"Disentanglement",
"Large Language Model",
"Representation Learning"
] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/leveraging-transfer-learning-and-user
|
2506.14120
| null | null |
Leveraging Transfer Learning and User-Specific Updates for Rapid Training of BCI Decoders
|
Lengthy subject- or session-specific data acquisition and calibration remain a key barrier to deploying electroencephalography (EEG)-based brain-computer interfaces (BCIs) outside the laboratory. Previous work has shown that cross subject, cross-session invariant features exist in EEG. We propose a transfer learning pipeline based on a two-layer convolutional neural network (CNN) that leverages these invariants to reduce the burden of data acquisition and calibration. A baseline model is trained on EEG data from five able-bodied individuals and then rapidly updated with a small amount of data from a sixth, holdout subject. The remaining holdout data were used to test the performance of both the baseline and updated models. We repeated this procedure via a leave-one-subject out (LOSO) validation framework. Averaged over six LOSO folds, the updated model improved classification accuracy upon the baseline by 10.0, 18.8, and 22.1 percentage points on two binary and one ternary classification tasks, respectively. These results demonstrate that decoding accuracy can be substantially improved with minimal subject-specific data. They also indicate that a CNN-based decoder can be personalized rapidly, enabling near plug-and-play BCI functionality for neurorehabilitation and other time-critical EEG applications.
| null |
https://arxiv.org/abs/2506.14120v1
|
https://arxiv.org/pdf/2506.14120v1.pdf
| null |
[
"Ziheng Chen",
"Po T. Wang",
"Mina Ibrahim",
"Shivali Baveja",
"Rong Mu",
"An H. Do",
"Zoran Nenadic"
] |
[
"Decoder",
"EEG",
"Transfer Learning"
] | 2025-06-17T00:00:00 | null | null | null | null |
[] |
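The BCI record above pretrains a two-layer CNN on pooled subjects and then updates it with a little data from a held-out subject. The PyTorch sketch below shows that adaptation step on random tensors; the architecture, input shape, and learning rate are assumptions, not the authors' exact network.

```python
# Two-layer CNN for EEG plus one subject-specific adaptation step in PyTorch
# (architecture, shapes, and learning rate are illustrative assumptions).
import torch
import torch.nn as nn

class TwoLayerEEGNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, (1, 25), padding=(0, 12)), nn.ELU(),  # temporal conv
            nn.Conv2d(16, 32, (n_channels, 1)), nn.ELU(),          # spatial conv
            nn.AdaptiveAvgPool2d((1, 8)), nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                     # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x))

model = TwoLayerEEGNet()                      # assume baseline training happened
x_cal = torch.randn(16, 1, 8, 250)            # small calibration set, new subject
y_cal = torch.randint(0, 2, (16,))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # low LR for fine-tuning
loss = nn.functional.cross_entropy(model(x_cal), y_cal)
opt.zero_grad(); loss.backward(); opt.step()
print(f"adaptation step done, loss = {loss.item():.3f}")
```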
https://paperswithcode.com/paper/inhibiting-alzheimer-s-disease-by-targeting
|
2506.14052
| null | null |
Inhibiting Alzheimer's Disease by Targeting Aggregation of Beta-Amyloid
|
Alzheimer's disease is characterized by dangerous amyloid plaques formed by deposits of aggregates of the protein Beta-Amyloid in the brain. The specific amino acid sequence that is responsible for the aggregates of Beta-Amyloid is lys-leu-val-phe-phe (KLVFF). KLVFF aggregation inhibitors, which we design in this paper, prevent KLVFF from binding with itself to form oligomers or fibrils (and eventually plaques) that cause neuronal death. Our binder-blocker peptides are designed such that, on one side, they bind strongly to KLVFF, and on the other side, they disrupt critical interactions, thus preventing aggregation. Our methods use optimization techniques and molecular simulations and identify 10 candidate sequences for trial out of the 3.2 million possible sequences. This approach for inhibitor identification can be generalized to other diseases characterized by protein aggregation, such as Parkinson's, Huntington's, and prion diseases.
| null |
https://arxiv.org/abs/2506.14052v1
|
https://arxiv.org/pdf/2506.14052v1.pdf
| null |
[
"Ananya Joshi",
"George Khoury",
"Christodoulas Floudas"
] |
[] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-11000-study-open-access-dataset-of
|
2506.14021
| null | null |
An 11,000-Study Open-Access Dataset of Longitudinal Magnetic Resonance Images of Brain Metastases
|
Brain metastases are a common complication of systemic cancer, affecting over 20% of patients with primary malignancies. Longitudinal magnetic resonance imaging (MRI) is essential for diagnosing patients, tracking disease progression, assessing therapeutic response, and guiding treatment selection. However, the manual review of longitudinal imaging is time-intensive, especially for patients with multifocal disease. Artificial intelligence (AI) offers opportunities to streamline image evaluation, but developing robust AI models requires comprehensive training data representative of real-world imaging studies. Thus, there is an urgent necessity for a large dataset with heterogeneity in imaging protocols and disease presentation. To address this, we present an open-access dataset of 11,884 longitudinal brain MRI studies from 1,430 patients with clinically confirmed brain metastases, paired with clinical and image metadata. The provided dataset will facilitate the development of AI models to assist in the long-term management of patients with brain metastasis.
| null |
https://arxiv.org/abs/2506.14021v1
|
https://arxiv.org/pdf/2506.14021v1.pdf
| null |
[
"Saahil Chadha",
"David Weiss",
"Anastasia Janas",
"Divya Ramakrishnan",
"Thomas Hager",
"Klara Osenberg",
"Klara Willms",
"Joshua Zhu",
"Veronica Chiang",
"Spyridon Bakas",
"Nazanin Maleki",
"Durga V. Sritharan",
"Sven Schoenherr",
"Malte Westerhoff",
"Matthew Zawalich",
"Melissa Davis",
"Ajay Malhotra",
"Khaled Bousabarah",
"Cornelius Deuschl",
"MingDe Lin",
"Sanjay Aneja",
"Mariam S. Aboian"
] |
[
"Management"
] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/beyond-black-boxes-enhancing-interpretability
|
2506.14014
| null | null |
Beyond Black Boxes: Enhancing Interpretability of Transformers Trained on Neural Data
|
Transformer models have become state-of-the-art in decoding stimuli and behavior from neural activity, significantly advancing neuroscience research. Yet greater transparency in their decision-making processes would substantially enhance their utility in scientific and clinical contexts. Sparse autoencoders (SAEs) offer a promising solution by producing hidden units that respond selectively to specific variables, enhancing interpretability. Here, we introduce SAEs into a neural decoding framework by augmenting a transformer trained to predict visual stimuli from calcium imaging in the mouse visual cortex. The enhancement of the transformer model with an SAE preserved its original performance while yielding hidden units that selectively responded to interpretable features, such as stimulus orientation and genetic background. Furthermore, ablating units associated with a given variable impaired the model's ability to process that variable, revealing how specific internal representations support downstream computations. Together, these results demonstrate that integrating SAEs with transformers combines the power of modern deep learning with the interpretability essential for scientific understanding and clinical translation.
| null |
https://arxiv.org/abs/2506.14014v1
|
https://arxiv.org/pdf/2506.14014v1.pdf
| null |
[
"Laurence Freeman",
"Philip Shamash",
"Vinam Arora",
"Caswell Barry",
"Tiago Branco",
"Eva Dyer"
] |
[
"Decision Making"
] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/blastdiffusion-a-latent-diffusion-model-for
|
2506.13843
| null | null |
BlastDiffusion: A Latent Diffusion Model for Generating Synthetic Embryo Images to Address Data Scarcity in In Vitro Fertilization
|
Accurately identifying oocytes that progress to the blastocyst stage is crucial in reproductive medicine, but the limited availability of annotated high-quality embryo images presents challenges for developing automated diagnostic tools. To address this, we propose BlastDiffusion, a generative model based on Latent Diffusion Models (LDMs) that synthesizes realistic oocyte images conditioned on developmental outcomes. Our approach utilizes a pretrained Variational Autoencoder (VAE) for latent space representation, combined with a diffusion process to generate images that distinguish between oocytes that reach the blastocyst stage and those that do not. When compared to Blastocyst-GAN, a GAN-based model we trained for this task, BlastDiffusion achieves superior performance, with a global Frechet Inception Distance (FID) of 94.32, significantly better than Blastocyst-GAN's FID of 232.73. Additionally, our model shows improvements in perceptual (LPIPS) and structural (SSIM) similarity to real oocyte images. Qualitative analysis further demonstrates that BlastDiffusion captures key morphological differences linked to developmental outcomes. These results highlight the potential of diffusion models in reproductive medicine, offering an effective tool for data augmentation and automated embryo assessment.
| null |
https://arxiv.org/abs/2506.13843v1
|
https://arxiv.org/pdf/2506.13843v1.pdf
| null |
[
"Alejandro Golfe",
"Natalia P. García-de-la-puente",
"Adrián Colomer",
"Valery Naranjo"
] |
[
"Data Augmentation",
"Diagnostic",
"SSIM"
] | 2025-06-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |